Feb 13 19:48:41.177686 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:48:41.177730 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 19:48:41.177755 kernel: KASLR disabled due to lack of seed
Feb 13 19:48:41.177771 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:48:41.177787 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Feb 13 19:48:41.177803 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:48:41.177820 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:48:41.177836 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:48:41.177852 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:48:41.177868 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:48:41.177888 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:48:41.177904 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:48:41.177919 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:48:41.177935 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:48:41.177953 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:48:41.177974 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:48:41.177991 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:48:41.178008 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:48:41.178024 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:48:41.178040 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:48:41.178057 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:48:41.178073 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:48:41.178121 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:48:41.178140 kernel: Zone ranges:
Feb 13 19:48:41.178157 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:48:41.178173 kernel: DMA32 empty
Feb 13 19:48:41.178195 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:48:41.178212 kernel: Movable zone start for each node
Feb 13 19:48:41.178228 kernel: Early memory node ranges
Feb 13 19:48:41.178244 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:48:41.178261 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:48:41.178277 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:48:41.178293 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:48:41.178310 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:48:41.178326 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:48:41.178342 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:48:41.178359 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:48:41.178375 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:48:41.178396 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:48:41.178413 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:48:41.178437 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:48:41.178454 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:48:41.178472 kernel: psci: Trusted OS migration not required
Feb 13 19:48:41.178493 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:48:41.178511 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:48:41.178528 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:48:41.178546 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:48:41.178563 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:48:41.178580 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:48:41.178597 kernel: CPU features: detected: Spectre-v2
Feb 13 19:48:41.178614 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:48:41.178631 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:48:41.178648 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:48:41.178666 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:48:41.178687 kernel: alternatives: applying boot alternatives
Feb 13 19:48:41.178707 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:48:41.178725 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:48:41.178743 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:48:41.178760 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:48:41.178777 kernel: Fallback order for Node 0: 0
Feb 13 19:48:41.178795 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:48:41.178812 kernel: Policy zone: Normal
Feb 13 19:48:41.178829 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:48:41.178846 kernel: software IO TLB: area num 2.
Feb 13 19:48:41.178863 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:48:41.178886 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Feb 13 19:48:41.178914 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:48:41.178937 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:48:41.178956 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:48:41.178974 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:48:41.178991 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:48:41.179009 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:48:41.179026 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:48:41.179043 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:48:41.179061 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:48:41.181545 kernel: GICv3: 96 SPIs implemented
Feb 13 19:48:41.181597 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:48:41.181617 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:48:41.181635 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:48:41.181653 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:48:41.181670 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:48:41.181688 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:48:41.181706 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:48:41.181724 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:48:41.181742 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:48:41.181759 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:48:41.181777 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:48:41.181795 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:48:41.181818 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:48:41.181836 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:48:41.181853 kernel: Console: colour dummy device 80x25
Feb 13 19:48:41.181872 kernel: printk: console [tty1] enabled
Feb 13 19:48:41.181890 kernel: ACPI: Core revision 20230628
Feb 13 19:48:41.181908 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:48:41.181926 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:48:41.181944 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:48:41.181962 kernel: landlock: Up and running.
Feb 13 19:48:41.181984 kernel: SELinux: Initializing.
Feb 13 19:48:41.182002 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:48:41.182020 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:48:41.182038 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:48:41.182056 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:48:41.182074 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:48:41.182118 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:48:41.182142 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:48:41.182160 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:48:41.182185 kernel: Remapping and enabling EFI services.
Feb 13 19:48:41.182203 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:48:41.182221 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:48:41.182238 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:48:41.182256 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:48:41.182274 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:48:41.182292 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:48:41.182309 kernel: SMP: Total of 2 processors activated.
Feb 13 19:48:41.182327 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:48:41.182349 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:48:41.182367 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:48:41.182385 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:48:41.182415 kernel: alternatives: applying system-wide alternatives
Feb 13 19:48:41.182438 kernel: devtmpfs: initialized
Feb 13 19:48:41.182457 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:48:41.182476 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:48:41.182494 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:48:41.182512 kernel: SMBIOS 3.0.0 present.
Feb 13 19:48:41.182530 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:48:41.182554 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:48:41.182572 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:48:41.182591 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:48:41.182610 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:48:41.182628 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:48:41.182646 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Feb 13 19:48:41.182665 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:48:41.182688 kernel: cpuidle: using governor menu
Feb 13 19:48:41.182706 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:48:41.182725 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:48:41.182743 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:48:41.182762 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:48:41.182780 kernel: Modules: 17520 pages in range for non-PLT usage
Feb 13 19:48:41.182798 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 19:48:41.182817 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:48:41.182836 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:48:41.182859 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:48:41.182878 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:48:41.182896 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:48:41.182915 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:48:41.182933 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:48:41.182951 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:48:41.182970 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:48:41.182988 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:48:41.183007 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:48:41.183029 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:48:41.183048 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:48:41.183067 kernel: ACPI: Interpreter enabled
Feb 13 19:48:41.184218 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:48:41.184249 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:48:41.184269 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:48:41.184594 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:48:41.184811 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:48:41.185026 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:48:41.185266 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:48:41.185475 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:48:41.185502 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:48:41.185522 kernel: acpiphp: Slot [1] registered
Feb 13 19:48:41.185540 kernel: acpiphp: Slot [2] registered
Feb 13 19:48:41.185559 kernel: acpiphp: Slot [3] registered
Feb 13 19:48:41.185577 kernel: acpiphp: Slot [4] registered
Feb 13 19:48:41.185603 kernel: acpiphp: Slot [5] registered
Feb 13 19:48:41.185622 kernel: acpiphp: Slot [6] registered
Feb 13 19:48:41.185641 kernel: acpiphp: Slot [7] registered
Feb 13 19:48:41.185659 kernel: acpiphp: Slot [8] registered
Feb 13 19:48:41.185678 kernel: acpiphp: Slot [9] registered
Feb 13 19:48:41.185696 kernel: acpiphp: Slot [10] registered
Feb 13 19:48:41.185715 kernel: acpiphp: Slot [11] registered
Feb 13 19:48:41.185733 kernel: acpiphp: Slot [12] registered
Feb 13 19:48:41.185752 kernel: acpiphp: Slot [13] registered
Feb 13 19:48:41.185770 kernel: acpiphp: Slot [14] registered
Feb 13 19:48:41.185794 kernel: acpiphp: Slot [15] registered
Feb 13 19:48:41.185813 kernel: acpiphp: Slot [16] registered
Feb 13 19:48:41.185831 kernel: acpiphp: Slot [17] registered
Feb 13 19:48:41.185849 kernel: acpiphp: Slot [18] registered
Feb 13 19:48:41.185867 kernel: acpiphp: Slot [19] registered
Feb 13 19:48:41.185885 kernel: acpiphp: Slot [20] registered
Feb 13 19:48:41.185904 kernel: acpiphp: Slot [21] registered
Feb 13 19:48:41.185923 kernel: acpiphp: Slot [22] registered
Feb 13 19:48:41.185941 kernel: acpiphp: Slot [23] registered
Feb 13 19:48:41.185964 kernel: acpiphp: Slot [24] registered
Feb 13 19:48:41.185983 kernel: acpiphp: Slot [25] registered
Feb 13 19:48:41.186001 kernel: acpiphp: Slot [26] registered
Feb 13 19:48:41.186019 kernel: acpiphp: Slot [27] registered
Feb 13 19:48:41.186038 kernel: acpiphp: Slot [28] registered
Feb 13 19:48:41.186056 kernel: acpiphp: Slot [29] registered
Feb 13 19:48:41.186074 kernel: acpiphp: Slot [30] registered
Feb 13 19:48:41.188462 kernel: acpiphp: Slot [31] registered
Feb 13 19:48:41.188485 kernel: PCI host bridge to bus 0000:00
Feb 13 19:48:41.188797 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:48:41.189007 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:48:41.189252 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:48:41.189444 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:48:41.189692 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:48:41.189919 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:48:41.191256 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:48:41.191573 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:48:41.191795 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:48:41.192013 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:48:41.193329 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:48:41.193544 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:48:41.193751 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:48:41.193961 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:48:41.196306 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:48:41.196553 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:48:41.196763 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:48:41.196983 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:48:41.197230 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:48:41.197452 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:48:41.197660 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:48:41.200243 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:48:41.200477 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:48:41.200505 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:48:41.200525 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:48:41.200544 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:48:41.200562 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:48:41.200581 kernel: iommu: Default domain type: Translated
Feb 13 19:48:41.200600 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:48:41.200629 kernel: efivars: Registered efivars operations
Feb 13 19:48:41.200647 kernel: vgaarb: loaded
Feb 13 19:48:41.200666 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:48:41.200685 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:48:41.200703 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:48:41.200721 kernel: pnp: PnP ACPI init
Feb 13 19:48:41.200940 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:48:41.200968 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:48:41.200994 kernel: NET: Registered PF_INET protocol family
Feb 13 19:48:41.201013 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:48:41.201032 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:48:41.201051 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:48:41.201069 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:48:41.201110 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:48:41.201132 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:48:41.201151 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:48:41.201170 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:48:41.201195 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:48:41.201213 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:48:41.201232 kernel: kvm [1]: HYP mode not available
Feb 13 19:48:41.201250 kernel: Initialise system trusted keyrings
Feb 13 19:48:41.201269 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:48:41.201287 kernel: Key type asymmetric registered
Feb 13 19:48:41.201305 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:48:41.201324 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:48:41.201343 kernel: io scheduler mq-deadline registered
Feb 13 19:48:41.201366 kernel: io scheduler kyber registered
Feb 13 19:48:41.201385 kernel: io scheduler bfq registered
Feb 13 19:48:41.201597 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:48:41.201626 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:48:41.201645 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:48:41.201663 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:48:41.201682 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:48:41.201701 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:48:41.201725 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:48:41.201936 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:48:41.201963 kernel: printk: console [ttyS0] disabled
Feb 13 19:48:41.201982 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:48:41.202001 kernel: printk: console [ttyS0] enabled
Feb 13 19:48:41.202020 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:48:41.202038 kernel: thunder_xcv, ver 1.0
Feb 13 19:48:41.202056 kernel: thunder_bgx, ver 1.0
Feb 13 19:48:41.202074 kernel: nicpf, ver 1.0
Feb 13 19:48:41.202171 kernel: nicvf, ver 1.0
Feb 13 19:48:41.202383 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:48:41.203268 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:48:40 UTC (1739476120)
Feb 13 19:48:41.203302 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:48:41.203322 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:48:41.203341 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:48:41.203360 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:48:41.203379 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:48:41.203405 kernel: Segment Routing with IPv6
Feb 13 19:48:41.203424 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:48:41.203442 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:48:41.203461 kernel: Key type dns_resolver registered
Feb 13 19:48:41.203496 kernel: registered taskstats version 1
Feb 13 19:48:41.203518 kernel: Loading compiled-in X.509 certificates
Feb 13 19:48:41.203537 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 19:48:41.203555 kernel: Key type .fscrypt registered
Feb 13 19:48:41.203573 kernel: Key type fscrypt-provisioning registered
Feb 13 19:48:41.203597 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:48:41.203616 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:48:41.203635 kernel: ima: No architecture policies found
Feb 13 19:48:41.203653 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:48:41.203672 kernel: clk: Disabling unused clocks
Feb 13 19:48:41.203690 kernel: Freeing unused kernel memory: 39360K
Feb 13 19:48:41.203708 kernel: Run /init as init process
Feb 13 19:48:41.203727 kernel: with arguments:
Feb 13 19:48:41.203745 kernel: /init
Feb 13 19:48:41.203763 kernel: with environment:
Feb 13 19:48:41.203785 kernel: HOME=/
Feb 13 19:48:41.203804 kernel: TERM=linux
Feb 13 19:48:41.203822 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:48:41.203845 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:48:41.203869 systemd[1]: Detected virtualization amazon.
Feb 13 19:48:41.203890 systemd[1]: Detected architecture arm64.
Feb 13 19:48:41.203909 systemd[1]: Running in initrd.
Feb 13 19:48:41.203934 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:48:41.203954 systemd[1]: Hostname set to .
Feb 13 19:48:41.203975 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:48:41.203995 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:48:41.204015 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:48:41.204036 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:48:41.204057 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:48:41.204095 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:48:41.204127 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:48:41.204149 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:48:41.204173 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:48:41.204194 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:48:41.204214 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:48:41.204235 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:48:41.204255 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:48:41.204280 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:48:41.204300 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:48:41.204320 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:48:41.204341 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:48:41.204361 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:48:41.204382 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:48:41.204402 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:48:41.204422 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:48:41.204442 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:48:41.204468 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:48:41.204488 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:48:41.204508 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:48:41.204528 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:48:41.204548 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:48:41.204568 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:48:41.204589 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:48:41.204609 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:48:41.204634 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:48:41.204654 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:48:41.204674 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:48:41.204729 systemd-journald[250]: Collecting audit messages is disabled.
Feb 13 19:48:41.204779 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:48:41.204802 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:48:41.204824 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:48:41.204844 systemd-journald[250]: Journal started
Feb 13 19:48:41.204886 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2c604f2c91fbc57501385fd3109dde) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:48:41.168761 systemd-modules-load[251]: Inserted module 'overlay'
Feb 13 19:48:41.215779 kernel: Bridge firewalling registered
Feb 13 19:48:41.215816 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:48:41.208399 systemd-modules-load[251]: Inserted module 'br_netfilter'
Feb 13 19:48:41.216907 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:48:41.222770 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:41.233292 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:48:41.252364 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:48:41.259372 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:48:41.262480 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:48:41.265976 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:48:41.301604 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:48:41.312896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:48:41.320388 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:48:41.331593 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:48:41.340401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:48:41.354510 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:48:41.400281 dracut-cmdline[291]: dracut-dracut-053
Feb 13 19:48:41.405888 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:48:41.419330 systemd-resolved[287]: Positive Trust Anchors:
Feb 13 19:48:41.419366 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:48:41.419430 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:48:41.539121 kernel: SCSI subsystem initialized
Feb 13 19:48:41.549111 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:48:41.560120 kernel: iscsi: registered transport (tcp)
Feb 13 19:48:41.582120 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:48:41.582193 kernel: QLogic iSCSI HBA Driver
Feb 13 19:48:41.668123 kernel: random: crng init done
Feb 13 19:48:41.668305 systemd-resolved[287]: Defaulting to hostname 'linux'.
Feb 13 19:48:41.671805 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:48:41.675685 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:48:41.698206 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:48:41.710430 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:48:41.744679 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:48:41.744758 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:48:41.744786 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:48:41.812143 kernel: raid6: neonx8 gen() 6750 MB/s
Feb 13 19:48:41.829116 kernel: raid6: neonx4 gen() 6583 MB/s
Feb 13 19:48:41.846112 kernel: raid6: neonx2 gen() 5490 MB/s
Feb 13 19:48:41.863114 kernel: raid6: neonx1 gen() 3982 MB/s
Feb 13 19:48:41.880132 kernel: raid6: int64x8 gen() 3818 MB/s
Feb 13 19:48:41.897131 kernel: raid6: int64x4 gen() 3720 MB/s
Feb 13 19:48:41.914131 kernel: raid6: int64x2 gen() 3612 MB/s
Feb 13 19:48:41.931919 kernel: raid6: int64x1 gen() 2767 MB/s
Feb 13 19:48:41.931990 kernel: raid6: using algorithm neonx8 gen() 6750 MB/s
Feb 13 19:48:41.950151 kernel: raid6: .... xor() 4828 MB/s, rmw enabled
Feb 13 19:48:41.950231 kernel: raid6: using neon recovery algorithm
Feb 13 19:48:41.958131 kernel: xor: measuring software checksum speed
Feb 13 19:48:41.960514 kernel: 8regs : 10177 MB/sec
Feb 13 19:48:41.960583 kernel: 32regs : 11457 MB/sec
Feb 13 19:48:41.961726 kernel: arm64_neon : 9546 MB/sec
Feb 13 19:48:41.961783 kernel: xor: using function: 32regs (11457 MB/sec)
Feb 13 19:48:42.050140 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:48:42.071823 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:48:42.091404 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:48:42.133948 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Feb 13 19:48:42.143649 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:48:42.157688 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:48:42.194866 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Feb 13 19:48:42.257411 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:48:42.267425 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:48:42.399313 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:48:42.411849 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:48:42.456164 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:48:42.459306 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:48:42.461751 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:48:42.464112 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:48:42.482648 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:48:42.524535 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:48:42.598181 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:48:42.598253 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 19:48:42.629530 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:48:42.629790 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:48:42.630028 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:36:be:9f:1b:31
Feb 13 19:48:42.612205 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:48:42.612432 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:48:42.615149 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:48:42.617445 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:48:42.617771 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:42.620044 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:48:42.633270 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:48:42.656452 (udev-worker)[518]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:48:42.691545 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 19:48:42.691621 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:48:42.702132 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:48:42.705482 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:42.714700 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:48:42.714738 kernel: GPT:9289727 != 16777215
Feb 13 19:48:42.714764 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:48:42.716538 kernel: GPT:9289727 != 16777215
Feb 13 19:48:42.716594 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:48:42.717455 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:48:42.718475 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:48:42.756660 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:48:42.839193 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (529)
Feb 13 19:48:42.858649 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (527)
Feb 13 19:48:42.894930 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:48:42.960861 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:48:42.978478 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:48:43.005050 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:48:43.009966 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:48:43.032475 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:48:43.045431 disk-uuid[662]: Primary Header is updated.
Feb 13 19:48:43.045431 disk-uuid[662]: Secondary Entries is updated.
Feb 13 19:48:43.045431 disk-uuid[662]: Secondary Header is updated.
Feb 13 19:48:43.055108 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:48:43.063123 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:48:43.073108 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:48:44.074112 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:48:44.074476 disk-uuid[663]: The operation has completed successfully.
Feb 13 19:48:44.253845 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:48:44.255357 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:48:44.305442 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:48:44.315407 sh[1004]: Success
Feb 13 19:48:44.340159 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:48:44.450707 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:48:44.471277 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:48:44.480164 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:48:44.509262 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 19:48:44.509336 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:48:44.511040 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:48:44.511076 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:48:44.512293 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:48:44.623115 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:48:44.650276 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:48:44.654171 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:48:44.665362 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:48:44.670346 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:48:44.705785 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:44.705876 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:48:44.707249 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:48:44.715117 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:48:44.735430 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:48:44.737376 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:44.749242 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:48:44.760443 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:48:44.855869 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:48:44.867411 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:48:44.924526 systemd-networkd[1197]: lo: Link UP
Feb 13 19:48:44.924548 systemd-networkd[1197]: lo: Gained carrier
Feb 13 19:48:44.929491 systemd-networkd[1197]: Enumeration completed
Feb 13 19:48:44.929645 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:48:44.931927 systemd[1]: Reached target network.target - Network.
Feb 13 19:48:44.937215 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:48:44.937222 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:48:44.946688 systemd-networkd[1197]: eth0: Link UP
Feb 13 19:48:44.946702 systemd-networkd[1197]: eth0: Gained carrier
Feb 13 19:48:44.946719 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:48:44.961172 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.20.210/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:48:45.180637 ignition[1119]: Ignition 2.19.0
Feb 13 19:48:45.181181 ignition[1119]: Stage: fetch-offline
Feb 13 19:48:45.181725 ignition[1119]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:45.186069 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:48:45.181749 ignition[1119]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:45.182264 ignition[1119]: Ignition finished successfully
Feb 13 19:48:45.202371 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:48:45.224274 ignition[1207]: Ignition 2.19.0
Feb 13 19:48:45.224301 ignition[1207]: Stage: fetch
Feb 13 19:48:45.225918 ignition[1207]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:45.225945 ignition[1207]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:45.226363 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:45.245989 ignition[1207]: PUT result: OK
Feb 13 19:48:45.249121 ignition[1207]: parsed url from cmdline: ""
Feb 13 19:48:45.249146 ignition[1207]: no config URL provided
Feb 13 19:48:45.249162 ignition[1207]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:48:45.249189 ignition[1207]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:48:45.249231 ignition[1207]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:45.256431 ignition[1207]: PUT result: OK
Feb 13 19:48:45.256974 ignition[1207]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:48:45.259804 ignition[1207]: GET result: OK
Feb 13 19:48:45.260783 ignition[1207]: parsing config with SHA512: e85b1ac4cdc375a7223ee1b8fc4078bea450e4814f28939ced49b1a9e0a2fb021ca726a7067754870fdfb5e8a814f7d3d9fec9fce7832a112f56aca382bb8765
Feb 13 19:48:45.270600 unknown[1207]: fetched base config from "system"
Feb 13 19:48:45.270631 unknown[1207]: fetched base config from "system"
Feb 13 19:48:45.270646 unknown[1207]: fetched user config from "aws"
Feb 13 19:48:45.276310 ignition[1207]: fetch: fetch complete
Feb 13 19:48:45.276325 ignition[1207]: fetch: fetch passed
Feb 13 19:48:45.276455 ignition[1207]: Ignition finished successfully
Feb 13 19:48:45.286069 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:48:45.299486 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:48:45.327541 ignition[1213]: Ignition 2.19.0
Feb 13 19:48:45.327591 ignition[1213]: Stage: kargs
Feb 13 19:48:45.330420 ignition[1213]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:45.330507 ignition[1213]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:45.331547 ignition[1213]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:45.336678 ignition[1213]: PUT result: OK
Feb 13 19:48:45.340922 ignition[1213]: kargs: kargs passed
Feb 13 19:48:45.341543 ignition[1213]: Ignition finished successfully
Feb 13 19:48:45.345819 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:48:45.362481 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:48:45.387281 ignition[1219]: Ignition 2.19.0
Feb 13 19:48:45.387307 ignition[1219]: Stage: disks
Feb 13 19:48:45.388012 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:45.388038 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:45.388215 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:45.392780 ignition[1219]: PUT result: OK
Feb 13 19:48:45.400987 ignition[1219]: disks: disks passed
Feb 13 19:48:45.401113 ignition[1219]: Ignition finished successfully
Feb 13 19:48:45.404812 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:48:45.409169 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:48:45.413276 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:48:45.415550 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:48:45.417832 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:48:45.425218 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:48:45.435485 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:48:45.473489 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:48:45.481955 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:48:45.491495 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:48:45.569130 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 19:48:45.569656 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:48:45.573195 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:48:45.588292 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:48:45.596068 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:48:45.597745 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:48:45.597821 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:48:45.597868 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:48:45.642257 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1247)
Feb 13 19:48:45.642455 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:45.642483 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:48:45.642510 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:48:45.629153 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:48:45.651344 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:48:45.660128 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:48:45.662455 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:48:46.014241 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:48:46.042460 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:48:46.062584 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:48:46.071241 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:48:46.088218 systemd-networkd[1197]: eth0: Gained IPv6LL
Feb 13 19:48:46.387564 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:48:46.397281 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:48:46.412449 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:48:46.428157 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:48:46.430204 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:46.464176 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:48:46.476129 ignition[1360]: INFO : Ignition 2.19.0
Feb 13 19:48:46.476129 ignition[1360]: INFO : Stage: mount
Feb 13 19:48:46.479547 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:46.479547 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:46.479547 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:46.479547 ignition[1360]: INFO : PUT result: OK
Feb 13 19:48:46.489924 ignition[1360]: INFO : mount: mount passed
Feb 13 19:48:46.491685 ignition[1360]: INFO : Ignition finished successfully
Feb 13 19:48:46.494355 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:48:46.510390 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:48:46.581025 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:48:46.603116 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1372)
Feb 13 19:48:46.606568 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:46.606606 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:48:46.606633 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:48:46.613120 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:48:46.616367 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:48:46.661532 ignition[1389]: INFO : Ignition 2.19.0
Feb 13 19:48:46.661532 ignition[1389]: INFO : Stage: files
Feb 13 19:48:46.665142 ignition[1389]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:46.665142 ignition[1389]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:46.665142 ignition[1389]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:46.671478 ignition[1389]: INFO : PUT result: OK
Feb 13 19:48:46.676744 ignition[1389]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:48:46.679839 ignition[1389]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:48:46.679839 ignition[1389]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:48:46.708674 ignition[1389]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:48:46.711811 ignition[1389]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:48:46.714591 unknown[1389]: wrote ssh authorized keys file for user: core
Feb 13 19:48:46.718468 ignition[1389]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:48:46.730434 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:48:46.735196 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:48:46.735196 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:48:46.735196 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:48:46.843481 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:48:47.027768 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:48:47.031425 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:48:47.031425 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:48:47.485252 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 13 19:48:47.613524 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:48:47.616879 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:48:47.616879 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:48:47.616879 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:48:47.616879 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:48:47.616879 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:48:47.616879 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:48:47.616879 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:48:47.616879 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:48:47.616879 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:48:47.616879 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:48:47.616879 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:48:47.653493 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:48:47.653493 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:48:47.653493 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:48:48.027657 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Feb 13 19:48:48.340201 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:48:48.340201 ignition[1389]: INFO : files: op(d): [started] processing unit "containerd.service"
Feb 13 19:48:48.347850 ignition[1389]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:48:48.347850 ignition[1389]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:48:48.347850 ignition[1389]: INFO : files: op(d): [finished] processing unit "containerd.service"
Feb 13 19:48:48.347850 ignition[1389]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Feb 13 19:48:48.347850 ignition[1389]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:48:48.347850 ignition[1389]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:48:48.347850 ignition[1389]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Feb 13 19:48:48.347850 ignition[1389]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:48:48.347850 ignition[1389]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:48:48.347850 ignition[1389]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:48:48.347850 ignition[1389]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:48:48.347850 ignition[1389]: INFO : files: files passed
Feb 13 19:48:48.347850 ignition[1389]: INFO : Ignition finished successfully
Feb 13 19:48:48.385073 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:48:48.397453 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:48:48.407376 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:48:48.419690 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:48:48.421996 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:48:48.437704 initrd-setup-root-after-ignition[1417]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:48:48.437704 initrd-setup-root-after-ignition[1417]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:48:48.444342 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:48:48.449610 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:48:48.452434 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:48:48.468524 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:48:48.519631 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:48:48.521308 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:48:48.528185 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:48:48.530160 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:48:48.532401 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:48:48.547971 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:48:48.578509 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:48:48.588511 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:48:48.615193 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:48:48.619574 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:48:48.620217 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:48:48.620465 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:48:48.620688 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:48:48.622294 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:48:48.622612 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:48:48.622908 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:48:48.623227 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:48:48.623522 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:48:48.623819 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:48:48.624125 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:48:48.624418 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:48:48.624708 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:48:48.624993 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:48:48.625527 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:48:48.625728 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:48:48.626764 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:48:48.627128 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:48:48.627323 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:48:48.653480 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:48:48.656038 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:48:48.656278 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:48:48.670174 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:48:48.670422 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:48:48.674785 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:48:48.674985 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:48:48.687511 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:48:48.730645 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:48:48.738684 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:48:48.743160 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:48:48.752744 ignition[1441]: INFO : Ignition 2.19.0
Feb 13 19:48:48.752744 ignition[1441]: INFO : Stage: umount
Feb 13 19:48:48.752744 ignition[1441]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:48.752744 ignition[1441]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:48.752744 ignition[1441]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:48.771714 ignition[1441]: INFO : PUT result: OK
Feb 13 19:48:48.771714 ignition[1441]: INFO : umount: umount passed
Feb 13 19:48:48.771714 ignition[1441]: INFO : Ignition finished successfully
Feb 13 19:48:48.752939 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:48:48.753223 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:48:48.765288 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:48:48.765679 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:48:48.780946 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:48:48.786004 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:48:48.792887 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:48:48.792995 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:48:48.796664 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:48:48.796849 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:48:48.800394 systemd[1]: Stopped target network.target - Network.
Feb 13 19:48:48.802105 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:48:48.802242 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:48:48.804815 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:48:48.815210 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:48:48.818990 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:48:48.821465 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:48:48.823153 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:48:48.824953 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:48:48.825036 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:48:48.826980 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:48:48.827402 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:48:48.830531 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:48:48.830623 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:48:48.832531 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:48:48.832610 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:48:48.834778 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:48:48.836727 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:48:48.840771 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:48:48.842488 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:48:48.842706 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:48:48.847927 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:48:48.848132 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:48:48.853978 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:48:48.854474 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:48:48.864482 systemd-networkd[1197]: eth0: DHCPv6 lease lost
Feb 13 19:48:48.866210 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:48:48.866464 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:48:48.877043 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:48:48.877319 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:48:48.907312 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:48:48.907787 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:48:48.910896 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:48:48.910968 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:48:48.942373 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:48:48.944381 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:48:48.944499 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:48:48.946914 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:48:48.946994 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:48:48.951057 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:48:48.951249 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:48:48.968174 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:48:48.990400 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:48:48.990784 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:48:49.005412 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:48:49.005901 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:48:49.013656 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:48:49.013747 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:48:49.016227 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:48:49.016294 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:48:49.018289 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:48:49.018821 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:48:49.022282 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:48:49.022365 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:48:49.024529 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:48:49.024605 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:48:49.049469 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:48:49.051850 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:48:49.051955 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:48:49.060507 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:48:49.060598 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:48:49.062878 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:48:49.062952 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:48:49.065404 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:48:49.065488 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:49.091475 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:48:49.091864 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:48:49.099363 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:48:49.115375 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:48:49.152783 systemd[1]: Switching root.
Feb 13 19:48:49.182490 systemd-journald[250]: Journal stopped
Feb 13 19:48:51.946034 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:48:51.946262 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:48:51.946320 kernel: SELinux: policy capability open_perms=1
Feb 13 19:48:51.946361 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:48:51.946402 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:48:51.946434 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:48:51.946465 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:48:51.946496 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:48:51.946527 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:48:51.946566 kernel: audit: type=1403 audit(1739476130.046:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:48:51.946600 systemd[1]: Successfully loaded SELinux policy in 60.478ms.
Feb 13 19:48:51.946649 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.662ms.
Feb 13 19:48:51.946685 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:48:51.946717 systemd[1]: Detected virtualization amazon.
Feb 13 19:48:51.946750 systemd[1]: Detected architecture arm64.
Feb 13 19:48:51.946787 systemd[1]: Detected first boot.
Feb 13 19:48:51.946819 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:48:51.946851 zram_generator::config[1500]: No configuration found.
Feb 13 19:48:51.946886 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:48:51.946924 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:48:51.946958 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:48:51.946991 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:48:51.947024 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:48:51.947056 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:48:51.949380 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:48:51.949432 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:48:51.949470 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:48:51.949504 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:48:51.949544 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:48:51.949577 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:48:51.949609 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:48:51.949639 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:48:51.949670 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:48:51.949702 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:48:51.949732 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:48:51.949763 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:48:51.949798 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:48:51.949832 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:48:51.949866 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:48:51.949899 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:48:51.949933 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:48:51.949966 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:48:51.949996 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:48:51.950032 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:48:51.950165 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:48:51.950203 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:48:51.950236 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:48:51.950266 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:48:51.950299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:48:51.950332 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:48:51.950362 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:48:51.950395 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:48:51.950428 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:48:51.950460 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:48:51.950498 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:48:51.950530 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:48:51.950562 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:48:51.950593 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:48:51.950624 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:48:51.950655 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:48:51.950685 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:48:51.950715 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:48:51.950751 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:48:51.950782 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:48:51.950812 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:48:51.950843 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:48:51.950873 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 19:48:51.950910 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 19:48:51.950940 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:48:51.950972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:48:51.951005 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:48:51.951042 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:48:51.951071 kernel: fuse: init (API version 7.39)
Feb 13 19:48:51.952313 kernel: loop: module loaded
Feb 13 19:48:51.952350 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:48:51.952385 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:48:51.952417 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:48:51.952449 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:48:51.952479 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:48:51.952511 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:48:51.952548 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:48:51.952579 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:48:51.952611 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:48:51.952641 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:48:51.952671 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:48:51.952703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:48:51.952735 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:48:51.952764 kernel: ACPI: bus type drm_connector registered
Feb 13 19:48:51.952797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:48:51.952828 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:48:51.952858 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:48:51.952890 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:48:51.952920 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:48:51.952955 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:48:51.952986 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:48:51.953065 systemd-journald[1604]: Collecting audit messages is disabled.
Feb 13 19:48:51.954216 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:48:51.954261 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:48:51.954295 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:48:51.954327 systemd-journald[1604]: Journal started
Feb 13 19:48:51.954383 systemd-journald[1604]: Runtime Journal (/run/log/journal/ec2c604f2c91fbc57501385fd3109dde) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:48:51.960170 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:48:51.974251 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:48:52.002447 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:48:52.015424 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:48:52.023409 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:48:52.030214 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:48:52.042127 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:48:52.076607 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:48:52.081565 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:48:52.085973 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:48:52.095248 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:48:52.108758 systemd-journald[1604]: Time spent on flushing to /var/log/journal/ec2c604f2c91fbc57501385fd3109dde is 70.855ms for 896 entries.
Feb 13 19:48:52.108758 systemd-journald[1604]: System Journal (/var/log/journal/ec2c604f2c91fbc57501385fd3109dde) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:48:52.193482 systemd-journald[1604]: Received client request to flush runtime journal.
Feb 13 19:48:52.112585 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:48:52.124414 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:48:52.136553 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:48:52.148051 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:48:52.150563 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:48:52.178367 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:48:52.183004 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:48:52.186776 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:48:52.197330 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:48:52.221935 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:48:52.236567 udevadm[1659]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:48:52.270866 systemd-tmpfiles[1653]: ACLs are not supported, ignoring.
Feb 13 19:48:52.270897 systemd-tmpfiles[1653]: ACLs are not supported, ignoring.
Feb 13 19:48:52.279847 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:48:52.291570 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:48:52.363318 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:48:52.375519 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:48:52.412506 systemd-tmpfiles[1674]: ACLs are not supported, ignoring.
Feb 13 19:48:52.413051 systemd-tmpfiles[1674]: ACLs are not supported, ignoring.
Feb 13 19:48:52.423859 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:48:53.101163 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:48:53.111391 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:48:53.173021 systemd-udevd[1680]: Using default interface naming scheme 'v255'.
Feb 13 19:48:53.265042 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:48:53.277315 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:48:53.311430 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:48:53.393380 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Feb 13 19:48:53.433279 (udev-worker)[1688]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:48:53.446104 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:48:53.616500 systemd-networkd[1683]: lo: Link UP
Feb 13 19:48:53.617035 systemd-networkd[1683]: lo: Gained carrier
Feb 13 19:48:53.619994 systemd-networkd[1683]: Enumeration completed
Feb 13 19:48:53.620767 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:48:53.626553 systemd-networkd[1683]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:48:53.626710 systemd-networkd[1683]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:48:53.629469 systemd-networkd[1683]: eth0: Link UP
Feb 13 19:48:53.629760 systemd-networkd[1683]: eth0: Gained carrier
Feb 13 19:48:53.629796 systemd-networkd[1683]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:48:53.633368 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:48:53.642304 systemd-networkd[1683]: eth0: DHCPv4 address 172.31.20.210/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:48:53.711425 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1688)
Feb 13 19:48:53.718270 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:48:53.912790 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:48:53.941831 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:48:53.945030 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:53.954411 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:48:54.002123 lvm[1809]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:48:54.044547 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:48:54.047531 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:48:54.059409 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:48:54.076099 lvm[1812]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:48:54.115767 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:48:54.118443 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:48:54.120941 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:48:54.121002 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:48:54.122971 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:48:54.127159 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:48:54.139404 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:48:54.147461 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:48:54.150535 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:48:54.155898 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:48:54.167034 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:48:54.174682 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:48:54.178859 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:48:54.214238 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:48:54.232881 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:48:54.237389 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:48:54.239963 kernel: loop0: detected capacity change from 0 to 52536
Feb 13 19:48:54.334113 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:48:54.354138 kernel: loop1: detected capacity change from 0 to 194096
Feb 13 19:48:54.466131 kernel: loop2: detected capacity change from 0 to 114328
Feb 13 19:48:54.567506 kernel: loop3: detected capacity change from 0 to 114432
Feb 13 19:48:54.670131 kernel: loop4: detected capacity change from 0 to 52536
Feb 13 19:48:54.691129 kernel: loop5: detected capacity change from 0 to 194096
Feb 13 19:48:54.722125 kernel: loop6: detected capacity change from 0 to 114328
Feb 13 19:48:54.734128 kernel: loop7: detected capacity change from 0 to 114432
Feb 13 19:48:54.747825 (sd-merge)[1833]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:48:54.748881 (sd-merge)[1833]: Merged extensions into '/usr'.
Feb 13 19:48:54.758814 systemd[1]: Reloading requested from client PID 1820 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:48:54.758848 systemd[1]: Reloading...
Feb 13 19:48:54.861132 zram_generator::config[1857]: No configuration found.
Feb 13 19:48:55.163720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:48:55.240248 systemd-networkd[1683]: eth0: Gained IPv6LL
Feb 13 19:48:55.309659 systemd[1]: Reloading finished in 549 ms.
Feb 13 19:48:55.336490 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:48:55.339892 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:48:55.356429 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:48:55.368436 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:48:55.393310 systemd[1]: Reloading requested from client PID 1920 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:48:55.393337 systemd[1]: Reloading...
Feb 13 19:48:55.413510 systemd-tmpfiles[1921]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:48:55.414193 systemd-tmpfiles[1921]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:48:55.415961 systemd-tmpfiles[1921]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:48:55.416516 systemd-tmpfiles[1921]: ACLs are not supported, ignoring.
Feb 13 19:48:55.416669 systemd-tmpfiles[1921]: ACLs are not supported, ignoring.
Feb 13 19:48:55.424994 systemd-tmpfiles[1921]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:48:55.425021 systemd-tmpfiles[1921]: Skipping /boot
Feb 13 19:48:55.457807 systemd-tmpfiles[1921]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:48:55.457840 systemd-tmpfiles[1921]: Skipping /boot
Feb 13 19:48:55.550901 zram_generator::config[1946]: No configuration found.
Feb 13 19:48:55.694242 ldconfig[1816]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:48:55.815898 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:48:55.957140 systemd[1]: Reloading finished in 563 ms.
Feb 13 19:48:55.986013 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:48:55.997048 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:48:56.015413 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 19:48:56.028534 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:48:56.038061 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:48:56.052359 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:48:56.065819 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:48:56.077595 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:48:56.086411 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:48:56.104995 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:48:56.124550 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:48:56.127392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:48:56.130053 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:48:56.133556 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:48:56.140581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:48:56.140972 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:48:56.169917 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:48:56.179652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:48:56.189860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:48:56.204428 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:48:56.208395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:48:56.237173 augenrules[2046]: No rules
Feb 13 19:48:56.238203 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:48:56.252835 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 19:48:56.261394 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:48:56.263385 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:48:56.268337 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:48:56.268689 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:48:56.274284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:48:56.275506 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:48:56.309060 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:48:56.320676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:48:56.334517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:48:56.347870 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:48:56.357567 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:48:56.383386 systemd-resolved[2017]: Positive Trust Anchors:
Feb 13 19:48:56.383422 systemd-resolved[2017]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:48:56.383504 systemd-resolved[2017]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:48:56.384063 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:48:56.388276 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:48:56.391461 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:48:56.400418 systemd-resolved[2017]: Defaulting to hostname 'linux'.
Feb 13 19:48:56.402042 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:48:56.409908 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:48:56.414425 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:48:56.420654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:48:56.421016 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:48:56.425098 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:48:56.425637 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:48:56.429449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:48:56.430024 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:48:56.434464 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:48:56.435151 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:48:56.445887 systemd[1]: Finished ensure-sysext.service. Feb 13 19:48:56.458001 systemd[1]: Reached target network.target - Network. Feb 13 19:48:56.460044 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:48:56.462373 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:48:56.464781 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:48:56.465018 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:48:56.465195 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:48:56.465342 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:48:56.467770 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:48:56.470593 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:48:56.473245 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:48:56.475458 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:48:56.477814 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:48:56.480194 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Feb 13 19:48:56.480238 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:48:56.481927 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:48:56.485641 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:48:56.491752 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:48:56.496491 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:48:56.501337 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:48:56.503682 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:48:56.508251 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:48:56.512719 systemd[1]: System is tainted: cgroupsv1 Feb 13 19:48:56.512991 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:48:56.513304 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:48:56.523429 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:48:56.532382 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:48:56.546493 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:48:56.552041 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:48:56.566363 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:48:56.568378 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:48:56.577344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:48:56.589056 jq[2086]: false Feb 13 19:48:56.589599 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Feb 13 19:48:56.616575 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:48:56.621542 dbus-daemon[2085]: [system] SELinux support is enabled Feb 13 19:48:56.627929 dbus-daemon[2085]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1683 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:48:56.632601 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:48:56.644014 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:48:56.658461 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:48:56.670628 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:48:56.701513 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:48:56.731627 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:48:56.736216 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:48:56.764458 ntpd[2091]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting Feb 13 19:48:56.775723 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting Feb 13 19:48:56.775723 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:48:56.775723 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: ---------------------------------------------------- Feb 13 19:48:56.775723 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:48:56.775723 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:48:56.775723 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: corporation. 
Support and training for ntp-4 are Feb 13 19:48:56.775723 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: available at https://www.nwtime.org/support Feb 13 19:48:56.775723 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: ---------------------------------------------------- Feb 13 19:48:56.775723 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: proto: precision = 0.096 usec (-23) Feb 13 19:48:56.775723 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: basedate set to 2025-02-01 Feb 13 19:48:56.775723 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: gps base set to 2025-02-02 (week 2352) Feb 13 19:48:56.774333 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:48:56.776588 extend-filesystems[2087]: Found loop4 Feb 13 19:48:56.776588 extend-filesystems[2087]: Found loop5 Feb 13 19:48:56.776588 extend-filesystems[2087]: Found loop6 Feb 13 19:48:56.776588 extend-filesystems[2087]: Found loop7 Feb 13 19:48:56.776588 extend-filesystems[2087]: Found nvme0n1 Feb 13 19:48:56.776588 extend-filesystems[2087]: Found nvme0n1p1 Feb 13 19:48:56.776588 extend-filesystems[2087]: Found nvme0n1p2 Feb 13 19:48:56.776588 extend-filesystems[2087]: Found nvme0n1p3 Feb 13 19:48:56.776588 extend-filesystems[2087]: Found usr Feb 13 19:48:56.776588 extend-filesystems[2087]: Found nvme0n1p4 Feb 13 19:48:56.776588 extend-filesystems[2087]: Found nvme0n1p6 Feb 13 19:48:56.776588 extend-filesystems[2087]: Found nvme0n1p7 Feb 13 19:48:56.776588 extend-filesystems[2087]: Found nvme0n1p9 Feb 13 19:48:56.776588 extend-filesystems[2087]: Checking size of /dev/nvme0n1p9 Feb 13 19:48:56.764511 ntpd[2091]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:48:56.837593 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:48:56.837593 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:48:56.837593 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:48:56.837593 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: Listen 
normally on 3 eth0 172.31.20.210:123 Feb 13 19:48:56.837593 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: Listen normally on 4 lo [::1]:123 Feb 13 19:48:56.837593 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: Listen normally on 5 eth0 [fe80::436:beff:fe9f:1b31%2]:123 Feb 13 19:48:56.837593 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: Listening on routing socket on fd #22 for interface updates Feb 13 19:48:56.837593 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:56.837593 ntpd[2091]: 13 Feb 19:48:56 ntpd[2091]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:56.780309 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:48:56.764532 ntpd[2091]: ---------------------------------------------------- Feb 13 19:48:56.787798 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:48:56.764551 ntpd[2091]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:48:56.826130 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:48:56.764570 ntpd[2091]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:48:56.826693 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:48:56.764589 ntpd[2091]: corporation. 
Support and training for ntp-4 are Feb 13 19:48:56.764607 ntpd[2091]: available at https://www.nwtime.org/support Feb 13 19:48:56.764625 ntpd[2091]: ---------------------------------------------------- Feb 13 19:48:56.769875 ntpd[2091]: proto: precision = 0.096 usec (-23) Feb 13 19:48:56.771670 ntpd[2091]: basedate set to 2025-02-01 Feb 13 19:48:56.771707 ntpd[2091]: gps base set to 2025-02-02 (week 2352) Feb 13 19:48:56.777468 ntpd[2091]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:48:56.777539 ntpd[2091]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:48:56.777790 ntpd[2091]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:48:56.777856 ntpd[2091]: Listen normally on 3 eth0 172.31.20.210:123 Feb 13 19:48:56.777926 ntpd[2091]: Listen normally on 4 lo [::1]:123 Feb 13 19:48:56.778001 ntpd[2091]: Listen normally on 5 eth0 [fe80::436:beff:fe9f:1b31%2]:123 Feb 13 19:48:56.780197 ntpd[2091]: Listening on routing socket on fd #22 for interface updates Feb 13 19:48:56.787739 ntpd[2091]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:56.787790 ntpd[2091]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:56.853336 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:48:56.853870 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:48:56.856961 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:48:56.882437 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:48:56.882971 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 19:48:56.894121 jq[2119]: true Feb 13 19:48:56.978795 extend-filesystems[2087]: Resized partition /dev/nvme0n1p9 Feb 13 19:48:56.997072 extend-filesystems[2144]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:48:57.003479 coreos-metadata[2084]: Feb 13 19:48:57.002 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:48:57.003479 coreos-metadata[2084]: Feb 13 19:48:57.002 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:48:57.003479 coreos-metadata[2084]: Feb 13 19:48:57.002 INFO Fetch successful Feb 13 19:48:57.003479 coreos-metadata[2084]: Feb 13 19:48:57.002 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:48:57.003479 coreos-metadata[2084]: Feb 13 19:48:57.002 INFO Fetch successful Feb 13 19:48:57.003479 coreos-metadata[2084]: Feb 13 19:48:57.002 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:48:57.003479 coreos-metadata[2084]: Feb 13 19:48:57.002 INFO Fetch successful Feb 13 19:48:57.003479 coreos-metadata[2084]: Feb 13 19:48:57.002 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:48:57.005048 coreos-metadata[2084]: Feb 13 19:48:57.004 INFO Fetch successful Feb 13 19:48:57.005048 coreos-metadata[2084]: Feb 13 19:48:57.004 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:48:57.007783 coreos-metadata[2084]: Feb 13 19:48:57.006 INFO Fetch failed with 404: resource not found Feb 13 19:48:57.007783 coreos-metadata[2084]: Feb 13 19:48:57.006 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:48:57.012265 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:48:57.022024 coreos-metadata[2084]: Feb 13 19:48:57.018 INFO Fetch successful Feb 13 19:48:57.022024 coreos-metadata[2084]: Feb 13 19:48:57.018 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:48:57.022024 coreos-metadata[2084]: Feb 13 19:48:57.018 INFO Fetch successful Feb 13 19:48:57.022024 coreos-metadata[2084]: Feb 13 19:48:57.018 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:48:57.022024 coreos-metadata[2084]: Feb 13 19:48:57.018 INFO Fetch successful Feb 13 19:48:57.022024 coreos-metadata[2084]: Feb 13 19:48:57.018 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:48:57.022024 coreos-metadata[2084]: Feb 13 19:48:57.018 INFO Fetch successful Feb 13 19:48:57.022024 coreos-metadata[2084]: Feb 13 19:48:57.018 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:48:57.022024 coreos-metadata[2084]: Feb 13 19:48:57.018 INFO Fetch successful Feb 13 19:48:57.042802 (ntainerd)[2141]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:48:57.059364 update_engine[2116]: I20250213 19:48:57.054646 2116 main.cc:92] Flatcar Update Engine starting Feb 13 19:48:57.067805 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:48:57.067872 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:48:57.071287 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:48:57.071324 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 19:48:57.074874 dbus-daemon[2085]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:48:57.081498 jq[2139]: true Feb 13 19:48:57.097458 update_engine[2116]: I20250213 19:48:57.097374 2116 update_check_scheduler.cc:74] Next update check in 2m16s Feb 13 19:48:57.111778 tar[2129]: linux-arm64/helm Feb 13 19:48:57.113533 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:48:57.149998 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:48:57.157829 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:48:57.162361 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:48:57.176421 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:48:57.210876 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:48:57.215783 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:48:57.267137 extend-filesystems[2144]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:48:57.267137 extend-filesystems[2144]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:48:57.267137 extend-filesystems[2144]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:48:57.274708 extend-filesystems[2087]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:48:57.299663 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:48:57.301248 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:48:57.315665 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:48:57.320969 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Feb 13 19:48:57.345915 systemd-logind[2111]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:48:57.416003 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2187) Feb 13 19:48:57.345964 systemd-logind[2111]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:48:57.346327 systemd-logind[2111]: New seat seat0. Feb 13 19:48:57.411827 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:48:57.466681 amazon-ssm-agent[2173]: Initializing new seelog logger Feb 13 19:48:57.466681 amazon-ssm-agent[2173]: New Seelog Logger Creation Complete Feb 13 19:48:57.466681 amazon-ssm-agent[2173]: 2025/02/13 19:48:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:57.466681 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:57.466681 amazon-ssm-agent[2173]: 2025/02/13 19:48:57 processing appconfig overrides Feb 13 19:48:57.469123 amazon-ssm-agent[2173]: 2025/02/13 19:48:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:57.469123 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:57.469123 amazon-ssm-agent[2173]: 2025/02/13 19:48:57 processing appconfig overrides Feb 13 19:48:57.469123 amazon-ssm-agent[2173]: 2025/02/13 19:48:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:57.469123 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:57.469123 amazon-ssm-agent[2173]: 2025/02/13 19:48:57 processing appconfig overrides Feb 13 19:48:57.469530 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO Proxy environment variables: Feb 13 19:48:57.474705 amazon-ssm-agent[2173]: 2025/02/13 19:48:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:48:57.475359 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:57.475743 amazon-ssm-agent[2173]: 2025/02/13 19:48:57 processing appconfig overrides Feb 13 19:48:57.496667 bash[2212]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:48:57.526885 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:48:57.548695 systemd[1]: Starting sshkeys.service... Feb 13 19:48:57.574553 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:48:57.578256 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO https_proxy: Feb 13 19:48:57.602662 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:48:57.679283 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO http_proxy: Feb 13 19:48:57.790728 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO no_proxy: Feb 13 19:48:57.871913 containerd[2141]: time="2025-02-13T19:48:57.871772714Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:48:57.892016 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:48:57.996199 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:48:58.005333 coreos-metadata[2228]: Feb 13 19:48:58.004 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:48:58.008470 coreos-metadata[2228]: Feb 13 19:48:58.007 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:48:58.008470 coreos-metadata[2228]: Feb 13 19:48:58.008 INFO Fetch successful Feb 13 19:48:58.008470 coreos-metadata[2228]: Feb 13 19:48:58.008 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:48:58.014098 coreos-metadata[2228]: Feb 13 
19:48:58.010 INFO Fetch successful Feb 13 19:48:58.014934 dbus-daemon[2085]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:48:58.015271 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:48:58.018671 unknown[2228]: wrote ssh authorized keys file for user: core Feb 13 19:48:58.023796 dbus-daemon[2085]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2166 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:48:58.045815 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:48:58.094114 update-ssh-keys[2286]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:48:58.095898 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:48:58.104114 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO Agent will take identity from EC2 Feb 13 19:48:58.109856 systemd[1]: Finished sshkeys.service. Feb 13 19:48:58.127227 polkitd[2283]: Started polkitd version 121 Feb 13 19:48:58.157519 polkitd[2283]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:48:58.157654 polkitd[2283]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:48:58.163755 containerd[2141]: time="2025-02-13T19:48:58.162744755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:58.166414 polkitd[2283]: Finished loading, compiling and executing 2 rules Feb 13 19:48:58.169232 dbus-daemon[2085]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:48:58.172353 systemd[1]: Started polkit.service - Authorization Manager. 
Feb 13 19:48:58.176724 polkitd[2283]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:48:58.181212 containerd[2141]: time="2025-02-13T19:48:58.181125803Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:58.181212 containerd[2141]: time="2025-02-13T19:48:58.181201367Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:48:58.181369 containerd[2141]: time="2025-02-13T19:48:58.181238135Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:48:58.184360 containerd[2141]: time="2025-02-13T19:48:58.181534667Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:48:58.184360 containerd[2141]: time="2025-02-13T19:48:58.181580855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:58.184360 containerd[2141]: time="2025-02-13T19:48:58.181722755Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:58.184360 containerd[2141]: time="2025-02-13T19:48:58.181758215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:58.186704 containerd[2141]: time="2025-02-13T19:48:58.186634907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:58.186704 containerd[2141]: time="2025-02-13T19:48:58.186698495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:58.186854 containerd[2141]: time="2025-02-13T19:48:58.186738167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:58.186854 containerd[2141]: time="2025-02-13T19:48:58.186771155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:58.188029 containerd[2141]: time="2025-02-13T19:48:58.186985139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:58.189284 containerd[2141]: time="2025-02-13T19:48:58.189224171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:58.189682 containerd[2141]: time="2025-02-13T19:48:58.189546767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:58.189682 containerd[2141]: time="2025-02-13T19:48:58.189605891Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:48:58.195103 containerd[2141]: time="2025-02-13T19:48:58.193598243Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 19:48:58.195103 containerd[2141]: time="2025-02-13T19:48:58.193779035Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:48:58.198581 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:48:58.211289 containerd[2141]: time="2025-02-13T19:48:58.209641979Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:48:58.211289 containerd[2141]: time="2025-02-13T19:48:58.209737703Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:48:58.211289 containerd[2141]: time="2025-02-13T19:48:58.209773667Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:48:58.211289 containerd[2141]: time="2025-02-13T19:48:58.209826419Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:48:58.211289 containerd[2141]: time="2025-02-13T19:48:58.209862023Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:48:58.216842 containerd[2141]: time="2025-02-13T19:48:58.216786840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.237304320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.237589752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.237626604Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.237657924Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.237690360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.237721128Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.237755148Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.237787752Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.237821124Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.237854640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.237889200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.237920700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.237961536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 13 19:48:58.239106 containerd[2141]: time="2025-02-13T19:48:58.238002324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238035336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238070136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238132428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238167732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238197840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238228632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238259820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238295904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238324404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238353336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238382352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238439496Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238495056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238526952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.239806 containerd[2141]: time="2025-02-13T19:48:58.238560648Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:48:58.240455 containerd[2141]: time="2025-02-13T19:48:58.238678824Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:48:58.240455 containerd[2141]: time="2025-02-13T19:48:58.238717500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:48:58.240455 containerd[2141]: time="2025-02-13T19:48:58.238745292Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:48:58.240455 containerd[2141]: time="2025-02-13T19:48:58.238776096Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:48:58.240455 containerd[2141]: time="2025-02-13T19:48:58.238800444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 13 19:48:58.240455 containerd[2141]: time="2025-02-13T19:48:58.238830708Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:48:58.240455 containerd[2141]: time="2025-02-13T19:48:58.238854516Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:48:58.240455 containerd[2141]: time="2025-02-13T19:48:58.238879656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:48:58.252694 containerd[2141]: time="2025-02-13T19:48:58.250528668Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:48:58.252694 containerd[2141]: time="2025-02-13T19:48:58.250679208Z" level=info msg="Connect containerd service" Feb 13 19:48:58.252694 containerd[2141]: time="2025-02-13T19:48:58.250751760Z" level=info msg="using legacy CRI server" Feb 13 19:48:58.252694 containerd[2141]: time="2025-02-13T19:48:58.250771272Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:48:58.252694 containerd[2141]: time="2025-02-13T19:48:58.250930860Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:48:58.252694 containerd[2141]: time="2025-02-13T19:48:58.251986824Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Feb 13 19:48:58.252694 containerd[2141]: time="2025-02-13T19:48:58.252384168Z" level=info msg="Start subscribing containerd event" Feb 13 19:48:58.252694 containerd[2141]: time="2025-02-13T19:48:58.252471504Z" level=info msg="Start recovering state" Feb 13 19:48:58.252694 containerd[2141]: time="2025-02-13T19:48:58.252594084Z" level=info msg="Start event monitor" Feb 13 19:48:58.252694 containerd[2141]: time="2025-02-13T19:48:58.252618648Z" level=info msg="Start snapshots syncer" Feb 13 19:48:58.252694 containerd[2141]: time="2025-02-13T19:48:58.252640920Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:48:58.252694 containerd[2141]: time="2025-02-13T19:48:58.252660624Z" level=info msg="Start streaming server" Feb 13 19:48:58.258642 systemd-hostnamed[2166]: Hostname set to (transient) Feb 13 19:48:58.260418 systemd-resolved[2017]: System hostname changed to 'ip-172-31-20-210'. Feb 13 19:48:58.268522 containerd[2141]: time="2025-02-13T19:48:58.268220580Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:48:58.268522 containerd[2141]: time="2025-02-13T19:48:58.268351560Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:48:58.268522 containerd[2141]: time="2025-02-13T19:48:58.268479408Z" level=info msg="containerd successfully booted in 0.410731s" Feb 13 19:48:58.268632 systemd[1]: Started containerd.service - containerd container runtime. 
Feb 13 19:48:58.273929 sshd_keygen[2130]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:48:58.299865 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:48:58.375374 locksmithd[2169]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:48:58.397528 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:48:58.431926 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:48:58.444614 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:48:58.496381 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:48:58.503791 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:48:58.504508 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:48:58.518554 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:48:58.569911 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:48:58.582736 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:48:58.595687 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:48:58.597057 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:48:58.601741 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:48:58.696003 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:48:58.796725 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Feb 13 19:48:58.866223 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO [Registrar] Starting registrar module Feb 13 19:48:58.867399 amazon-ssm-agent[2173]: 2025-02-13 19:48:57 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:48:58.867722 amazon-ssm-agent[2173]: 2025-02-13 19:48:58 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:48:58.867722 amazon-ssm-agent[2173]: 2025-02-13 19:48:58 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:48:58.867722 amazon-ssm-agent[2173]: 2025-02-13 19:48:58 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:48:58.867722 amazon-ssm-agent[2173]: 2025-02-13 19:48:58 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:48:58.896818 amazon-ssm-agent[2173]: 2025-02-13 19:48:58 INFO [CredentialRefresher] Next credential rotation will be in 30.966635283466665 minutes Feb 13 19:48:59.037141 tar[2129]: linux-arm64/LICENSE Feb 13 19:48:59.037720 tar[2129]: linux-arm64/README.md Feb 13 19:48:59.066039 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:48:59.799588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:48:59.803866 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:48:59.809380 systemd[1]: Startup finished in 10.394s (kernel) + 9.821s (userspace) = 20.216s. 
Feb 13 19:48:59.815670 (kubelet)[2376]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:48:59.894752 amazon-ssm-agent[2173]: 2025-02-13 19:48:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:48:59.996217 amazon-ssm-agent[2173]: 2025-02-13 19:48:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2382) started Feb 13 19:49:00.098012 amazon-ssm-agent[2173]: 2025-02-13 19:48:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:49:01.171782 kubelet[2376]: E0213 19:49:01.171692 2376 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:01.177278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:01.178205 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:49:03.933547 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:49:03.944534 systemd[1]: Started sshd@0-172.31.20.210:22-139.178.89.65:43916.service - OpenSSH per-connection server daemon (139.178.89.65:43916). Feb 13 19:49:04.170614 sshd[2401]: Accepted publickey for core from 139.178.89.65 port 43916 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:04.174192 sshd[2401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:04.189210 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Feb 13 19:49:04.199522 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:49:04.205708 systemd-logind[2111]: New session 1 of user core. Feb 13 19:49:04.223905 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:49:04.238224 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:49:04.251721 (systemd)[2407]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:49:04.465871 systemd[2407]: Queued start job for default target default.target. Feb 13 19:49:04.467822 systemd[2407]: Created slice app.slice - User Application Slice. Feb 13 19:49:04.468050 systemd[2407]: Reached target paths.target - Paths. Feb 13 19:49:04.468207 systemd[2407]: Reached target timers.target - Timers. Feb 13 19:49:04.478301 systemd[2407]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:49:04.491518 systemd[2407]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:49:04.491640 systemd[2407]: Reached target sockets.target - Sockets. Feb 13 19:49:04.491672 systemd[2407]: Reached target basic.target - Basic System. Feb 13 19:49:04.491757 systemd[2407]: Reached target default.target - Main User Target. Feb 13 19:49:04.491817 systemd[2407]: Startup finished in 228ms. Feb 13 19:49:04.492606 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:49:04.506730 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:49:04.657838 systemd[1]: Started sshd@1-172.31.20.210:22-139.178.89.65:50918.service - OpenSSH per-connection server daemon (139.178.89.65:50918). 
Feb 13 19:49:04.843408 sshd[2419]: Accepted publickey for core from 139.178.89.65 port 50918 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:04.845908 sshd[2419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:04.854517 systemd-logind[2111]: New session 2 of user core. Feb 13 19:49:04.865825 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:49:04.995408 sshd[2419]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:05.001694 systemd[1]: sshd@1-172.31.20.210:22-139.178.89.65:50918.service: Deactivated successfully. Feb 13 19:49:05.008517 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:49:05.010190 systemd-logind[2111]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:49:05.011954 systemd-logind[2111]: Removed session 2. Feb 13 19:49:05.024596 systemd[1]: Started sshd@2-172.31.20.210:22-139.178.89.65:50920.service - OpenSSH per-connection server daemon (139.178.89.65:50920). Feb 13 19:49:05.200585 sshd[2427]: Accepted publickey for core from 139.178.89.65 port 50920 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:05.203155 sshd[2427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:05.210271 systemd-logind[2111]: New session 3 of user core. Feb 13 19:49:05.222541 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:49:05.344358 sshd[2427]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:05.349172 systemd-logind[2111]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:49:05.350628 systemd[1]: sshd@2-172.31.20.210:22-139.178.89.65:50920.service: Deactivated successfully. Feb 13 19:49:05.357460 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:49:05.358811 systemd-logind[2111]: Removed session 3. 
Feb 13 19:49:05.381554 systemd[1]: Started sshd@3-172.31.20.210:22-139.178.89.65:50934.service - OpenSSH per-connection server daemon (139.178.89.65:50934). Feb 13 19:49:05.545038 sshd[2435]: Accepted publickey for core from 139.178.89.65 port 50934 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:05.547061 sshd[2435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:05.553991 systemd-logind[2111]: New session 4 of user core. Feb 13 19:49:05.565545 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:49:05.692420 sshd[2435]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:05.699031 systemd[1]: sshd@3-172.31.20.210:22-139.178.89.65:50934.service: Deactivated successfully. Feb 13 19:49:05.705053 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:49:05.706872 systemd-logind[2111]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:49:05.709025 systemd-logind[2111]: Removed session 4. Feb 13 19:49:05.721625 systemd[1]: Started sshd@4-172.31.20.210:22-139.178.89.65:50944.service - OpenSSH per-connection server daemon (139.178.89.65:50944). Feb 13 19:49:05.898710 sshd[2443]: Accepted publickey for core from 139.178.89.65 port 50944 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:05.900689 sshd[2443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:05.910782 systemd-logind[2111]: New session 5 of user core. Feb 13 19:49:05.916592 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:49:06.058500 sudo[2447]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:49:06.059167 sudo[2447]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:06.078690 sudo[2447]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:06.103507 sshd[2443]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:06.110784 systemd[1]: sshd@4-172.31.20.210:22-139.178.89.65:50944.service: Deactivated successfully. Feb 13 19:49:06.117378 systemd-logind[2111]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:49:06.118709 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:49:06.120362 systemd-logind[2111]: Removed session 5. Feb 13 19:49:06.133597 systemd[1]: Started sshd@5-172.31.20.210:22-139.178.89.65:50950.service - OpenSSH per-connection server daemon (139.178.89.65:50950). Feb 13 19:49:06.314560 sshd[2452]: Accepted publickey for core from 139.178.89.65 port 50950 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:06.317377 sshd[2452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:06.325754 systemd-logind[2111]: New session 6 of user core. Feb 13 19:49:06.335587 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:49:06.442641 sudo[2457]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:49:06.443828 sudo[2457]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:06.450296 sudo[2457]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:06.460410 sudo[2456]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:49:06.461045 sudo[2456]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:06.492539 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 19:49:06.495639 auditctl[2460]: No rules Feb 13 19:49:06.496457 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:49:06.496955 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:49:06.509327 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:49:06.551022 augenrules[2479]: No rules Feb 13 19:49:06.554591 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:49:06.557489 sudo[2456]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:06.582372 sshd[2452]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:06.588069 systemd-logind[2111]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:49:06.588744 systemd[1]: sshd@5-172.31.20.210:22-139.178.89.65:50950.service: Deactivated successfully. Feb 13 19:49:06.593824 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:49:06.596877 systemd-logind[2111]: Removed session 6. Feb 13 19:49:06.612616 systemd[1]: Started sshd@6-172.31.20.210:22-139.178.89.65:50960.service - OpenSSH per-connection server daemon (139.178.89.65:50960). 
Feb 13 19:49:06.790903 sshd[2488]: Accepted publickey for core from 139.178.89.65 port 50960 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:06.793482 sshd[2488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:06.803191 systemd-logind[2111]: New session 7 of user core. Feb 13 19:49:06.813722 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:49:06.919560 sudo[2492]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:49:06.920797 sudo[2492]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:07.507504 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:49:07.507988 (dockerd)[2508]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:49:07.951876 dockerd[2508]: time="2025-02-13T19:49:07.951803444Z" level=info msg="Starting up" Feb 13 19:49:08.125689 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2395120431-merged.mount: Deactivated successfully. Feb 13 19:49:08.386945 dockerd[2508]: time="2025-02-13T19:49:08.386355936Z" level=info msg="Loading containers: start." Feb 13 19:49:08.575130 kernel: Initializing XFRM netlink socket Feb 13 19:49:08.638825 (udev-worker)[2530]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:49:08.730581 systemd-networkd[1683]: docker0: Link UP Feb 13 19:49:08.749755 dockerd[2508]: time="2025-02-13T19:49:08.749504476Z" level=info msg="Loading containers: done." 
Feb 13 19:49:08.779794 dockerd[2508]: time="2025-02-13T19:49:08.777860127Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:49:08.779794 dockerd[2508]: time="2025-02-13T19:49:08.778121908Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:49:08.778531 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1650237299-merged.mount: Deactivated successfully. Feb 13 19:49:08.781932 dockerd[2508]: time="2025-02-13T19:49:08.781311688Z" level=info msg="Daemon has completed initialization" Feb 13 19:49:08.832128 dockerd[2508]: time="2025-02-13T19:49:08.831697931Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:49:08.832050 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:49:10.425051 containerd[2141]: time="2025-02-13T19:49:10.424902851Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:49:11.035750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123923963.mount: Deactivated successfully. Feb 13 19:49:11.388847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:49:11.405460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:11.778494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:49:11.791794 (kubelet)[2716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:49:11.885587 kubelet[2716]: E0213 19:49:11.885373 2716 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:11.894947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:11.897015 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:49:12.698285 containerd[2141]: time="2025-02-13T19:49:12.698219482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.700363 containerd[2141]: time="2025-02-13T19:49:12.700308873Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207" Feb 13 19:49:12.702166 containerd[2141]: time="2025-02-13T19:49:12.702062912Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.707829 containerd[2141]: time="2025-02-13T19:49:12.707772560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.710871 containerd[2141]: time="2025-02-13T19:49:12.710319869Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", 
repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.28535856s" Feb 13 19:49:12.710871 containerd[2141]: time="2025-02-13T19:49:12.710379539Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:49:12.747777 containerd[2141]: time="2025-02-13T19:49:12.747708646Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:49:14.325145 containerd[2141]: time="2025-02-13T19:49:14.324762673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:14.327004 containerd[2141]: time="2025-02-13T19:49:14.326937701Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594" Feb 13 19:49:14.327869 containerd[2141]: time="2025-02-13T19:49:14.327786708Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:14.339211 containerd[2141]: time="2025-02-13T19:49:14.339067797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:14.342195 containerd[2141]: time="2025-02-13T19:49:14.342129769Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size 
\"28302323\" in 1.594354665s" Feb 13 19:49:14.342966 containerd[2141]: time="2025-02-13T19:49:14.342371076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 19:49:14.381319 containerd[2141]: time="2025-02-13T19:49:14.381253022Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:49:15.474544 containerd[2141]: time="2025-02-13T19:49:15.474468764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:15.476637 containerd[2141]: time="2025-02-13T19:49:15.476570700Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934" Feb 13 19:49:15.477621 containerd[2141]: time="2025-02-13T19:49:15.477179959Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:15.482803 containerd[2141]: time="2025-02-13T19:49:15.482711076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:15.486027 containerd[2141]: time="2025-02-13T19:49:15.485156065Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.103824963s" Feb 13 19:49:15.486027 containerd[2141]: time="2025-02-13T19:49:15.485214728Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" 
returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:49:15.523657 containerd[2141]: time="2025-02-13T19:49:15.523357531Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:49:16.763578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386465790.mount: Deactivated successfully. Feb 13 19:49:17.272434 containerd[2141]: time="2025-02-13T19:49:17.272352074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:17.273874 containerd[2141]: time="2025-02-13T19:49:17.273803803Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 19:49:17.275025 containerd[2141]: time="2025-02-13T19:49:17.274947563Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:17.278571 containerd[2141]: time="2025-02-13T19:49:17.278490327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:17.280214 containerd[2141]: time="2025-02-13T19:49:17.279965805Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.756549107s" Feb 13 19:49:17.280214 containerd[2141]: time="2025-02-13T19:49:17.280029997Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference 
\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:49:17.317577 containerd[2141]: time="2025-02-13T19:49:17.317435877Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:49:17.833245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount85372937.mount: Deactivated successfully. Feb 13 19:49:18.920569 containerd[2141]: time="2025-02-13T19:49:18.920487475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:18.922737 containerd[2141]: time="2025-02-13T19:49:18.922615066Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 19:49:18.923444 containerd[2141]: time="2025-02-13T19:49:18.923357063Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:18.929214 containerd[2141]: time="2025-02-13T19:49:18.929163119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:18.931757 containerd[2141]: time="2025-02-13T19:49:18.931540689Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.614043739s" Feb 13 19:49:18.931757 containerd[2141]: time="2025-02-13T19:49:18.931602338Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference 
\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:49:18.972430 containerd[2141]: time="2025-02-13T19:49:18.972354820Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:49:19.428610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1796190810.mount: Deactivated successfully. Feb 13 19:49:19.435675 containerd[2141]: time="2025-02-13T19:49:19.435379893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:19.436999 containerd[2141]: time="2025-02-13T19:49:19.436947881Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 19:49:19.437674 containerd[2141]: time="2025-02-13T19:49:19.437433326Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:19.441826 containerd[2141]: time="2025-02-13T19:49:19.441732683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:19.443612 containerd[2141]: time="2025-02-13T19:49:19.443420035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 471.003889ms" Feb 13 19:49:19.443612 containerd[2141]: time="2025-02-13T19:49:19.443474200Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:49:19.481828 containerd[2141]: time="2025-02-13T19:49:19.481759276Z" level=info 
msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:49:20.000510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount368775717.mount: Deactivated successfully. Feb 13 19:49:22.138601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:49:22.151483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:22.532396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:22.538831 (kubelet)[2876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:49:22.645843 kubelet[2876]: E0213 19:49:22.645539 2876 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:22.651986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:22.652492 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 19:49:23.125851 containerd[2141]: time="2025-02-13T19:49:23.125588253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:23.127996 containerd[2141]: time="2025-02-13T19:49:23.127918063Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Feb 13 19:49:23.129498 containerd[2141]: time="2025-02-13T19:49:23.129429343Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:23.135483 containerd[2141]: time="2025-02-13T19:49:23.135402703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:23.138002 containerd[2141]: time="2025-02-13T19:49:23.137952208Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.65613614s" Feb 13 19:49:23.138334 containerd[2141]: time="2025-02-13T19:49:23.138138823Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:49:28.275826 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 19:49:30.453412 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:30.466674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:30.502230 systemd[1]: Reloading requested from client PID 2955 ('systemctl') (unit session-7.scope)... 
Feb 13 19:49:30.502264 systemd[1]: Reloading... Feb 13 19:49:30.711129 zram_generator::config[2998]: No configuration found. Feb 13 19:49:30.970243 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:49:31.129676 systemd[1]: Reloading finished in 626 ms. Feb 13 19:49:31.219786 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:49:31.220215 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:49:31.221209 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:31.232613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:31.535187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:31.552756 (kubelet)[3070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:49:31.622934 kubelet[3070]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:31.622934 kubelet[3070]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:49:31.622934 kubelet[3070]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:49:31.625218 kubelet[3070]: I0213 19:49:31.625133 3070 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:49:32.716752 kubelet[3070]: I0213 19:49:32.716680 3070 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:49:32.716752 kubelet[3070]: I0213 19:49:32.716735 3070 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:49:32.717530 kubelet[3070]: I0213 19:49:32.717163 3070 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:49:32.752134 kubelet[3070]: E0213 19:49:32.751994 3070 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.210:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:32.753657 kubelet[3070]: I0213 19:49:32.753323 3070 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:49:32.771129 kubelet[3070]: I0213 19:49:32.771042 3070 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:49:32.775076 kubelet[3070]: I0213 19:49:32.774110 3070 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:49:32.775076 kubelet[3070]: I0213 19:49:32.774185 3070 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-210","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:49:32.775076 kubelet[3070]: I0213 19:49:32.774515 3070 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
19:49:32.775076 kubelet[3070]: I0213 19:49:32.774534 3070 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:49:32.775076 kubelet[3070]: I0213 19:49:32.774807 3070 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:32.776450 kubelet[3070]: I0213 19:49:32.776415 3070 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:49:32.776576 kubelet[3070]: I0213 19:49:32.776556 3070 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:49:32.776795 kubelet[3070]: I0213 19:49:32.776777 3070 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:49:32.776943 kubelet[3070]: I0213 19:49:32.776924 3070 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:49:32.778407 kubelet[3070]: W0213 19:49:32.778332 3070 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-210&limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:32.778735 kubelet[3070]: E0213 19:49:32.778592 3070 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-210&limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:32.779284 kubelet[3070]: W0213 19:49:32.779048 3070 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:32.779507 kubelet[3070]: E0213 19:49:32.779455 3070 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.20.210:6443: connect: connection refused Feb 13 19:49:32.780139 kubelet[3070]: I0213 19:49:32.779755 3070 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:49:32.780294 kubelet[3070]: I0213 19:49:32.780272 3070 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:49:32.780464 kubelet[3070]: W0213 19:49:32.780443 3070 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:49:32.781724 kubelet[3070]: I0213 19:49:32.781686 3070 server.go:1264] "Started kubelet" Feb 13 19:49:32.791850 kubelet[3070]: E0213 19:49:32.789504 3070 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.210:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.210:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-210.1823dc5cede7f365 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-210,UID:ip-172-31-20-210,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-210,},FirstTimestamp:2025-02-13 19:49:32.781646693 +0000 UTC m=+1.222736707,LastTimestamp:2025-02-13 19:49:32.781646693 +0000 UTC m=+1.222736707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-210,}" Feb 13 19:49:32.792196 kubelet[3070]: I0213 19:49:32.792159 3070 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:49:32.799260 kubelet[3070]: I0213 19:49:32.798335 3070 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:49:32.800286 kubelet[3070]: I0213 19:49:32.800235 3070 server.go:455] "Adding debug handlers to kubelet server" Feb 13 
19:49:32.802119 kubelet[3070]: I0213 19:49:32.801989 3070 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:49:32.802497 kubelet[3070]: I0213 19:49:32.802452 3070 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:49:32.804216 kubelet[3070]: I0213 19:49:32.803488 3070 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:49:32.804216 kubelet[3070]: I0213 19:49:32.803664 3070 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:49:32.807112 kubelet[3070]: I0213 19:49:32.807047 3070 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:49:32.807820 kubelet[3070]: W0213 19:49:32.807725 3070 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:32.807820 kubelet[3070]: E0213 19:49:32.807824 3070 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:32.807989 kubelet[3070]: E0213 19:49:32.807947 3070 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-210?timeout=10s\": dial tcp 172.31.20.210:6443: connect: connection refused" interval="200ms" Feb 13 19:49:32.808995 kubelet[3070]: E0213 19:49:32.808936 3070 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:49:32.811521 kubelet[3070]: I0213 19:49:32.811444 3070 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:49:32.811521 kubelet[3070]: I0213 19:49:32.811483 3070 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:49:32.811748 kubelet[3070]: I0213 19:49:32.811643 3070 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:49:32.848470 kubelet[3070]: I0213 19:49:32.848353 3070 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:49:32.857154 kubelet[3070]: I0213 19:49:32.856775 3070 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:49:32.857154 kubelet[3070]: I0213 19:49:32.856880 3070 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:49:32.857154 kubelet[3070]: I0213 19:49:32.856916 3070 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:49:32.857154 kubelet[3070]: E0213 19:49:32.856990 3070 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:49:32.859603 kubelet[3070]: W0213 19:49:32.859405 3070 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:32.859869 kubelet[3070]: E0213 19:49:32.859840 3070 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial 
tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:32.886216 kubelet[3070]: I0213 19:49:32.886179 3070 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:49:32.886945 kubelet[3070]: I0213 19:49:32.886419 3070 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:49:32.886945 kubelet[3070]: I0213 19:49:32.886591 3070 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:32.890784 kubelet[3070]: I0213 19:49:32.890711 3070 policy_none.go:49] "None policy: Start" Feb 13 19:49:32.892184 kubelet[3070]: I0213 19:49:32.892034 3070 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:49:32.892184 kubelet[3070]: I0213 19:49:32.892115 3070 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:49:32.903007 kubelet[3070]: I0213 19:49:32.902792 3070 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:49:32.907472 kubelet[3070]: I0213 19:49:32.907296 3070 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:49:32.907883 kubelet[3070]: I0213 19:49:32.907724 3070 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:49:32.910935 kubelet[3070]: E0213 19:49:32.910735 3070 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-210\" not found" Feb 13 19:49:32.911710 kubelet[3070]: I0213 19:49:32.911666 3070 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-210" Feb 13 19:49:32.912325 kubelet[3070]: E0213 19:49:32.912278 3070 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.210:6443/api/v1/nodes\": dial tcp 172.31.20.210:6443: connect: connection refused" node="ip-172-31-20-210" Feb 13 19:49:32.957740 kubelet[3070]: I0213 19:49:32.957395 3070 topology_manager.go:215] "Topology Admit Handler" 
podUID="e758b84d99a39aacb9ad87ba8f95df68" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-210" Feb 13 19:49:32.959637 kubelet[3070]: I0213 19:49:32.959597 3070 topology_manager.go:215] "Topology Admit Handler" podUID="4c23436c5d01a804db9bacc495002d6d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-210" Feb 13 19:49:32.963516 kubelet[3070]: I0213 19:49:32.963199 3070 topology_manager.go:215] "Topology Admit Handler" podUID="d894a7582a23c2e824f1b85d6a89d6e5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-210" Feb 13 19:49:33.009804 kubelet[3070]: I0213 19:49:33.007840 3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c23436c5d01a804db9bacc495002d6d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-210\" (UID: \"4c23436c5d01a804db9bacc495002d6d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-210" Feb 13 19:49:33.009804 kubelet[3070]: I0213 19:49:33.008901 3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d894a7582a23c2e824f1b85d6a89d6e5-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-210\" (UID: \"d894a7582a23c2e824f1b85d6a89d6e5\") " pod="kube-system/kube-scheduler-ip-172-31-20-210" Feb 13 19:49:33.009804 kubelet[3070]: I0213 19:49:33.008960 3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e758b84d99a39aacb9ad87ba8f95df68-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-210\" (UID: \"e758b84d99a39aacb9ad87ba8f95df68\") " pod="kube-system/kube-apiserver-ip-172-31-20-210" Feb 13 19:49:33.009804 kubelet[3070]: I0213 19:49:33.008998 3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e758b84d99a39aacb9ad87ba8f95df68-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-210\" (UID: \"e758b84d99a39aacb9ad87ba8f95df68\") " pod="kube-system/kube-apiserver-ip-172-31-20-210" Feb 13 19:49:33.009804 kubelet[3070]: I0213 19:49:33.009038 3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c23436c5d01a804db9bacc495002d6d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-210\" (UID: \"4c23436c5d01a804db9bacc495002d6d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-210" Feb 13 19:49:33.010273 kubelet[3070]: I0213 19:49:33.009073 3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c23436c5d01a804db9bacc495002d6d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-210\" (UID: \"4c23436c5d01a804db9bacc495002d6d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-210" Feb 13 19:49:33.010273 kubelet[3070]: I0213 19:49:33.009130 3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e758b84d99a39aacb9ad87ba8f95df68-ca-certs\") pod \"kube-apiserver-ip-172-31-20-210\" (UID: \"e758b84d99a39aacb9ad87ba8f95df68\") " pod="kube-system/kube-apiserver-ip-172-31-20-210" Feb 13 19:49:33.010273 kubelet[3070]: I0213 19:49:33.009164 3070 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c23436c5d01a804db9bacc495002d6d-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-210\" (UID: \"4c23436c5d01a804db9bacc495002d6d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-210" Feb 13 19:49:33.010273 kubelet[3070]: I0213 19:49:33.009199 3070 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c23436c5d01a804db9bacc495002d6d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-210\" (UID: \"4c23436c5d01a804db9bacc495002d6d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-210" Feb 13 19:49:33.010273 kubelet[3070]: E0213 19:49:33.009695 3070 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-210?timeout=10s\": dial tcp 172.31.20.210:6443: connect: connection refused" interval="400ms" Feb 13 19:49:33.115179 kubelet[3070]: I0213 19:49:33.114898 3070 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-210" Feb 13 19:49:33.115666 kubelet[3070]: E0213 19:49:33.115601 3070 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.210:6443/api/v1/nodes\": dial tcp 172.31.20.210:6443: connect: connection refused" node="ip-172-31-20-210" Feb 13 19:49:33.270868 containerd[2141]: time="2025-02-13T19:49:33.270417316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-210,Uid:e758b84d99a39aacb9ad87ba8f95df68,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:33.275418 containerd[2141]: time="2025-02-13T19:49:33.275296804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-210,Uid:d894a7582a23c2e824f1b85d6a89d6e5,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:33.279643 containerd[2141]: time="2025-02-13T19:49:33.279300004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-210,Uid:4c23436c5d01a804db9bacc495002d6d,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:33.411162 kubelet[3070]: E0213 19:49:33.411025 3070 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.20.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-210?timeout=10s\": dial tcp 172.31.20.210:6443: connect: connection refused" interval="800ms" Feb 13 19:49:33.518944 kubelet[3070]: I0213 19:49:33.518476 3070 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-210" Feb 13 19:49:33.518944 kubelet[3070]: E0213 19:49:33.518887 3070 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.210:6443/api/v1/nodes\": dial tcp 172.31.20.210:6443: connect: connection refused" node="ip-172-31-20-210" Feb 13 19:49:33.722176 kubelet[3070]: W0213 19:49:33.722027 3070 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:33.722176 kubelet[3070]: E0213 19:49:33.722142 3070 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.210:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:33.773655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378701347.mount: Deactivated successfully. 
Feb 13 19:49:33.782682 containerd[2141]: time="2025-02-13T19:49:33.782595882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:33.786016 containerd[2141]: time="2025-02-13T19:49:33.785962014Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:49:33.787297 containerd[2141]: time="2025-02-13T19:49:33.786788334Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:33.788490 containerd[2141]: time="2025-02-13T19:49:33.788423694Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:33.792131 containerd[2141]: time="2025-02-13T19:49:33.791503626Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:33.792131 containerd[2141]: time="2025-02-13T19:49:33.791929782Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:49:33.793741 containerd[2141]: time="2025-02-13T19:49:33.793674990Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:49:33.797766 containerd[2141]: time="2025-02-13T19:49:33.797681442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:33.801926 
containerd[2141]: time="2025-02-13T19:49:33.801573510Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 522.165578ms" Feb 13 19:49:33.806696 containerd[2141]: time="2025-02-13T19:49:33.806630442Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 531.222386ms" Feb 13 19:49:33.808339 containerd[2141]: time="2025-02-13T19:49:33.807953538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 537.428546ms" Feb 13 19:49:33.842958 kubelet[3070]: W0213 19:49:33.841853 3070 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:33.842958 kubelet[3070]: E0213 19:49:33.841940 3070 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:34.005671 containerd[2141]: time="2025-02-13T19:49:34.005215275Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:34.006231 containerd[2141]: time="2025-02-13T19:49:34.005535183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:34.007631 containerd[2141]: time="2025-02-13T19:49:34.005963727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:34.008075 containerd[2141]: time="2025-02-13T19:49:34.007792671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:34.008603 containerd[2141]: time="2025-02-13T19:49:34.005422743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:34.011430 containerd[2141]: time="2025-02-13T19:49:34.009774219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:34.011430 containerd[2141]: time="2025-02-13T19:49:34.009822987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:34.011430 containerd[2141]: time="2025-02-13T19:49:34.009997935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:34.013343 containerd[2141]: time="2025-02-13T19:49:34.011799423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:34.013343 containerd[2141]: time="2025-02-13T19:49:34.013252611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:34.013552 containerd[2141]: time="2025-02-13T19:49:34.013284783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:34.013960 containerd[2141]: time="2025-02-13T19:49:34.013796247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:34.053439 kubelet[3070]: W0213 19:49:34.052854 3070 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-210&limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:34.053439 kubelet[3070]: E0213 19:49:34.052956 3070 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-210&limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:34.165244 containerd[2141]: time="2025-02-13T19:49:34.164968900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-210,Uid:d894a7582a23c2e824f1b85d6a89d6e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"01ce4274fabf3ba1b3579e4172d2789a6f78914cb32779190c311365c842c8e7\"" Feb 13 19:49:34.169425 kubelet[3070]: W0213 19:49:34.169363 3070 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:34.169554 kubelet[3070]: E0213 19:49:34.169432 3070 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://172.31.20.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.210:6443: connect: connection refused Feb 13 19:49:34.179987 containerd[2141]: time="2025-02-13T19:49:34.179831860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-210,Uid:4c23436c5d01a804db9bacc495002d6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c463d57dd15d7199826ba6735e9ade235b0d5e675b7791938c109cba78572c4\"" Feb 13 19:49:34.183295 containerd[2141]: time="2025-02-13T19:49:34.183141688Z" level=info msg="CreateContainer within sandbox \"01ce4274fabf3ba1b3579e4172d2789a6f78914cb32779190c311365c842c8e7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:49:34.190131 containerd[2141]: time="2025-02-13T19:49:34.189735148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-210,Uid:e758b84d99a39aacb9ad87ba8f95df68,Namespace:kube-system,Attempt:0,} returns sandbox id \"e74ee739e58a9f395e0524b7d3e718a79c3fa2c0761b6136029326def4d4d845\"" Feb 13 19:49:34.191839 containerd[2141]: time="2025-02-13T19:49:34.191136412Z" level=info msg="CreateContainer within sandbox \"3c463d57dd15d7199826ba6735e9ade235b0d5e675b7791938c109cba78572c4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:49:34.198673 containerd[2141]: time="2025-02-13T19:49:34.198614248Z" level=info msg="CreateContainer within sandbox \"e74ee739e58a9f395e0524b7d3e718a79c3fa2c0761b6136029326def4d4d845\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:49:34.212710 kubelet[3070]: E0213 19:49:34.212632 3070 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-210?timeout=10s\": dial tcp 172.31.20.210:6443: connect: connection refused" interval="1.6s" Feb 13 19:49:34.214206 containerd[2141]: 
time="2025-02-13T19:49:34.213630508Z" level=info msg="CreateContainer within sandbox \"01ce4274fabf3ba1b3579e4172d2789a6f78914cb32779190c311365c842c8e7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5d0e9f31805727a501e664f5c04710f2994ef15b14034bdc18aa2b72cd38c054\"" Feb 13 19:49:34.215546 containerd[2141]: time="2025-02-13T19:49:34.215012464Z" level=info msg="StartContainer for \"5d0e9f31805727a501e664f5c04710f2994ef15b14034bdc18aa2b72cd38c054\"" Feb 13 19:49:34.231260 containerd[2141]: time="2025-02-13T19:49:34.230781424Z" level=info msg="CreateContainer within sandbox \"e74ee739e58a9f395e0524b7d3e718a79c3fa2c0761b6136029326def4d4d845\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f162e8b696b1a78fb092417f52df9382fc5eda424b9e49753871f53557ef000\"" Feb 13 19:49:34.233204 containerd[2141]: time="2025-02-13T19:49:34.232455748Z" level=info msg="StartContainer for \"8f162e8b696b1a78fb092417f52df9382fc5eda424b9e49753871f53557ef000\"" Feb 13 19:49:34.235416 containerd[2141]: time="2025-02-13T19:49:34.235342648Z" level=info msg="CreateContainer within sandbox \"3c463d57dd15d7199826ba6735e9ade235b0d5e675b7791938c109cba78572c4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"19bf5070011a6a9e530d0d1d0455c9b70798db85ee1f0df7ca7887cec436738e\"" Feb 13 19:49:34.236273 containerd[2141]: time="2025-02-13T19:49:34.236206624Z" level=info msg="StartContainer for \"19bf5070011a6a9e530d0d1d0455c9b70798db85ee1f0df7ca7887cec436738e\"" Feb 13 19:49:34.329177 kubelet[3070]: I0213 19:49:34.328952 3070 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-210" Feb 13 19:49:34.330367 kubelet[3070]: E0213 19:49:34.330282 3070 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.210:6443/api/v1/nodes\": dial tcp 172.31.20.210:6443: connect: connection refused" node="ip-172-31-20-210" Feb 13 19:49:34.457673 
containerd[2141]: time="2025-02-13T19:49:34.457050389Z" level=info msg="StartContainer for \"19bf5070011a6a9e530d0d1d0455c9b70798db85ee1f0df7ca7887cec436738e\" returns successfully" Feb 13 19:49:34.457673 containerd[2141]: time="2025-02-13T19:49:34.457254869Z" level=info msg="StartContainer for \"5d0e9f31805727a501e664f5c04710f2994ef15b14034bdc18aa2b72cd38c054\" returns successfully" Feb 13 19:49:34.476589 containerd[2141]: time="2025-02-13T19:49:34.476113302Z" level=info msg="StartContainer for \"8f162e8b696b1a78fb092417f52df9382fc5eda424b9e49753871f53557ef000\" returns successfully" Feb 13 19:49:35.935532 kubelet[3070]: I0213 19:49:35.935475 3070 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-210" Feb 13 19:49:38.208301 kubelet[3070]: E0213 19:49:38.208229 3070 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-210\" not found" node="ip-172-31-20-210" Feb 13 19:49:38.291020 kubelet[3070]: I0213 19:49:38.290953 3070 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-210" Feb 13 19:49:38.339817 kubelet[3070]: E0213 19:49:38.339656 3070 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-210.1823dc5cede7f365 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-210,UID:ip-172-31-20-210,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-210,},FirstTimestamp:2025-02-13 19:49:32.781646693 +0000 UTC m=+1.222736707,LastTimestamp:2025-02-13 19:49:32.781646693 +0000 UTC m=+1.222736707,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-210,}" Feb 13 19:49:38.431519 kubelet[3070]: E0213 19:49:38.431356 3070 event.go:359] 
"Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-210.1823dc5cef880d61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-210,UID:ip-172-31-20-210,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-20-210,},FirstTimestamp:2025-02-13 19:49:32.808916321 +0000 UTC m=+1.250006371,LastTimestamp:2025-02-13 19:49:32.808916321 +0000 UTC m=+1.250006371,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-210,}" Feb 13 19:49:38.781405 kubelet[3070]: I0213 19:49:38.781332 3070 apiserver.go:52] "Watching apiserver" Feb 13 19:49:38.804104 kubelet[3070]: I0213 19:49:38.804052 3070 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:49:40.509126 systemd[1]: Reloading requested from client PID 3351 ('systemctl') (unit session-7.scope)... Feb 13 19:49:40.509152 systemd[1]: Reloading... Feb 13 19:49:40.671127 zram_generator::config[3391]: No configuration found. Feb 13 19:49:40.922204 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:49:41.122299 systemd[1]: Reloading finished in 612 ms. Feb 13 19:49:41.196828 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:41.213518 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:49:41.214122 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:41.231146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:49:41.543481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:41.555807 (kubelet)[3461]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:49:41.653608 kubelet[3461]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:41.653608 kubelet[3461]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:49:41.653608 kubelet[3461]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:41.655550 kubelet[3461]: I0213 19:49:41.654340 3461 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:49:41.666445 kubelet[3461]: I0213 19:49:41.666395 3461 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:49:41.666445 kubelet[3461]: I0213 19:49:41.666438 3461 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:49:41.667346 kubelet[3461]: I0213 19:49:41.666902 3461 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:49:41.670693 kubelet[3461]: I0213 19:49:41.670217 3461 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 19:49:41.673243 kubelet[3461]: I0213 19:49:41.673050 3461 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:49:41.697116 kubelet[3461]: I0213 19:49:41.696747 3461 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:49:41.697994 kubelet[3461]: I0213 19:49:41.697938 3461 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:49:41.698665 kubelet[3461]: I0213 19:49:41.698299 3461 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-210","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemor
yManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:49:41.699027 kubelet[3461]: I0213 19:49:41.699001 3461 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:49:41.699200 kubelet[3461]: I0213 19:49:41.699182 3461 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:49:41.699364 kubelet[3461]: I0213 19:49:41.699344 3461 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:41.699777 kubelet[3461]: I0213 19:49:41.699686 3461 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:49:41.699777 kubelet[3461]: I0213 19:49:41.699716 3461 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:49:41.700481 kubelet[3461]: I0213 19:49:41.700245 3461 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:49:41.700843 kubelet[3461]: I0213 19:49:41.700598 3461 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:49:41.702506 kubelet[3461]: I0213 19:49:41.702469 3461 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:49:41.702968 kubelet[3461]: I0213 19:49:41.702943 3461 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:49:41.703834 kubelet[3461]: I0213 19:49:41.703784 3461 server.go:1264] "Started kubelet" Feb 13 19:49:41.704331 sudo[3474]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:49:41.704991 sudo[3474]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:49:41.723165 kubelet[3461]: I0213 19:49:41.715138 3461 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:49:41.723165 kubelet[3461]: I0213 19:49:41.718340 3461 ratelimit.go:55] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:49:41.723165 kubelet[3461]: I0213 19:49:41.718810 3461 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:49:41.728037 kubelet[3461]: I0213 19:49:41.727973 3461 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:49:41.738651 kubelet[3461]: I0213 19:49:41.738604 3461 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:49:41.765280 kubelet[3461]: I0213 19:49:41.764144 3461 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:49:41.769499 kubelet[3461]: I0213 19:49:41.769449 3461 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:49:41.769770 kubelet[3461]: I0213 19:49:41.769753 3461 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:49:41.802440 kubelet[3461]: I0213 19:49:41.800791 3461 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:49:41.804282 kubelet[3461]: I0213 19:49:41.804249 3461 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:49:41.806114 kubelet[3461]: I0213 19:49:41.805346 3461 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:49:41.811013 kubelet[3461]: I0213 19:49:41.810948 3461 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:49:41.815455 kubelet[3461]: I0213 19:49:41.815402 3461 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:49:41.815577 kubelet[3461]: I0213 19:49:41.815468 3461 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:49:41.815577 kubelet[3461]: I0213 19:49:41.815501 3461 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:49:41.815696 kubelet[3461]: E0213 19:49:41.815568 3461 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:49:41.893102 kubelet[3461]: I0213 19:49:41.893044 3461 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-210" Feb 13 19:49:41.922666 kubelet[3461]: E0213 19:49:41.922614 3461 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:49:41.923483 kubelet[3461]: I0213 19:49:41.923442 3461 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-20-210" Feb 13 19:49:41.923708 kubelet[3461]: I0213 19:49:41.923571 3461 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-210" Feb 13 19:49:42.072392 kubelet[3461]: I0213 19:49:42.071800 3461 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:49:42.072392 kubelet[3461]: I0213 19:49:42.071834 3461 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:49:42.072392 kubelet[3461]: I0213 19:49:42.071870 3461 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:42.072392 kubelet[3461]: I0213 19:49:42.072135 3461 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:49:42.072392 kubelet[3461]: I0213 19:49:42.072156 3461 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:49:42.072392 kubelet[3461]: I0213 19:49:42.072192 3461 policy_none.go:49] "None policy: Start" Feb 13 19:49:42.076132 kubelet[3461]: I0213 19:49:42.075982 3461 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:49:42.076281 kubelet[3461]: I0213 
19:49:42.076139 3461 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:49:42.076794 kubelet[3461]: I0213 19:49:42.076760 3461 state_mem.go:75] "Updated machine memory state" Feb 13 19:49:42.082375 kubelet[3461]: I0213 19:49:42.082325 3461 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:49:42.084004 kubelet[3461]: I0213 19:49:42.082981 3461 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:49:42.084535 kubelet[3461]: I0213 19:49:42.084495 3461 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:49:42.124636 kubelet[3461]: I0213 19:49:42.124096 3461 topology_manager.go:215] "Topology Admit Handler" podUID="e758b84d99a39aacb9ad87ba8f95df68" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-210" Feb 13 19:49:42.124636 kubelet[3461]: I0213 19:49:42.124336 3461 topology_manager.go:215] "Topology Admit Handler" podUID="4c23436c5d01a804db9bacc495002d6d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-210" Feb 13 19:49:42.126133 kubelet[3461]: I0213 19:49:42.125621 3461 topology_manager.go:215] "Topology Admit Handler" podUID="d894a7582a23c2e824f1b85d6a89d6e5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-210" Feb 13 19:49:42.178981 kubelet[3461]: I0213 19:49:42.178834 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d894a7582a23c2e824f1b85d6a89d6e5-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-210\" (UID: \"d894a7582a23c2e824f1b85d6a89d6e5\") " pod="kube-system/kube-scheduler-ip-172-31-20-210" Feb 13 19:49:42.178981 kubelet[3461]: I0213 19:49:42.178917 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e758b84d99a39aacb9ad87ba8f95df68-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-210\" (UID: \"e758b84d99a39aacb9ad87ba8f95df68\") " pod="kube-system/kube-apiserver-ip-172-31-20-210" Feb 13 19:49:42.179271 kubelet[3461]: I0213 19:49:42.179028 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c23436c5d01a804db9bacc495002d6d-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-210\" (UID: \"4c23436c5d01a804db9bacc495002d6d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-210" Feb 13 19:49:42.179271 kubelet[3461]: I0213 19:49:42.179116 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c23436c5d01a804db9bacc495002d6d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-210\" (UID: \"4c23436c5d01a804db9bacc495002d6d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-210" Feb 13 19:49:42.179271 kubelet[3461]: I0213 19:49:42.179155 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c23436c5d01a804db9bacc495002d6d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-210\" (UID: \"4c23436c5d01a804db9bacc495002d6d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-210" Feb 13 19:49:42.181046 kubelet[3461]: I0213 19:49:42.179193 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c23436c5d01a804db9bacc495002d6d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-210\" (UID: \"4c23436c5d01a804db9bacc495002d6d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-210" Feb 13 19:49:42.181674 kubelet[3461]: I0213 19:49:42.181411 3461 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e758b84d99a39aacb9ad87ba8f95df68-ca-certs\") pod \"kube-apiserver-ip-172-31-20-210\" (UID: \"e758b84d99a39aacb9ad87ba8f95df68\") " pod="kube-system/kube-apiserver-ip-172-31-20-210" Feb 13 19:49:42.181892 kubelet[3461]: I0213 19:49:42.181809 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e758b84d99a39aacb9ad87ba8f95df68-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-210\" (UID: \"e758b84d99a39aacb9ad87ba8f95df68\") " pod="kube-system/kube-apiserver-ip-172-31-20-210" Feb 13 19:49:42.181990 kubelet[3461]: I0213 19:49:42.181925 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c23436c5d01a804db9bacc495002d6d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-210\" (UID: \"4c23436c5d01a804db9bacc495002d6d\") " pod="kube-system/kube-controller-manager-ip-172-31-20-210" Feb 13 19:49:42.594412 update_engine[2116]: I20250213 19:49:42.594120 2116 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:49:42.700722 sudo[3474]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:42.702349 kubelet[3461]: I0213 19:49:42.701942 3461 apiserver.go:52] "Watching apiserver" Feb 13 19:49:42.707139 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3510) Feb 13 19:49:42.771302 kubelet[3461]: I0213 19:49:42.771245 3461 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:49:42.976680 kubelet[3461]: E0213 19:49:42.976619 3461 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-210\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-210" Feb 13 19:49:43.066620 kubelet[3461]: I0213 19:49:43.066530 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-210" podStartSLOduration=1.066507528 podStartE2EDuration="1.066507528s" podCreationTimestamp="2025-02-13 19:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:43.044671092 +0000 UTC m=+1.477095692" watchObservedRunningTime="2025-02-13 19:49:43.066507528 +0000 UTC m=+1.498932116" Feb 13 19:49:43.090694 kubelet[3461]: I0213 19:49:43.090380 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-210" podStartSLOduration=1.090357564 podStartE2EDuration="1.090357564s" podCreationTimestamp="2025-02-13 19:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:43.066911316 +0000 UTC m=+1.499335892" watchObservedRunningTime="2025-02-13 19:49:43.090357564 +0000 UTC m=+1.522782164" Feb 13 19:49:45.111299 sudo[2492]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:45.135656 sshd[2488]: pam_unix(sshd:session): 
session closed for user core Feb 13 19:49:45.143145 systemd-logind[2111]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:49:45.144096 systemd[1]: sshd@6-172.31.20.210:22-139.178.89.65:50960.service: Deactivated successfully. Feb 13 19:49:45.152019 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:49:45.156731 systemd-logind[2111]: Removed session 7. Feb 13 19:49:47.485140 kubelet[3461]: I0213 19:49:47.485027 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-210" podStartSLOduration=5.484984098 podStartE2EDuration="5.484984098s" podCreationTimestamp="2025-02-13 19:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:43.092650956 +0000 UTC m=+1.525075568" watchObservedRunningTime="2025-02-13 19:49:47.484984098 +0000 UTC m=+5.917408662" Feb 13 19:49:55.046101 kubelet[3461]: I0213 19:49:55.045817 3461 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:49:55.049185 containerd[2141]: time="2025-02-13T19:49:55.048931008Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:49:55.052216 kubelet[3461]: I0213 19:49:55.050858 3461 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:49:55.131129 kubelet[3461]: I0213 19:49:55.129800 3461 topology_manager.go:215] "Topology Admit Handler" podUID="11f0e9d6-21cf-4eef-af87-01fc702026f2" podNamespace="kube-system" podName="kube-proxy-fqcc8" Feb 13 19:49:55.146141 kubelet[3461]: W0213 19:49:55.145369 3461 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-20-210" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-210' and this object Feb 13 19:49:55.146141 kubelet[3461]: E0213 19:49:55.145461 3461 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-20-210" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-210' and this object Feb 13 19:49:55.166131 kubelet[3461]: I0213 19:49:55.164765 3461 topology_manager.go:215] "Topology Admit Handler" podUID="cdbe95b8-cf2c-41f5-a31c-06225bc35243" podNamespace="kube-system" podName="cilium-qscnh" Feb 13 19:49:55.172162 kubelet[3461]: I0213 19:49:55.170140 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-xtables-lock\") pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.172162 kubelet[3461]: I0213 19:49:55.170339 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/11f0e9d6-21cf-4eef-af87-01fc702026f2-lib-modules\") pod \"kube-proxy-fqcc8\" (UID: \"11f0e9d6-21cf-4eef-af87-01fc702026f2\") " pod="kube-system/kube-proxy-fqcc8" Feb 13 19:49:55.172162 kubelet[3461]: I0213 19:49:55.170422 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11f0e9d6-21cf-4eef-af87-01fc702026f2-xtables-lock\") pod \"kube-proxy-fqcc8\" (UID: \"11f0e9d6-21cf-4eef-af87-01fc702026f2\") " pod="kube-system/kube-proxy-fqcc8" Feb 13 19:49:55.172162 kubelet[3461]: I0213 19:49:55.170502 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cilium-config-path\") pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.172162 kubelet[3461]: I0213 19:49:55.170576 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-host-proc-sys-net\") pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.172662 kubelet[3461]: I0213 19:49:55.170678 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cdbe95b8-cf2c-41f5-a31c-06225bc35243-clustermesh-secrets\") pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.172662 kubelet[3461]: I0213 19:49:55.170729 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hn8f\" (UniqueName: \"kubernetes.io/projected/cdbe95b8-cf2c-41f5-a31c-06225bc35243-kube-api-access-5hn8f\") 
pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.172662 kubelet[3461]: I0213 19:49:55.170801 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd2d4\" (UniqueName: \"kubernetes.io/projected/11f0e9d6-21cf-4eef-af87-01fc702026f2-kube-api-access-kd2d4\") pod \"kube-proxy-fqcc8\" (UID: \"11f0e9d6-21cf-4eef-af87-01fc702026f2\") " pod="kube-system/kube-proxy-fqcc8" Feb 13 19:49:55.172662 kubelet[3461]: I0213 19:49:55.170870 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11f0e9d6-21cf-4eef-af87-01fc702026f2-kube-proxy\") pod \"kube-proxy-fqcc8\" (UID: \"11f0e9d6-21cf-4eef-af87-01fc702026f2\") " pod="kube-system/kube-proxy-fqcc8" Feb 13 19:49:55.172662 kubelet[3461]: I0213 19:49:55.170944 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-hostproc\") pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.172970 kubelet[3461]: I0213 19:49:55.171058 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-lib-modules\") pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.172970 kubelet[3461]: I0213 19:49:55.171173 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-bpf-maps\") pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.172970 
kubelet[3461]: I0213 19:49:55.171242 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cilium-cgroup\") pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.172970 kubelet[3461]: I0213 19:49:55.171306 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-etc-cni-netd\") pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.172970 kubelet[3461]: I0213 19:49:55.171345 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-host-proc-sys-kernel\") pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.172970 kubelet[3461]: I0213 19:49:55.171413 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cdbe95b8-cf2c-41f5-a31c-06225bc35243-hubble-tls\") pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.174628 kubelet[3461]: I0213 19:49:55.171501 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cilium-run\") pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.174628 kubelet[3461]: I0213 19:49:55.171571 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cni-path\") pod \"cilium-qscnh\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " pod="kube-system/cilium-qscnh" Feb 13 19:49:55.195716 kubelet[3461]: W0213 19:49:55.194836 3461 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-20-210" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-210' and this object Feb 13 19:49:55.195716 kubelet[3461]: E0213 19:49:55.194902 3461 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-20-210" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-210' and this object Feb 13 19:49:55.195716 kubelet[3461]: W0213 19:49:55.194996 3461 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-20-210" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-210' and this object Feb 13 19:49:55.195716 kubelet[3461]: E0213 19:49:55.195051 3461 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-20-210" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-210' and this object Feb 13 19:49:55.269511 kubelet[3461]: I0213 19:49:55.269276 3461 topology_manager.go:215] "Topology Admit Handler" podUID="7c00f9b0-6593-4ac6-bc4e-60d7933bd48a" podNamespace="kube-system" 
podName="cilium-operator-599987898-7sckp" Feb 13 19:49:55.375682 kubelet[3461]: I0213 19:49:55.375310 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqv2k\" (UniqueName: \"kubernetes.io/projected/7c00f9b0-6593-4ac6-bc4e-60d7933bd48a-kube-api-access-qqv2k\") pod \"cilium-operator-599987898-7sckp\" (UID: \"7c00f9b0-6593-4ac6-bc4e-60d7933bd48a\") " pod="kube-system/cilium-operator-599987898-7sckp" Feb 13 19:49:55.375682 kubelet[3461]: I0213 19:49:55.375409 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c00f9b0-6593-4ac6-bc4e-60d7933bd48a-cilium-config-path\") pod \"cilium-operator-599987898-7sckp\" (UID: \"7c00f9b0-6593-4ac6-bc4e-60d7933bd48a\") " pod="kube-system/cilium-operator-599987898-7sckp" Feb 13 19:49:56.199818 containerd[2141]: time="2025-02-13T19:49:56.199411429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7sckp,Uid:7c00f9b0-6593-4ac6-bc4e-60d7933bd48a,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:56.258977 containerd[2141]: time="2025-02-13T19:49:56.258388250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:56.258977 containerd[2141]: time="2025-02-13T19:49:56.258501746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:56.258977 containerd[2141]: time="2025-02-13T19:49:56.258564650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:56.258977 containerd[2141]: time="2025-02-13T19:49:56.258784478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:56.276140 kubelet[3461]: E0213 19:49:56.275716 3461 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 13 19:49:56.276140 kubelet[3461]: E0213 19:49:56.275774 3461 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-qscnh: failed to sync secret cache: timed out waiting for the condition Feb 13 19:49:56.276140 kubelet[3461]: E0213 19:49:56.275878 3461 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdbe95b8-cf2c-41f5-a31c-06225bc35243-hubble-tls podName:cdbe95b8-cf2c-41f5-a31c-06225bc35243 nodeName:}" failed. No retries permitted until 2025-02-13 19:49:56.77584591 +0000 UTC m=+15.208270486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/cdbe95b8-cf2c-41f5-a31c-06225bc35243-hubble-tls") pod "cilium-qscnh" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243") : failed to sync secret cache: timed out waiting for the condition Feb 13 19:49:56.282950 kubelet[3461]: E0213 19:49:56.280375 3461 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 13 19:49:56.282950 kubelet[3461]: E0213 19:49:56.280488 3461 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdbe95b8-cf2c-41f5-a31c-06225bc35243-clustermesh-secrets podName:cdbe95b8-cf2c-41f5-a31c-06225bc35243 nodeName:}" failed. No retries permitted until 2025-02-13 19:49:56.780456994 +0000 UTC m=+15.212881570 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/cdbe95b8-cf2c-41f5-a31c-06225bc35243-clustermesh-secrets") pod "cilium-qscnh" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243") : failed to sync secret cache: timed out waiting for the condition Feb 13 19:49:56.352145 containerd[2141]: time="2025-02-13T19:49:56.352028726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fqcc8,Uid:11f0e9d6-21cf-4eef-af87-01fc702026f2,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:56.372783 containerd[2141]: time="2025-02-13T19:49:56.371878790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7sckp,Uid:7c00f9b0-6593-4ac6-bc4e-60d7933bd48a,Namespace:kube-system,Attempt:0,} returns sandbox id \"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\"" Feb 13 19:49:56.376063 containerd[2141]: time="2025-02-13T19:49:56.375762350Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:49:56.404680 containerd[2141]: time="2025-02-13T19:49:56.404010542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:56.404680 containerd[2141]: time="2025-02-13T19:49:56.404163626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:56.404680 containerd[2141]: time="2025-02-13T19:49:56.404202902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:56.404680 containerd[2141]: time="2025-02-13T19:49:56.404388446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:56.475258 containerd[2141]: time="2025-02-13T19:49:56.475042815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fqcc8,Uid:11f0e9d6-21cf-4eef-af87-01fc702026f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"696da995e90d9afdc605a1728d831d3157cd65a0866d1724aaecc0fdf8f4a3c5\"" Feb 13 19:49:56.485041 containerd[2141]: time="2025-02-13T19:49:56.484811979Z" level=info msg="CreateContainer within sandbox \"696da995e90d9afdc605a1728d831d3157cd65a0866d1724aaecc0fdf8f4a3c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:49:56.518642 containerd[2141]: time="2025-02-13T19:49:56.518551203Z" level=info msg="CreateContainer within sandbox \"696da995e90d9afdc605a1728d831d3157cd65a0866d1724aaecc0fdf8f4a3c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b4369358964ffb97f992052d7c4b3e806a4b44563ffd9c2d379c8299b7e367ba\"" Feb 13 19:49:56.522129 containerd[2141]: time="2025-02-13T19:49:56.520539483Z" level=info msg="StartContainer for \"b4369358964ffb97f992052d7c4b3e806a4b44563ffd9c2d379c8299b7e367ba\"" Feb 13 19:49:56.639234 containerd[2141]: time="2025-02-13T19:49:56.639062884Z" level=info msg="StartContainer for \"b4369358964ffb97f992052d7c4b3e806a4b44563ffd9c2d379c8299b7e367ba\" returns successfully" Feb 13 19:49:56.988830 containerd[2141]: time="2025-02-13T19:49:56.988738445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qscnh,Uid:cdbe95b8-cf2c-41f5-a31c-06225bc35243,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:57.026462 kubelet[3461]: I0213 19:49:57.025825 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fqcc8" podStartSLOduration=2.02580167 podStartE2EDuration="2.02580167s" podCreationTimestamp="2025-02-13 19:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-02-13 19:49:57.025701314 +0000 UTC m=+15.458125938" watchObservedRunningTime="2025-02-13 19:49:57.02580167 +0000 UTC m=+15.458226258" Feb 13 19:49:57.123389 containerd[2141]: time="2025-02-13T19:49:57.120686558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:57.125380 containerd[2141]: time="2025-02-13T19:49:57.125261954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:57.126249 containerd[2141]: time="2025-02-13T19:49:57.126058454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:57.127797 containerd[2141]: time="2025-02-13T19:49:57.127565150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:57.236175 containerd[2141]: time="2025-02-13T19:49:57.236056023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qscnh,Uid:cdbe95b8-cf2c-41f5-a31c-06225bc35243,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\"" Feb 13 19:49:58.096021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1407631036.mount: Deactivated successfully. 
Feb 13 19:49:59.009122 containerd[2141]: time="2025-02-13T19:49:59.008343531Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:59.011327 containerd[2141]: time="2025-02-13T19:49:59.011233899Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:49:59.014019 containerd[2141]: time="2025-02-13T19:49:59.013908507Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:59.018450 containerd[2141]: time="2025-02-13T19:49:59.018241983Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.642411785s" Feb 13 19:49:59.018450 containerd[2141]: time="2025-02-13T19:49:59.018309267Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:49:59.024279 containerd[2141]: time="2025-02-13T19:49:59.021354303Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:49:59.025888 containerd[2141]: time="2025-02-13T19:49:59.025772871Z" level=info msg="CreateContainer within sandbox 
\"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:49:59.056574 containerd[2141]: time="2025-02-13T19:49:59.056508148Z" level=info msg="CreateContainer within sandbox \"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"15fafb3abbcd896b1c47dae5a5d48e6d2700c99c02c88339c208e15d426a898e\"" Feb 13 19:49:59.057690 containerd[2141]: time="2025-02-13T19:49:59.057624664Z" level=info msg="StartContainer for \"15fafb3abbcd896b1c47dae5a5d48e6d2700c99c02c88339c208e15d426a898e\"" Feb 13 19:49:59.181540 containerd[2141]: time="2025-02-13T19:49:59.181452952Z" level=info msg="StartContainer for \"15fafb3abbcd896b1c47dae5a5d48e6d2700c99c02c88339c208e15d426a898e\" returns successfully" Feb 13 19:50:00.076118 kubelet[3461]: I0213 19:50:00.075766 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-7sckp" podStartSLOduration=2.427622212 podStartE2EDuration="5.073435985s" podCreationTimestamp="2025-02-13 19:49:55 +0000 UTC" firstStartedPulling="2025-02-13 19:49:56.37481231 +0000 UTC m=+14.807236874" lastFinishedPulling="2025-02-13 19:49:59.020626083 +0000 UTC m=+17.453050647" observedRunningTime="2025-02-13 19:50:00.072924029 +0000 UTC m=+18.505348689" watchObservedRunningTime="2025-02-13 19:50:00.073435985 +0000 UTC m=+18.505860657" Feb 13 19:50:05.207584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521167710.mount: Deactivated successfully. 
Feb 13 19:50:08.083895 containerd[2141]: time="2025-02-13T19:50:08.083787408Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:08.086039 containerd[2141]: time="2025-02-13T19:50:08.085936116Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:50:08.087227 containerd[2141]: time="2025-02-13T19:50:08.087032665Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:50:08.093326 containerd[2141]: time="2025-02-13T19:50:08.093245473Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.071823358s" Feb 13 19:50:08.093622 containerd[2141]: time="2025-02-13T19:50:08.093331573Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:50:08.098209 containerd[2141]: time="2025-02-13T19:50:08.097941517Z" level=info msg="CreateContainer within sandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:50:08.124630 containerd[2141]: time="2025-02-13T19:50:08.124430893Z" level=info msg="CreateContainer within sandbox 
\"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c8079bbf93ec9012e0bfee07ab594fad5fe81f32964a04e01da29474402470c7\"" Feb 13 19:50:08.126258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1716458725.mount: Deactivated successfully. Feb 13 19:50:08.127654 containerd[2141]: time="2025-02-13T19:50:08.127450177Z" level=info msg="StartContainer for \"c8079bbf93ec9012e0bfee07ab594fad5fe81f32964a04e01da29474402470c7\"" Feb 13 19:50:08.255369 containerd[2141]: time="2025-02-13T19:50:08.255284281Z" level=info msg="StartContainer for \"c8079bbf93ec9012e0bfee07ab594fad5fe81f32964a04e01da29474402470c7\" returns successfully" Feb 13 19:50:09.112005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8079bbf93ec9012e0bfee07ab594fad5fe81f32964a04e01da29474402470c7-rootfs.mount: Deactivated successfully. Feb 13 19:50:09.161339 containerd[2141]: time="2025-02-13T19:50:09.161250722Z" level=info msg="shim disconnected" id=c8079bbf93ec9012e0bfee07ab594fad5fe81f32964a04e01da29474402470c7 namespace=k8s.io Feb 13 19:50:09.163385 containerd[2141]: time="2025-02-13T19:50:09.161881802Z" level=warning msg="cleaning up after shim disconnected" id=c8079bbf93ec9012e0bfee07ab594fad5fe81f32964a04e01da29474402470c7 namespace=k8s.io Feb 13 19:50:09.163385 containerd[2141]: time="2025-02-13T19:50:09.161954198Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:50:10.075809 containerd[2141]: time="2025-02-13T19:50:10.075445874Z" level=info msg="CreateContainer within sandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:50:10.103956 containerd[2141]: time="2025-02-13T19:50:10.103745871Z" level=info msg="CreateContainer within sandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f9a0640393cc0983c37d2d3f218021c3f0e452410d87bb7ce9062797509338e8\"" Feb 13 19:50:10.109494 containerd[2141]: time="2025-02-13T19:50:10.108577239Z" level=info msg="StartContainer for \"f9a0640393cc0983c37d2d3f218021c3f0e452410d87bb7ce9062797509338e8\"" Feb 13 19:50:10.190583 systemd[1]: run-containerd-runc-k8s.io-f9a0640393cc0983c37d2d3f218021c3f0e452410d87bb7ce9062797509338e8-runc.VrfB9b.mount: Deactivated successfully. Feb 13 19:50:10.246347 containerd[2141]: time="2025-02-13T19:50:10.246276867Z" level=info msg="StartContainer for \"f9a0640393cc0983c37d2d3f218021c3f0e452410d87bb7ce9062797509338e8\" returns successfully" Feb 13 19:50:10.268899 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:50:10.269669 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:50:10.272462 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:50:10.285733 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:50:10.344164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9a0640393cc0983c37d2d3f218021c3f0e452410d87bb7ce9062797509338e8-rootfs.mount: Deactivated successfully. Feb 13 19:50:10.353651 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:50:10.355450 containerd[2141]: time="2025-02-13T19:50:10.353352844Z" level=info msg="shim disconnected" id=f9a0640393cc0983c37d2d3f218021c3f0e452410d87bb7ce9062797509338e8 namespace=k8s.io Feb 13 19:50:10.355450 containerd[2141]: time="2025-02-13T19:50:10.353999560Z" level=warning msg="cleaning up after shim disconnected" id=f9a0640393cc0983c37d2d3f218021c3f0e452410d87bb7ce9062797509338e8 namespace=k8s.io Feb 13 19:50:10.355450 containerd[2141]: time="2025-02-13T19:50:10.354158776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:50:11.083457 containerd[2141]: time="2025-02-13T19:50:11.083273847Z" level=info msg="CreateContainer within sandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:50:11.111142 containerd[2141]: time="2025-02-13T19:50:11.110518996Z" level=info msg="CreateContainer within sandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"44f0fa52a946f8f25e5f92b038a26948b5df5f49d45772baea86f1a0a04560e6\"" Feb 13 19:50:11.120325 containerd[2141]: time="2025-02-13T19:50:11.113784940Z" level=info msg="StartContainer for \"44f0fa52a946f8f25e5f92b038a26948b5df5f49d45772baea86f1a0a04560e6\"" Feb 13 19:50:11.145020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3728557458.mount: Deactivated successfully. Feb 13 19:50:11.249158 containerd[2141]: time="2025-02-13T19:50:11.249022444Z" level=info msg="StartContainer for \"44f0fa52a946f8f25e5f92b038a26948b5df5f49d45772baea86f1a0a04560e6\" returns successfully" Feb 13 19:50:11.298367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44f0fa52a946f8f25e5f92b038a26948b5df5f49d45772baea86f1a0a04560e6-rootfs.mount: Deactivated successfully. 
Feb 13 19:50:11.304038 containerd[2141]: time="2025-02-13T19:50:11.303852076Z" level=info msg="shim disconnected" id=44f0fa52a946f8f25e5f92b038a26948b5df5f49d45772baea86f1a0a04560e6 namespace=k8s.io Feb 13 19:50:11.304038 containerd[2141]: time="2025-02-13T19:50:11.303935896Z" level=warning msg="cleaning up after shim disconnected" id=44f0fa52a946f8f25e5f92b038a26948b5df5f49d45772baea86f1a0a04560e6 namespace=k8s.io Feb 13 19:50:11.304038 containerd[2141]: time="2025-02-13T19:50:11.303959404Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:50:12.091259 containerd[2141]: time="2025-02-13T19:50:12.091160380Z" level=info msg="CreateContainer within sandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:50:12.127682 containerd[2141]: time="2025-02-13T19:50:12.127553993Z" level=info msg="CreateContainer within sandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5cf62719f252078ab6b5da76396588d2017d298b4e58b8d6335a634b77dff819\"" Feb 13 19:50:12.128503 containerd[2141]: time="2025-02-13T19:50:12.128354621Z" level=info msg="StartContainer for \"5cf62719f252078ab6b5da76396588d2017d298b4e58b8d6335a634b77dff819\"" Feb 13 19:50:12.146454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2874192162.mount: Deactivated successfully. Feb 13 19:50:12.242292 containerd[2141]: time="2025-02-13T19:50:12.242225801Z" level=info msg="StartContainer for \"5cf62719f252078ab6b5da76396588d2017d298b4e58b8d6335a634b77dff819\" returns successfully" Feb 13 19:50:12.297445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cf62719f252078ab6b5da76396588d2017d298b4e58b8d6335a634b77dff819-rootfs.mount: Deactivated successfully. 
Feb 13 19:50:12.298509 containerd[2141]: time="2025-02-13T19:50:12.298254473Z" level=info msg="shim disconnected" id=5cf62719f252078ab6b5da76396588d2017d298b4e58b8d6335a634b77dff819 namespace=k8s.io Feb 13 19:50:12.298509 containerd[2141]: time="2025-02-13T19:50:12.298418513Z" level=warning msg="cleaning up after shim disconnected" id=5cf62719f252078ab6b5da76396588d2017d298b4e58b8d6335a634b77dff819 namespace=k8s.io Feb 13 19:50:12.298509 containerd[2141]: time="2025-02-13T19:50:12.298452521Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:50:13.098206 containerd[2141]: time="2025-02-13T19:50:13.096305741Z" level=info msg="CreateContainer within sandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:50:13.124151 containerd[2141]: time="2025-02-13T19:50:13.123444702Z" level=info msg="CreateContainer within sandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c\"" Feb 13 19:50:13.127188 containerd[2141]: time="2025-02-13T19:50:13.126173898Z" level=info msg="StartContainer for \"bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c\"" Feb 13 19:50:13.147477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount88344939.mount: Deactivated successfully. Feb 13 19:50:13.261010 containerd[2141]: time="2025-02-13T19:50:13.260909778Z" level=info msg="StartContainer for \"bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c\" returns successfully" Feb 13 19:50:13.316278 systemd[1]: run-containerd-runc-k8s.io-bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c-runc.md5hxW.mount: Deactivated successfully. 
Feb 13 19:50:13.439714 kubelet[3461]: I0213 19:50:13.439627 3461 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:50:13.496640 kubelet[3461]: I0213 19:50:13.496555 3461 topology_manager.go:215] "Topology Admit Handler" podUID="0d5c9f79-39d3-4126-9c5d-4b7c7f4d1db8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sxhsj" Feb 13 19:50:13.501352 kubelet[3461]: I0213 19:50:13.501277 3461 topology_manager.go:215] "Topology Admit Handler" podUID="3f456e4c-2bc6-4f21-8c97-0b034f913878" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tvg59" Feb 13 19:50:13.541348 kubelet[3461]: I0213 19:50:13.539718 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d5c9f79-39d3-4126-9c5d-4b7c7f4d1db8-config-volume\") pod \"coredns-7db6d8ff4d-sxhsj\" (UID: \"0d5c9f79-39d3-4126-9c5d-4b7c7f4d1db8\") " pod="kube-system/coredns-7db6d8ff4d-sxhsj" Feb 13 19:50:13.541348 kubelet[3461]: I0213 19:50:13.539807 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4f4s\" (UniqueName: \"kubernetes.io/projected/3f456e4c-2bc6-4f21-8c97-0b034f913878-kube-api-access-q4f4s\") pod \"coredns-7db6d8ff4d-tvg59\" (UID: \"3f456e4c-2bc6-4f21-8c97-0b034f913878\") " pod="kube-system/coredns-7db6d8ff4d-tvg59" Feb 13 19:50:13.541348 kubelet[3461]: I0213 19:50:13.539866 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s9dj\" (UniqueName: \"kubernetes.io/projected/0d5c9f79-39d3-4126-9c5d-4b7c7f4d1db8-kube-api-access-4s9dj\") pod \"coredns-7db6d8ff4d-sxhsj\" (UID: \"0d5c9f79-39d3-4126-9c5d-4b7c7f4d1db8\") " pod="kube-system/coredns-7db6d8ff4d-sxhsj" Feb 13 19:50:13.541348 kubelet[3461]: I0213 19:50:13.539910 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/3f456e4c-2bc6-4f21-8c97-0b034f913878-config-volume\") pod \"coredns-7db6d8ff4d-tvg59\" (UID: \"3f456e4c-2bc6-4f21-8c97-0b034f913878\") " pod="kube-system/coredns-7db6d8ff4d-tvg59" Feb 13 19:50:13.828880 containerd[2141]: time="2025-02-13T19:50:13.827032257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sxhsj,Uid:0d5c9f79-39d3-4126-9c5d-4b7c7f4d1db8,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:13.840853 containerd[2141]: time="2025-02-13T19:50:13.839924001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tvg59,Uid:3f456e4c-2bc6-4f21-8c97-0b034f913878,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:16.068029 systemd-networkd[1683]: cilium_host: Link UP Feb 13 19:50:16.070336 systemd-networkd[1683]: cilium_net: Link UP Feb 13 19:50:16.070752 systemd-networkd[1683]: cilium_net: Gained carrier Feb 13 19:50:16.071161 systemd-networkd[1683]: cilium_host: Gained carrier Feb 13 19:50:16.075913 (udev-worker)[4373]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:16.079532 (udev-worker)[4375]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:16.254665 systemd-networkd[1683]: cilium_vxlan: Link UP Feb 13 19:50:16.254686 systemd-networkd[1683]: cilium_vxlan: Gained carrier Feb 13 19:50:16.738135 kernel: NET: Registered PF_ALG protocol family Feb 13 19:50:16.777313 systemd-networkd[1683]: cilium_net: Gained IPv6LL Feb 13 19:50:16.777755 systemd-networkd[1683]: cilium_host: Gained IPv6LL Feb 13 19:50:17.864664 systemd-networkd[1683]: cilium_vxlan: Gained IPv6LL Feb 13 19:50:18.092949 (udev-worker)[4339]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:50:18.097128 systemd-networkd[1683]: lxc_health: Link UP Feb 13 19:50:18.113024 systemd-networkd[1683]: lxc_health: Gained carrier Feb 13 19:50:18.492931 systemd-networkd[1683]: lxca28c100274e7: Link UP Feb 13 19:50:18.498119 kernel: eth0: renamed from tmpfc278 Feb 13 19:50:18.506264 systemd-networkd[1683]: lxc2d407d7502f5: Link UP Feb 13 19:50:18.513157 kernel: eth0: renamed from tmpcd1ce Feb 13 19:50:18.517253 (udev-worker)[4708]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:18.527230 systemd-networkd[1683]: lxca28c100274e7: Gained carrier Feb 13 19:50:18.533050 systemd-networkd[1683]: lxc2d407d7502f5: Gained carrier Feb 13 19:50:19.030163 kubelet[3461]: I0213 19:50:19.027914 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qscnh" podStartSLOduration=13.173094793 podStartE2EDuration="24.027888527s" podCreationTimestamp="2025-02-13 19:49:55 +0000 UTC" firstStartedPulling="2025-02-13 19:49:57.240191679 +0000 UTC m=+15.672616255" lastFinishedPulling="2025-02-13 19:50:08.094985413 +0000 UTC m=+26.527409989" observedRunningTime="2025-02-13 19:50:14.152890207 +0000 UTC m=+32.585314807" watchObservedRunningTime="2025-02-13 19:50:19.027888527 +0000 UTC m=+37.460313103" Feb 13 19:50:19.720352 systemd-networkd[1683]: lxc_health: Gained IPv6LL Feb 13 19:50:19.913420 systemd-networkd[1683]: lxca28c100274e7: Gained IPv6LL Feb 13 19:50:20.488391 systemd-networkd[1683]: lxc2d407d7502f5: Gained IPv6LL Feb 13 19:50:22.765318 ntpd[2091]: Listen normally on 6 cilium_host 192.168.0.178:123 Feb 13 19:50:22.766719 ntpd[2091]: 13 Feb 19:50:22 ntpd[2091]: Listen normally on 6 cilium_host 192.168.0.178:123 Feb 13 19:50:22.766719 ntpd[2091]: 13 Feb 19:50:22 ntpd[2091]: Listen normally on 7 cilium_net [fe80::18fb:80ff:feb1:a20f%4]:123 Feb 13 19:50:22.766719 ntpd[2091]: 13 Feb 19:50:22 ntpd[2091]: Listen normally on 8 cilium_host [fe80::1ce1:dbff:fedb:4538%5]:123 Feb 13 19:50:22.766719 ntpd[2091]: 13 
Feb 19:50:22 ntpd[2091]: Listen normally on 9 cilium_vxlan [fe80::3cab:bdff:fe7f:c7dd%6]:123 Feb 13 19:50:22.766719 ntpd[2091]: 13 Feb 19:50:22 ntpd[2091]: Listen normally on 10 lxc_health [fe80::1452:2dff:feee:512a%8]:123 Feb 13 19:50:22.766719 ntpd[2091]: 13 Feb 19:50:22 ntpd[2091]: Listen normally on 11 lxca28c100274e7 [fe80::301a:cbff:fe46:37a3%10]:123 Feb 13 19:50:22.766719 ntpd[2091]: 13 Feb 19:50:22 ntpd[2091]: Listen normally on 12 lxc2d407d7502f5 [fe80::a0c3:22ff:fe65:4b0c%12]:123 Feb 13 19:50:22.765474 ntpd[2091]: Listen normally on 7 cilium_net [fe80::18fb:80ff:feb1:a20f%4]:123 Feb 13 19:50:22.765567 ntpd[2091]: Listen normally on 8 cilium_host [fe80::1ce1:dbff:fedb:4538%5]:123 Feb 13 19:50:22.765650 ntpd[2091]: Listen normally on 9 cilium_vxlan [fe80::3cab:bdff:fe7f:c7dd%6]:123 Feb 13 19:50:22.765720 ntpd[2091]: Listen normally on 10 lxc_health [fe80::1452:2dff:feee:512a%8]:123 Feb 13 19:50:22.765793 ntpd[2091]: Listen normally on 11 lxca28c100274e7 [fe80::301a:cbff:fe46:37a3%10]:123 Feb 13 19:50:22.765876 ntpd[2091]: Listen normally on 12 lxc2d407d7502f5 [fe80::a0c3:22ff:fe65:4b0c%12]:123 Feb 13 19:50:24.804640 systemd[1]: Started sshd@7-172.31.20.210:22-139.178.89.65:40480.service - OpenSSH per-connection server daemon (139.178.89.65:40480). Feb 13 19:50:24.986406 sshd[4738]: Accepted publickey for core from 139.178.89.65 port 40480 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:24.989544 sshd[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:25.000334 systemd-logind[2111]: New session 8 of user core. Feb 13 19:50:25.011671 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:50:25.351483 sshd[4738]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:25.362230 systemd[1]: sshd@7-172.31.20.210:22-139.178.89.65:40480.service: Deactivated successfully. Feb 13 19:50:25.375467 systemd[1]: session-8.scope: Deactivated successfully. 
Feb 13 19:50:25.378523 systemd-logind[2111]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:50:25.382509 systemd-logind[2111]: Removed session 8. Feb 13 19:50:27.615939 containerd[2141]: time="2025-02-13T19:50:27.611987241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:27.615939 containerd[2141]: time="2025-02-13T19:50:27.612325593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:27.615939 containerd[2141]: time="2025-02-13T19:50:27.612436473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:27.615939 containerd[2141]: time="2025-02-13T19:50:27.613152897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:27.713624 containerd[2141]: time="2025-02-13T19:50:27.709011262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:27.715005 containerd[2141]: time="2025-02-13T19:50:27.713215138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:27.715005 containerd[2141]: time="2025-02-13T19:50:27.713281978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:27.715005 containerd[2141]: time="2025-02-13T19:50:27.713505334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:27.779619 containerd[2141]: time="2025-02-13T19:50:27.778959142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sxhsj,Uid:0d5c9f79-39d3-4126-9c5d-4b7c7f4d1db8,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd1ce9e5bf8d33267c7c9b9e4a055496159d99a3e1d48417a7335d4a71349a3d\"" Feb 13 19:50:27.796863 containerd[2141]: time="2025-02-13T19:50:27.796570654Z" level=info msg="CreateContainer within sandbox \"cd1ce9e5bf8d33267c7c9b9e4a055496159d99a3e1d48417a7335d4a71349a3d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:50:27.872486 containerd[2141]: time="2025-02-13T19:50:27.872313431Z" level=info msg="CreateContainer within sandbox \"cd1ce9e5bf8d33267c7c9b9e4a055496159d99a3e1d48417a7335d4a71349a3d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cfffb85606b5904b465c094a754a50201df41151d8f1c892dcfb18c6c3de7564\"" Feb 13 19:50:27.880271 containerd[2141]: time="2025-02-13T19:50:27.879832451Z" level=info msg="StartContainer for \"cfffb85606b5904b465c094a754a50201df41151d8f1c892dcfb18c6c3de7564\"" Feb 13 19:50:28.008358 containerd[2141]: time="2025-02-13T19:50:28.008229871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tvg59,Uid:3f456e4c-2bc6-4f21-8c97-0b034f913878,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc278b658a3ca2b7c26e76181eec44abf5117147ac89f679ddbe664b9015a496\"" Feb 13 19:50:28.026613 containerd[2141]: time="2025-02-13T19:50:28.026548472Z" level=info msg="CreateContainer within sandbox \"fc278b658a3ca2b7c26e76181eec44abf5117147ac89f679ddbe664b9015a496\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:50:28.076919 containerd[2141]: time="2025-02-13T19:50:28.076807892Z" level=info msg="CreateContainer within sandbox \"fc278b658a3ca2b7c26e76181eec44abf5117147ac89f679ddbe664b9015a496\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container 
id \"26a24bb65ebedeae4827bc4b4abb7e3d842a163c44fc41f24693ede8c55d388d\"" Feb 13 19:50:28.079113 containerd[2141]: time="2025-02-13T19:50:28.077916488Z" level=info msg="StartContainer for \"26a24bb65ebedeae4827bc4b4abb7e3d842a163c44fc41f24693ede8c55d388d\"" Feb 13 19:50:28.116197 containerd[2141]: time="2025-02-13T19:50:28.116114012Z" level=info msg="StartContainer for \"cfffb85606b5904b465c094a754a50201df41151d8f1c892dcfb18c6c3de7564\" returns successfully" Feb 13 19:50:28.239690 containerd[2141]: time="2025-02-13T19:50:28.238961817Z" level=info msg="StartContainer for \"26a24bb65ebedeae4827bc4b4abb7e3d842a163c44fc41f24693ede8c55d388d\" returns successfully" Feb 13 19:50:29.212708 kubelet[3461]: I0213 19:50:29.212584 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sxhsj" podStartSLOduration=34.212558865 podStartE2EDuration="34.212558865s" podCreationTimestamp="2025-02-13 19:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:28.250959777 +0000 UTC m=+46.683384461" watchObservedRunningTime="2025-02-13 19:50:29.212558865 +0000 UTC m=+47.644983465" Feb 13 19:50:29.219122 kubelet[3461]: I0213 19:50:29.216690 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tvg59" podStartSLOduration=34.216182973 podStartE2EDuration="34.216182973s" podCreationTimestamp="2025-02-13 19:49:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:29.206829549 +0000 UTC m=+47.639254161" watchObservedRunningTime="2025-02-13 19:50:29.216182973 +0000 UTC m=+47.648607633" Feb 13 19:50:30.385704 systemd[1]: Started sshd@8-172.31.20.210:22-139.178.89.65:40490.service - OpenSSH per-connection server daemon (139.178.89.65:40490). 
Feb 13 19:50:30.568224 sshd[4927]: Accepted publickey for core from 139.178.89.65 port 40490 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:30.571984 sshd[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:30.584699 systemd-logind[2111]: New session 9 of user core. Feb 13 19:50:30.594706 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:50:30.843101 sshd[4927]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:30.849521 systemd[1]: sshd@8-172.31.20.210:22-139.178.89.65:40490.service: Deactivated successfully. Feb 13 19:50:30.858537 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:50:30.860018 systemd-logind[2111]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:50:30.861809 systemd-logind[2111]: Removed session 9. Feb 13 19:50:35.875652 systemd[1]: Started sshd@9-172.31.20.210:22-139.178.89.65:44304.service - OpenSSH per-connection server daemon (139.178.89.65:44304). Feb 13 19:50:36.058284 sshd[4942]: Accepted publickey for core from 139.178.89.65 port 44304 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:36.061138 sshd[4942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:36.072637 systemd-logind[2111]: New session 10 of user core. Feb 13 19:50:36.078438 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:50:36.342193 sshd[4942]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:36.348608 systemd-logind[2111]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:50:36.351823 systemd[1]: sshd@9-172.31.20.210:22-139.178.89.65:44304.service: Deactivated successfully. Feb 13 19:50:36.359881 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:50:36.363167 systemd-logind[2111]: Removed session 10. 
Feb 13 19:50:41.369581 systemd[1]: Started sshd@10-172.31.20.210:22-139.178.89.65:44316.service - OpenSSH per-connection server daemon (139.178.89.65:44316). Feb 13 19:50:41.549747 sshd[4957]: Accepted publickey for core from 139.178.89.65 port 44316 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:41.553157 sshd[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:41.561144 systemd-logind[2111]: New session 11 of user core. Feb 13 19:50:41.566625 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:50:41.812762 sshd[4957]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:41.820382 systemd-logind[2111]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:50:41.825462 systemd[1]: sshd@10-172.31.20.210:22-139.178.89.65:44316.service: Deactivated successfully. Feb 13 19:50:41.833458 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:50:41.835808 systemd-logind[2111]: Removed session 11. Feb 13 19:50:46.841686 systemd[1]: Started sshd@11-172.31.20.210:22-139.178.89.65:50708.service - OpenSSH per-connection server daemon (139.178.89.65:50708). Feb 13 19:50:47.016655 sshd[4974]: Accepted publickey for core from 139.178.89.65 port 50708 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:47.019339 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:47.027214 systemd-logind[2111]: New session 12 of user core. Feb 13 19:50:47.034977 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:50:47.271783 sshd[4974]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:47.278760 systemd[1]: sshd@11-172.31.20.210:22-139.178.89.65:50708.service: Deactivated successfully. Feb 13 19:50:47.286602 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:50:47.286876 systemd-logind[2111]: Session 12 logged out. Waiting for processes to exit. 
Feb 13 19:50:47.291188 systemd-logind[2111]: Removed session 12. Feb 13 19:50:47.301576 systemd[1]: Started sshd@12-172.31.20.210:22-139.178.89.65:50722.service - OpenSSH per-connection server daemon (139.178.89.65:50722). Feb 13 19:50:47.475705 sshd[4989]: Accepted publickey for core from 139.178.89.65 port 50722 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:47.479034 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:47.487577 systemd-logind[2111]: New session 13 of user core. Feb 13 19:50:47.491726 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:50:47.824134 sshd[4989]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:47.845531 systemd[1]: sshd@12-172.31.20.210:22-139.178.89.65:50722.service: Deactivated successfully. Feb 13 19:50:47.858809 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:50:47.869047 systemd-logind[2111]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:50:47.877792 systemd[1]: Started sshd@13-172.31.20.210:22-139.178.89.65:50728.service - OpenSSH per-connection server daemon (139.178.89.65:50728). Feb 13 19:50:47.880165 systemd-logind[2111]: Removed session 13. Feb 13 19:50:48.051408 sshd[5001]: Accepted publickey for core from 139.178.89.65 port 50728 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:48.054231 sshd[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:48.063359 systemd-logind[2111]: New session 14 of user core. Feb 13 19:50:48.076198 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:50:48.326403 sshd[5001]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:48.333547 systemd-logind[2111]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:50:48.335179 systemd[1]: sshd@13-172.31.20.210:22-139.178.89.65:50728.service: Deactivated successfully. 
Feb 13 19:50:48.344125 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:50:48.346956 systemd-logind[2111]: Removed session 14. Feb 13 19:50:53.359132 systemd[1]: Started sshd@14-172.31.20.210:22-139.178.89.65:50744.service - OpenSSH per-connection server daemon (139.178.89.65:50744). Feb 13 19:50:53.542964 sshd[5016]: Accepted publickey for core from 139.178.89.65 port 50744 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:53.545862 sshd[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:53.556520 systemd-logind[2111]: New session 15 of user core. Feb 13 19:50:53.562799 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:50:53.815435 sshd[5016]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:53.823342 systemd[1]: sshd@14-172.31.20.210:22-139.178.89.65:50744.service: Deactivated successfully. Feb 13 19:50:53.833653 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:50:53.840524 systemd-logind[2111]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:50:53.843998 systemd-logind[2111]: Removed session 15. Feb 13 19:50:58.848674 systemd[1]: Started sshd@15-172.31.20.210:22-139.178.89.65:44762.service - OpenSSH per-connection server daemon (139.178.89.65:44762). Feb 13 19:50:59.029073 sshd[5034]: Accepted publickey for core from 139.178.89.65 port 44762 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:59.034656 sshd[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:59.044757 systemd-logind[2111]: New session 16 of user core. Feb 13 19:50:59.054059 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:50:59.304998 sshd[5034]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:59.312982 systemd[1]: sshd@15-172.31.20.210:22-139.178.89.65:44762.service: Deactivated successfully. 
Feb 13 19:50:59.322143 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:50:59.325469 systemd-logind[2111]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:50:59.327291 systemd-logind[2111]: Removed session 16. Feb 13 19:51:04.338169 systemd[1]: Started sshd@16-172.31.20.210:22-139.178.89.65:44776.service - OpenSSH per-connection server daemon (139.178.89.65:44776). Feb 13 19:51:04.523118 sshd[5048]: Accepted publickey for core from 139.178.89.65 port 44776 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:04.525842 sshd[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:04.536694 systemd-logind[2111]: New session 17 of user core. Feb 13 19:51:04.542717 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:51:04.798071 sshd[5048]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:04.804430 systemd[1]: sshd@16-172.31.20.210:22-139.178.89.65:44776.service: Deactivated successfully. Feb 13 19:51:04.813947 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:51:04.817166 systemd-logind[2111]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:51:04.831554 systemd[1]: Started sshd@17-172.31.20.210:22-139.178.89.65:32824.service - OpenSSH per-connection server daemon (139.178.89.65:32824). Feb 13 19:51:04.834202 systemd-logind[2111]: Removed session 17. Feb 13 19:51:04.995874 sshd[5062]: Accepted publickey for core from 139.178.89.65 port 32824 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:04.998592 sshd[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:05.007685 systemd-logind[2111]: New session 18 of user core. Feb 13 19:51:05.015770 systemd[1]: Started session-18.scope - Session 18 of User core. 
Feb 13 19:51:05.315592 sshd[5062]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:05.324632 systemd[1]: sshd@17-172.31.20.210:22-139.178.89.65:32824.service: Deactivated successfully. Feb 13 19:51:05.334991 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:51:05.337260 systemd-logind[2111]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:51:05.347606 systemd[1]: Started sshd@18-172.31.20.210:22-139.178.89.65:32840.service - OpenSSH per-connection server daemon (139.178.89.65:32840). Feb 13 19:51:05.349718 systemd-logind[2111]: Removed session 18. Feb 13 19:51:05.523883 sshd[5073]: Accepted publickey for core from 139.178.89.65 port 32840 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:05.527579 sshd[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:05.535731 systemd-logind[2111]: New session 19 of user core. Feb 13 19:51:05.541758 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:51:08.135373 sshd[5073]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:08.147819 systemd[1]: sshd@18-172.31.20.210:22-139.178.89.65:32840.service: Deactivated successfully. Feb 13 19:51:08.161816 systemd-logind[2111]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:51:08.163868 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:51:08.176598 systemd[1]: Started sshd@19-172.31.20.210:22-139.178.89.65:32854.service - OpenSSH per-connection server daemon (139.178.89.65:32854). Feb 13 19:51:08.177580 systemd-logind[2111]: Removed session 19. Feb 13 19:51:08.355734 sshd[5093]: Accepted publickey for core from 139.178.89.65 port 32854 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:08.358416 sshd[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:08.366461 systemd-logind[2111]: New session 20 of user core. 
Feb 13 19:51:08.374740 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:51:08.902070 sshd[5093]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:08.909828 systemd[1]: sshd@19-172.31.20.210:22-139.178.89.65:32854.service: Deactivated successfully. Feb 13 19:51:08.918059 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:51:08.919939 systemd-logind[2111]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:51:08.922637 systemd-logind[2111]: Removed session 20. Feb 13 19:51:08.934632 systemd[1]: Started sshd@20-172.31.20.210:22-139.178.89.65:32862.service - OpenSSH per-connection server daemon (139.178.89.65:32862). Feb 13 19:51:09.122208 sshd[5105]: Accepted publickey for core from 139.178.89.65 port 32862 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:09.125012 sshd[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:09.133652 systemd-logind[2111]: New session 21 of user core. Feb 13 19:51:09.143815 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:51:09.389341 sshd[5105]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:09.397278 systemd-logind[2111]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:51:09.397654 systemd[1]: sshd@20-172.31.20.210:22-139.178.89.65:32862.service: Deactivated successfully. Feb 13 19:51:09.404436 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:51:09.406847 systemd-logind[2111]: Removed session 21. 
Feb 13 19:51:13.597289 update_engine[2116]: I20250213 19:51:13.597199 2116 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 19:51:13.597289 update_engine[2116]: I20250213 19:51:13.597283 2116 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 19:51:13.598004 update_engine[2116]: I20250213 19:51:13.597572 2116 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 19:51:13.598674 update_engine[2116]: I20250213 19:51:13.598593 2116 omaha_request_params.cc:62] Current group set to lts Feb 13 19:51:13.598994 update_engine[2116]: I20250213 19:51:13.598779 2116 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 19:51:13.598994 update_engine[2116]: I20250213 19:51:13.598834 2116 update_attempter.cc:643] Scheduling an action processor start. Feb 13 19:51:13.598994 update_engine[2116]: I20250213 19:51:13.598875 2116 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 19:51:13.598994 update_engine[2116]: I20250213 19:51:13.598948 2116 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 19:51:13.599730 update_engine[2116]: I20250213 19:51:13.599438 2116 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 19:51:13.599730 update_engine[2116]: I20250213 19:51:13.599475 2116 omaha_request_action.cc:272] Request: Feb 13 19:51:13.599730 update_engine[2116]: Feb 13 19:51:13.599730 update_engine[2116]: Feb 13 19:51:13.599730 update_engine[2116]: Feb 13 19:51:13.599730 update_engine[2116]: Feb 13 19:51:13.599730 update_engine[2116]: Feb 13 19:51:13.599730 update_engine[2116]: Feb 13 19:51:13.599730 update_engine[2116]: Feb 13 19:51:13.599730 update_engine[2116]: Feb 13 19:51:13.599730 update_engine[2116]: I20250213 19:51:13.599493 2116 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:51:13.600283 locksmithd[2169]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 19:51:13.601754 update_engine[2116]: I20250213 19:51:13.601657 2116 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:51:13.602293 update_engine[2116]: I20250213 19:51:13.602212 2116 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:51:13.636391 update_engine[2116]: E20250213 19:51:13.636154 2116 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:51:13.636391 update_engine[2116]: I20250213 19:51:13.636332 2116 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 19:51:14.423602 systemd[1]: Started sshd@21-172.31.20.210:22-139.178.89.65:32866.service - OpenSSH per-connection server daemon (139.178.89.65:32866). Feb 13 19:51:14.611856 sshd[5119]: Accepted publickey for core from 139.178.89.65 port 32866 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:14.614903 sshd[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:14.623576 systemd-logind[2111]: New session 22 of user core. Feb 13 19:51:14.629709 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:51:14.904781 sshd[5119]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:14.911676 systemd-logind[2111]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:51:14.913679 systemd[1]: sshd@21-172.31.20.210:22-139.178.89.65:32866.service: Deactivated successfully. Feb 13 19:51:14.921753 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:51:14.923787 systemd-logind[2111]: Removed session 22. Feb 13 19:51:19.945729 systemd[1]: Started sshd@22-172.31.20.210:22-139.178.89.65:36248.service - OpenSSH per-connection server daemon (139.178.89.65:36248). 
Feb 13 19:51:20.124002 sshd[5136]: Accepted publickey for core from 139.178.89.65 port 36248 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:20.126661 sshd[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:20.135934 systemd-logind[2111]: New session 23 of user core. Feb 13 19:51:20.142624 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:51:20.374434 sshd[5136]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:20.382913 systemd[1]: sshd@22-172.31.20.210:22-139.178.89.65:36248.service: Deactivated successfully. Feb 13 19:51:20.389153 systemd-logind[2111]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:51:20.390138 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:51:20.393203 systemd-logind[2111]: Removed session 23. Feb 13 19:51:23.593548 update_engine[2116]: I20250213 19:51:23.593446 2116 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:51:23.594183 update_engine[2116]: I20250213 19:51:23.593806 2116 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:51:23.594289 update_engine[2116]: I20250213 19:51:23.594161 2116 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:51:23.594628 update_engine[2116]: E20250213 19:51:23.594570 2116 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:51:23.594707 update_engine[2116]: I20250213 19:51:23.594659 2116 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 19:51:25.405583 systemd[1]: Started sshd@23-172.31.20.210:22-139.178.89.65:36476.service - OpenSSH per-connection server daemon (139.178.89.65:36476). 
Feb 13 19:51:25.589761 sshd[5149]: Accepted publickey for core from 139.178.89.65 port 36476 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:25.592450 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:25.601472 systemd-logind[2111]: New session 24 of user core. Feb 13 19:51:25.607604 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:51:25.876737 sshd[5149]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:25.886830 systemd[1]: sshd@23-172.31.20.210:22-139.178.89.65:36476.service: Deactivated successfully. Feb 13 19:51:25.895188 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:51:25.897528 systemd-logind[2111]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:51:25.900174 systemd-logind[2111]: Removed session 24. Feb 13 19:51:30.908667 systemd[1]: Started sshd@24-172.31.20.210:22-139.178.89.65:36478.service - OpenSSH per-connection server daemon (139.178.89.65:36478). Feb 13 19:51:31.088860 sshd[5165]: Accepted publickey for core from 139.178.89.65 port 36478 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:31.091670 sshd[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:31.100427 systemd-logind[2111]: New session 25 of user core. Feb 13 19:51:31.105665 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:51:31.345392 sshd[5165]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:31.353470 systemd[1]: sshd@24-172.31.20.210:22-139.178.89.65:36478.service: Deactivated successfully. Feb 13 19:51:31.360454 systemd-logind[2111]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:51:31.361417 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:51:31.364812 systemd-logind[2111]: Removed session 25. 
Feb 13 19:51:31.376615 systemd[1]: Started sshd@25-172.31.20.210:22-139.178.89.65:36492.service - OpenSSH per-connection server daemon (139.178.89.65:36492). Feb 13 19:51:31.557296 sshd[5179]: Accepted publickey for core from 139.178.89.65 port 36492 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:31.559909 sshd[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:31.568131 systemd-logind[2111]: New session 26 of user core. Feb 13 19:51:31.575338 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:51:33.595209 update_engine[2116]: I20250213 19:51:33.595108 2116 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:51:33.595830 update_engine[2116]: I20250213 19:51:33.595470 2116 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:51:33.595830 update_engine[2116]: I20250213 19:51:33.595755 2116 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:51:33.596457 update_engine[2116]: E20250213 19:51:33.596369 2116 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:51:33.596538 update_engine[2116]: I20250213 19:51:33.596468 2116 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 19:51:34.056424 containerd[2141]: time="2025-02-13T19:51:34.056333844Z" level=info msg="StopContainer for \"15fafb3abbcd896b1c47dae5a5d48e6d2700c99c02c88339c208e15d426a898e\" with timeout 30 (s)" Feb 13 19:51:34.062488 containerd[2141]: time="2025-02-13T19:51:34.061573800Z" level=info msg="Stop container \"15fafb3abbcd896b1c47dae5a5d48e6d2700c99c02c88339c208e15d426a898e\" with signal terminated" Feb 13 19:51:34.141519 containerd[2141]: time="2025-02-13T19:51:34.140827404Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not 
initialized: failed to load cni config" Feb 13 19:51:34.157014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15fafb3abbcd896b1c47dae5a5d48e6d2700c99c02c88339c208e15d426a898e-rootfs.mount: Deactivated successfully. Feb 13 19:51:34.166522 containerd[2141]: time="2025-02-13T19:51:34.166356324Z" level=info msg="StopContainer for \"bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c\" with timeout 2 (s)" Feb 13 19:51:34.168465 containerd[2141]: time="2025-02-13T19:51:34.168208368Z" level=info msg="Stop container \"bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c\" with signal terminated" Feb 13 19:51:34.184405 containerd[2141]: time="2025-02-13T19:51:34.184250616Z" level=info msg="shim disconnected" id=15fafb3abbcd896b1c47dae5a5d48e6d2700c99c02c88339c208e15d426a898e namespace=k8s.io Feb 13 19:51:34.184405 containerd[2141]: time="2025-02-13T19:51:34.184341036Z" level=warning msg="cleaning up after shim disconnected" id=15fafb3abbcd896b1c47dae5a5d48e6d2700c99c02c88339c208e15d426a898e namespace=k8s.io Feb 13 19:51:34.184405 containerd[2141]: time="2025-02-13T19:51:34.184374132Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:34.196495 systemd-networkd[1683]: lxc_health: Link DOWN Feb 13 19:51:34.196508 systemd-networkd[1683]: lxc_health: Lost carrier Feb 13 19:51:34.244833 containerd[2141]: time="2025-02-13T19:51:34.244498644Z" level=info msg="StopContainer for \"15fafb3abbcd896b1c47dae5a5d48e6d2700c99c02c88339c208e15d426a898e\" returns successfully" Feb 13 19:51:34.246012 containerd[2141]: time="2025-02-13T19:51:34.245652684Z" level=info msg="StopPodSandbox for \"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\"" Feb 13 19:51:34.246012 containerd[2141]: time="2025-02-13T19:51:34.245755080Z" level=info msg="Container to stop \"15fafb3abbcd896b1c47dae5a5d48e6d2700c99c02c88339c208e15d426a898e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:34.252004 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e-shm.mount: Deactivated successfully. Feb 13 19:51:34.274788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c-rootfs.mount: Deactivated successfully. Feb 13 19:51:34.285805 containerd[2141]: time="2025-02-13T19:51:34.285301465Z" level=info msg="shim disconnected" id=bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c namespace=k8s.io Feb 13 19:51:34.285805 containerd[2141]: time="2025-02-13T19:51:34.285394333Z" level=warning msg="cleaning up after shim disconnected" id=bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c namespace=k8s.io Feb 13 19:51:34.285805 containerd[2141]: time="2025-02-13T19:51:34.285419101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:34.330165 containerd[2141]: time="2025-02-13T19:51:34.329831161Z" level=info msg="StopContainer for \"bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c\" returns successfully" Feb 13 19:51:34.336198 containerd[2141]: time="2025-02-13T19:51:34.333417085Z" level=info msg="StopPodSandbox for \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\"" Feb 13 19:51:34.336198 containerd[2141]: time="2025-02-13T19:51:34.333865501Z" level=info msg="Container to stop \"f9a0640393cc0983c37d2d3f218021c3f0e452410d87bb7ce9062797509338e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:34.336198 containerd[2141]: time="2025-02-13T19:51:34.334222741Z" level=info msg="Container to stop \"44f0fa52a946f8f25e5f92b038a26948b5df5f49d45772baea86f1a0a04560e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:34.336482 containerd[2141]: time="2025-02-13T19:51:34.336197869Z" level=info msg="Container to stop \"5cf62719f252078ab6b5da76396588d2017d298b4e58b8d6335a634b77dff819\" must be 
in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:34.336482 containerd[2141]: time="2025-02-13T19:51:34.336278041Z" level=info msg="Container to stop \"bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:34.336482 containerd[2141]: time="2025-02-13T19:51:34.336332845Z" level=info msg="Container to stop \"c8079bbf93ec9012e0bfee07ab594fad5fe81f32964a04e01da29474402470c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:34.344880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e-rootfs.mount: Deactivated successfully. Feb 13 19:51:34.350996 containerd[2141]: time="2025-02-13T19:51:34.350903149Z" level=info msg="shim disconnected" id=34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e namespace=k8s.io Feb 13 19:51:34.350996 containerd[2141]: time="2025-02-13T19:51:34.350994157Z" level=warning msg="cleaning up after shim disconnected" id=34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e namespace=k8s.io Feb 13 19:51:34.351455 containerd[2141]: time="2025-02-13T19:51:34.351018037Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:34.405076 containerd[2141]: time="2025-02-13T19:51:34.404862445Z" level=info msg="TearDown network for sandbox \"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\" successfully" Feb 13 19:51:34.405076 containerd[2141]: time="2025-02-13T19:51:34.404913337Z" level=info msg="StopPodSandbox for \"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\" returns successfully" Feb 13 19:51:34.446881 containerd[2141]: time="2025-02-13T19:51:34.445626649Z" level=info msg="shim disconnected" id=b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758 namespace=k8s.io Feb 13 19:51:34.446881 containerd[2141]: 
time="2025-02-13T19:51:34.446749153Z" level=warning msg="cleaning up after shim disconnected" id=b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758 namespace=k8s.io Feb 13 19:51:34.446881 containerd[2141]: time="2025-02-13T19:51:34.446780101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:34.470468 containerd[2141]: time="2025-02-13T19:51:34.470060234Z" level=info msg="TearDown network for sandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" successfully" Feb 13 19:51:34.470468 containerd[2141]: time="2025-02-13T19:51:34.470421158Z" level=info msg="StopPodSandbox for \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" returns successfully" Feb 13 19:51:34.516640 kubelet[3461]: I0213 19:51:34.516116 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cdbe95b8-cf2c-41f5-a31c-06225bc35243-hubble-tls\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 19:51:34.516640 kubelet[3461]: I0213 19:51:34.516194 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c00f9b0-6593-4ac6-bc4e-60d7933bd48a-cilium-config-path\") pod \"7c00f9b0-6593-4ac6-bc4e-60d7933bd48a\" (UID: \"7c00f9b0-6593-4ac6-bc4e-60d7933bd48a\") " Feb 13 19:51:34.516640 kubelet[3461]: I0213 19:51:34.516237 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-host-proc-sys-net\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 19:51:34.516640 kubelet[3461]: I0213 19:51:34.516280 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hn8f\" (UniqueName: 
\"kubernetes.io/projected/cdbe95b8-cf2c-41f5-a31c-06225bc35243-kube-api-access-5hn8f\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 19:51:34.516640 kubelet[3461]: I0213 19:51:34.516318 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-bpf-maps\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 19:51:34.516640 kubelet[3461]: I0213 19:51:34.516352 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cilium-cgroup\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 19:51:34.517630 kubelet[3461]: I0213 19:51:34.516384 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cni-path\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 19:51:34.517630 kubelet[3461]: I0213 19:51:34.516420 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-host-proc-sys-kernel\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 19:51:34.517630 kubelet[3461]: I0213 19:51:34.516452 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cilium-run\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 19:51:34.518882 kubelet[3461]: I0213 19:51:34.516574 3461 
operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:51:34.522117 kubelet[3461]: I0213 19:51:34.517890 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:51:34.528291 kubelet[3461]: I0213 19:51:34.517924 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cni-path" (OuterVolumeSpecName: "cni-path") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:51:34.528291 kubelet[3461]: I0213 19:51:34.517951 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:51:34.528291 kubelet[3461]: I0213 19:51:34.517977 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:51:34.528291 kubelet[3461]: I0213 19:51:34.518388 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:51:34.528291 kubelet[3461]: I0213 19:51:34.516685 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cdbe95b8-cf2c-41f5-a31c-06225bc35243-clustermesh-secrets\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 19:51:34.528701 kubelet[3461]: I0213 19:51:34.522535 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-lib-modules\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 19:51:34.528701 kubelet[3461]: I0213 19:51:34.522577 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-xtables-lock\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 
19:51:34.528701 kubelet[3461]: I0213 19:51:34.522624 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cilium-config-path\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 19:51:34.528701 kubelet[3461]: I0213 19:51:34.522665 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqv2k\" (UniqueName: \"kubernetes.io/projected/7c00f9b0-6593-4ac6-bc4e-60d7933bd48a-kube-api-access-qqv2k\") pod \"7c00f9b0-6593-4ac6-bc4e-60d7933bd48a\" (UID: \"7c00f9b0-6593-4ac6-bc4e-60d7933bd48a\") " Feb 13 19:51:34.528701 kubelet[3461]: I0213 19:51:34.522703 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-hostproc\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 19:51:34.528701 kubelet[3461]: I0213 19:51:34.522752 3461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-etc-cni-netd\") pod \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\" (UID: \"cdbe95b8-cf2c-41f5-a31c-06225bc35243\") " Feb 13 19:51:34.529055 kubelet[3461]: I0213 19:51:34.522835 3461 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cilium-run\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.529055 kubelet[3461]: I0213 19:51:34.522860 3461 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-host-proc-sys-kernel\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.529055 
kubelet[3461]: I0213 19:51:34.522887 3461 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-host-proc-sys-net\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.529055 kubelet[3461]: I0213 19:51:34.522908 3461 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-bpf-maps\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.529055 kubelet[3461]: I0213 19:51:34.522928 3461 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cilium-cgroup\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.529055 kubelet[3461]: I0213 19:51:34.522948 3461 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cni-path\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.529055 kubelet[3461]: I0213 19:51:34.523008 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:51:34.529561 kubelet[3461]: I0213 19:51:34.523118 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:51:34.529561 kubelet[3461]: I0213 19:51:34.523173 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:51:34.529561 kubelet[3461]: I0213 19:51:34.526244 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-hostproc" (OuterVolumeSpecName: "hostproc") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:51:34.545854 kubelet[3461]: I0213 19:51:34.545787 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:51:34.546479 kubelet[3461]: I0213 19:51:34.546371 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c00f9b0-6593-4ac6-bc4e-60d7933bd48a-kube-api-access-qqv2k" (OuterVolumeSpecName: "kube-api-access-qqv2k") pod "7c00f9b0-6593-4ac6-bc4e-60d7933bd48a" (UID: "7c00f9b0-6593-4ac6-bc4e-60d7933bd48a"). InnerVolumeSpecName "kube-api-access-qqv2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:51:34.547471 kubelet[3461]: I0213 19:51:34.547413 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbe95b8-cf2c-41f5-a31c-06225bc35243-kube-api-access-5hn8f" (OuterVolumeSpecName: "kube-api-access-5hn8f") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "kube-api-access-5hn8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:51:34.548535 kubelet[3461]: I0213 19:51:34.548481 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdbe95b8-cf2c-41f5-a31c-06225bc35243-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:51:34.548768 kubelet[3461]: I0213 19:51:34.548624 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbe95b8-cf2c-41f5-a31c-06225bc35243-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cdbe95b8-cf2c-41f5-a31c-06225bc35243" (UID: "cdbe95b8-cf2c-41f5-a31c-06225bc35243"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:51:34.550677 kubelet[3461]: I0213 19:51:34.550563 3461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c00f9b0-6593-4ac6-bc4e-60d7933bd48a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7c00f9b0-6593-4ac6-bc4e-60d7933bd48a" (UID: "7c00f9b0-6593-4ac6-bc4e-60d7933bd48a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:51:34.624131 kubelet[3461]: I0213 19:51:34.623904 3461 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-lib-modules\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.624131 kubelet[3461]: I0213 19:51:34.623968 3461 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cdbe95b8-cf2c-41f5-a31c-06225bc35243-clustermesh-secrets\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.624131 kubelet[3461]: I0213 19:51:34.624002 3461 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-xtables-lock\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.624131 kubelet[3461]: I0213 19:51:34.624023 3461 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdbe95b8-cf2c-41f5-a31c-06225bc35243-cilium-config-path\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.624131 kubelet[3461]: I0213 19:51:34.624045 3461 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qqv2k\" (UniqueName: \"kubernetes.io/projected/7c00f9b0-6593-4ac6-bc4e-60d7933bd48a-kube-api-access-qqv2k\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.624131 kubelet[3461]: I0213 19:51:34.624065 3461 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-hostproc\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.624131 kubelet[3461]: I0213 19:51:34.624129 3461 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdbe95b8-cf2c-41f5-a31c-06225bc35243-etc-cni-netd\") on node \"ip-172-31-20-210\" DevicePath 
\"\"" Feb 13 19:51:34.624634 kubelet[3461]: I0213 19:51:34.624151 3461 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cdbe95b8-cf2c-41f5-a31c-06225bc35243-hubble-tls\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.624634 kubelet[3461]: I0213 19:51:34.624171 3461 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c00f9b0-6593-4ac6-bc4e-60d7933bd48a-cilium-config-path\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:34.624634 kubelet[3461]: I0213 19:51:34.624192 3461 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-5hn8f\" (UniqueName: \"kubernetes.io/projected/cdbe95b8-cf2c-41f5-a31c-06225bc35243-kube-api-access-5hn8f\") on node \"ip-172-31-20-210\" DevicePath \"\"" Feb 13 19:51:35.095744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758-rootfs.mount: Deactivated successfully. Feb 13 19:51:35.096026 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758-shm.mount: Deactivated successfully. Feb 13 19:51:35.096274 systemd[1]: var-lib-kubelet-pods-cdbe95b8\x2dcf2c\x2d41f5\x2da31c\x2d06225bc35243-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:51:35.096487 systemd[1]: var-lib-kubelet-pods-cdbe95b8\x2dcf2c\x2d41f5\x2da31c\x2d06225bc35243-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:51:35.096710 systemd[1]: var-lib-kubelet-pods-cdbe95b8\x2dcf2c\x2d41f5\x2da31c\x2d06225bc35243-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5hn8f.mount: Deactivated successfully. 
Feb 13 19:51:35.096917 systemd[1]: var-lib-kubelet-pods-7c00f9b0\x2d6593\x2d4ac6\x2dbc4e\x2d60d7933bd48a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqqv2k.mount: Deactivated successfully. Feb 13 19:51:35.394498 kubelet[3461]: I0213 19:51:35.394439 3461 scope.go:117] "RemoveContainer" containerID="15fafb3abbcd896b1c47dae5a5d48e6d2700c99c02c88339c208e15d426a898e" Feb 13 19:51:35.401131 containerd[2141]: time="2025-02-13T19:51:35.401041802Z" level=info msg="RemoveContainer for \"15fafb3abbcd896b1c47dae5a5d48e6d2700c99c02c88339c208e15d426a898e\"" Feb 13 19:51:35.415335 containerd[2141]: time="2025-02-13T19:51:35.415014110Z" level=info msg="RemoveContainer for \"15fafb3abbcd896b1c47dae5a5d48e6d2700c99c02c88339c208e15d426a898e\" returns successfully" Feb 13 19:51:35.420159 kubelet[3461]: I0213 19:51:35.418415 3461 scope.go:117] "RemoveContainer" containerID="bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c" Feb 13 19:51:35.423640 containerd[2141]: time="2025-02-13T19:51:35.423501722Z" level=info msg="RemoveContainer for \"bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c\"" Feb 13 19:51:35.431774 containerd[2141]: time="2025-02-13T19:51:35.430824134Z" level=info msg="RemoveContainer for \"bc11aa12cca352c47cea69609def6ab2a27604af3a16bda83d9d6216e196555c\" returns successfully" Feb 13 19:51:35.433021 kubelet[3461]: I0213 19:51:35.432723 3461 scope.go:117] "RemoveContainer" containerID="5cf62719f252078ab6b5da76396588d2017d298b4e58b8d6335a634b77dff819" Feb 13 19:51:35.435417 containerd[2141]: time="2025-02-13T19:51:35.435233366Z" level=info msg="RemoveContainer for \"5cf62719f252078ab6b5da76396588d2017d298b4e58b8d6335a634b77dff819\"" Feb 13 19:51:35.441728 containerd[2141]: time="2025-02-13T19:51:35.441666158Z" level=info msg="RemoveContainer for \"5cf62719f252078ab6b5da76396588d2017d298b4e58b8d6335a634b77dff819\" returns successfully" Feb 13 19:51:35.442522 kubelet[3461]: I0213 19:51:35.442288 3461 scope.go:117] 
"RemoveContainer" containerID="44f0fa52a946f8f25e5f92b038a26948b5df5f49d45772baea86f1a0a04560e6" Feb 13 19:51:35.445774 containerd[2141]: time="2025-02-13T19:51:35.445714466Z" level=info msg="RemoveContainer for \"44f0fa52a946f8f25e5f92b038a26948b5df5f49d45772baea86f1a0a04560e6\"" Feb 13 19:51:35.453984 containerd[2141]: time="2025-02-13T19:51:35.453905066Z" level=info msg="RemoveContainer for \"44f0fa52a946f8f25e5f92b038a26948b5df5f49d45772baea86f1a0a04560e6\" returns successfully" Feb 13 19:51:35.454492 kubelet[3461]: I0213 19:51:35.454352 3461 scope.go:117] "RemoveContainer" containerID="f9a0640393cc0983c37d2d3f218021c3f0e452410d87bb7ce9062797509338e8" Feb 13 19:51:35.456565 containerd[2141]: time="2025-02-13T19:51:35.456435614Z" level=info msg="RemoveContainer for \"f9a0640393cc0983c37d2d3f218021c3f0e452410d87bb7ce9062797509338e8\"" Feb 13 19:51:35.464514 containerd[2141]: time="2025-02-13T19:51:35.464388267Z" level=info msg="RemoveContainer for \"f9a0640393cc0983c37d2d3f218021c3f0e452410d87bb7ce9062797509338e8\" returns successfully" Feb 13 19:51:35.464768 kubelet[3461]: I0213 19:51:35.464715 3461 scope.go:117] "RemoveContainer" containerID="c8079bbf93ec9012e0bfee07ab594fad5fe81f32964a04e01da29474402470c7" Feb 13 19:51:35.466920 containerd[2141]: time="2025-02-13T19:51:35.466869579Z" level=info msg="RemoveContainer for \"c8079bbf93ec9012e0bfee07ab594fad5fe81f32964a04e01da29474402470c7\"" Feb 13 19:51:35.472711 containerd[2141]: time="2025-02-13T19:51:35.472643163Z" level=info msg="RemoveContainer for \"c8079bbf93ec9012e0bfee07ab594fad5fe81f32964a04e01da29474402470c7\" returns successfully" Feb 13 19:51:35.820880 kubelet[3461]: I0213 19:51:35.820751 3461 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c00f9b0-6593-4ac6-bc4e-60d7933bd48a" path="/var/lib/kubelet/pods/7c00f9b0-6593-4ac6-bc4e-60d7933bd48a/volumes" Feb 13 19:51:35.822361 kubelet[3461]: I0213 19:51:35.822311 3461 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cdbe95b8-cf2c-41f5-a31c-06225bc35243" path="/var/lib/kubelet/pods/cdbe95b8-cf2c-41f5-a31c-06225bc35243/volumes" Feb 13 19:51:35.990750 sshd[5179]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:35.999891 systemd[1]: sshd@25-172.31.20.210:22-139.178.89.65:36492.service: Deactivated successfully. Feb 13 19:51:36.007217 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:51:36.008891 systemd-logind[2111]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:51:36.015302 systemd-logind[2111]: Removed session 26. Feb 13 19:51:36.020634 systemd[1]: Started sshd@26-172.31.20.210:22-139.178.89.65:35882.service - OpenSSH per-connection server daemon (139.178.89.65:35882). Feb 13 19:51:36.193502 sshd[5349]: Accepted publickey for core from 139.178.89.65 port 35882 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:36.196194 sshd[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:36.203777 systemd-logind[2111]: New session 27 of user core. Feb 13 19:51:36.210895 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:51:36.765263 ntpd[2091]: Deleting interface #10 lxc_health, fe80::1452:2dff:feee:512a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Feb 13 19:51:36.765783 ntpd[2091]: 13 Feb 19:51:36 ntpd[2091]: Deleting interface #10 lxc_health, fe80::1452:2dff:feee:512a%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Feb 13 19:51:37.135070 kubelet[3461]: E0213 19:51:37.134903 3461 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:51:37.575114 sshd[5349]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:37.582786 systemd[1]: sshd@26-172.31.20.210:22-139.178.89.65:35882.service: Deactivated successfully. 
Feb 13 19:51:37.599461 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:51:37.606939 systemd-logind[2111]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:51:37.620632 systemd[1]: Started sshd@27-172.31.20.210:22-139.178.89.65:35892.service - OpenSSH per-connection server daemon (139.178.89.65:35892).
Feb 13 19:51:37.629402 systemd-logind[2111]: Removed session 27.
Feb 13 19:51:37.648146 kubelet[3461]: I0213 19:51:37.645535 3461 topology_manager.go:215] "Topology Admit Handler" podUID="45019692-bc30-4e7c-9051-283f538fa034" podNamespace="kube-system" podName="cilium-gsfc2"
Feb 13 19:51:37.648146 kubelet[3461]: E0213 19:51:37.646602 3461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c00f9b0-6593-4ac6-bc4e-60d7933bd48a" containerName="cilium-operator"
Feb 13 19:51:37.648146 kubelet[3461]: E0213 19:51:37.646638 3461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdbe95b8-cf2c-41f5-a31c-06225bc35243" containerName="apply-sysctl-overwrites"
Feb 13 19:51:37.648146 kubelet[3461]: E0213 19:51:37.646654 3461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdbe95b8-cf2c-41f5-a31c-06225bc35243" containerName="clean-cilium-state"
Feb 13 19:51:37.648146 kubelet[3461]: E0213 19:51:37.646669 3461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdbe95b8-cf2c-41f5-a31c-06225bc35243" containerName="cilium-agent"
Feb 13 19:51:37.648146 kubelet[3461]: E0213 19:51:37.646687 3461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdbe95b8-cf2c-41f5-a31c-06225bc35243" containerName="mount-cgroup"
Feb 13 19:51:37.648146 kubelet[3461]: E0213 19:51:37.646702 3461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdbe95b8-cf2c-41f5-a31c-06225bc35243" containerName="mount-bpf-fs"
Feb 13 19:51:37.648146 kubelet[3461]: I0213 19:51:37.646841 3461 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c00f9b0-6593-4ac6-bc4e-60d7933bd48a" containerName="cilium-operator"
Feb 13 19:51:37.648146 kubelet[3461]: I0213 19:51:37.646860 3461 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbe95b8-cf2c-41f5-a31c-06225bc35243" containerName="cilium-agent"
Feb 13 19:51:37.743649 kubelet[3461]: I0213 19:51:37.743578 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45019692-bc30-4e7c-9051-283f538fa034-bpf-maps\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.743833 kubelet[3461]: I0213 19:51:37.743655 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45019692-bc30-4e7c-9051-283f538fa034-cni-path\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.743833 kubelet[3461]: I0213 19:51:37.743704 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45019692-bc30-4e7c-9051-283f538fa034-host-proc-sys-kernel\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.743833 kubelet[3461]: I0213 19:51:37.743748 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45019692-bc30-4e7c-9051-283f538fa034-host-proc-sys-net\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.743833 kubelet[3461]: I0213 19:51:37.743790 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45019692-bc30-4e7c-9051-283f538fa034-etc-cni-netd\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.743833 kubelet[3461]: I0213 19:51:37.743824 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45019692-bc30-4e7c-9051-283f538fa034-cilium-config-path\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.744147 kubelet[3461]: I0213 19:51:37.743861 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45019692-bc30-4e7c-9051-283f538fa034-hubble-tls\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.744147 kubelet[3461]: I0213 19:51:37.743899 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45019692-bc30-4e7c-9051-283f538fa034-cilium-run\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.744147 kubelet[3461]: I0213 19:51:37.743933 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45019692-bc30-4e7c-9051-283f538fa034-lib-modules\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.744147 kubelet[3461]: I0213 19:51:37.743970 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45019692-bc30-4e7c-9051-283f538fa034-xtables-lock\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.744147 kubelet[3461]: I0213 19:51:37.744008 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45019692-bc30-4e7c-9051-283f538fa034-clustermesh-secrets\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.744147 kubelet[3461]: I0213 19:51:37.744045 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/45019692-bc30-4e7c-9051-283f538fa034-cilium-ipsec-secrets\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.744472 kubelet[3461]: I0213 19:51:37.744102 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45019692-bc30-4e7c-9051-283f538fa034-cilium-cgroup\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.744472 kubelet[3461]: I0213 19:51:37.744148 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45019692-bc30-4e7c-9051-283f538fa034-hostproc\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.744472 kubelet[3461]: I0213 19:51:37.744188 3461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk25r\" (UniqueName: \"kubernetes.io/projected/45019692-bc30-4e7c-9051-283f538fa034-kube-api-access-rk25r\") pod \"cilium-gsfc2\" (UID: \"45019692-bc30-4e7c-9051-283f538fa034\") " pod="kube-system/cilium-gsfc2"
Feb 13 19:51:37.880053 sshd[5361]: Accepted publickey for core from 139.178.89.65 port 35892 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:37.885258 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:37.928981 systemd-logind[2111]: New session 28 of user core.
Feb 13 19:51:37.933721 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:51:37.986581 containerd[2141]: time="2025-02-13T19:51:37.986490739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gsfc2,Uid:45019692-bc30-4e7c-9051-283f538fa034,Namespace:kube-system,Attempt:0,}"
Feb 13 19:51:38.038453 containerd[2141]: time="2025-02-13T19:51:38.037760919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:51:38.038453 containerd[2141]: time="2025-02-13T19:51:38.037894467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:51:38.039451 containerd[2141]: time="2025-02-13T19:51:38.037942011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:38.039451 containerd[2141]: time="2025-02-13T19:51:38.038216739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:38.073886 sshd[5361]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:38.099291 systemd[1]: sshd@27-172.31.20.210:22-139.178.89.65:35892.service: Deactivated successfully.
Feb 13 19:51:38.113234 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:51:38.128844 systemd-logind[2111]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:51:38.138002 systemd[1]: Started sshd@28-172.31.20.210:22-139.178.89.65:35896.service - OpenSSH per-connection server daemon (139.178.89.65:35896).
Feb 13 19:51:38.148529 systemd-logind[2111]: Removed session 28.
Feb 13 19:51:38.163753 containerd[2141]: time="2025-02-13T19:51:38.163298920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gsfc2,Uid:45019692-bc30-4e7c-9051-283f538fa034,Namespace:kube-system,Attempt:0,} returns sandbox id \"c627ace40073b8c71dda19701fe5667815f201c3918169b54244d7c150adb30c\""
Feb 13 19:51:38.176114 containerd[2141]: time="2025-02-13T19:51:38.174809656Z" level=info msg="CreateContainer within sandbox \"c627ace40073b8c71dda19701fe5667815f201c3918169b54244d7c150adb30c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:51:38.200054 containerd[2141]: time="2025-02-13T19:51:38.199892548Z" level=info msg="CreateContainer within sandbox \"c627ace40073b8c71dda19701fe5667815f201c3918169b54244d7c150adb30c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"247928dc3cc7092d37cd8bb37dbfb4fb2d5dca9ec90d38c5f995744783254be1\""
Feb 13 19:51:38.202535 containerd[2141]: time="2025-02-13T19:51:38.202459720Z" level=info msg="StartContainer for \"247928dc3cc7092d37cd8bb37dbfb4fb2d5dca9ec90d38c5f995744783254be1\""
Feb 13 19:51:38.329675 containerd[2141]: time="2025-02-13T19:51:38.328461377Z" level=info msg="StartContainer for \"247928dc3cc7092d37cd8bb37dbfb4fb2d5dca9ec90d38c5f995744783254be1\" returns successfully"
Feb 13 19:51:38.340934 sshd[5411]: Accepted publickey for core from 139.178.89.65 port 35896 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4
Feb 13 19:51:38.345967 sshd[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:38.367396 systemd-logind[2111]: New session 29 of user core.
Feb 13 19:51:38.372687 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 19:51:38.422973 containerd[2141]: time="2025-02-13T19:51:38.422217161Z" level=info msg="shim disconnected" id=247928dc3cc7092d37cd8bb37dbfb4fb2d5dca9ec90d38c5f995744783254be1 namespace=k8s.io
Feb 13 19:51:38.422973 containerd[2141]: time="2025-02-13T19:51:38.422335565Z" level=warning msg="cleaning up after shim disconnected" id=247928dc3cc7092d37cd8bb37dbfb4fb2d5dca9ec90d38c5f995744783254be1 namespace=k8s.io
Feb 13 19:51:38.422973 containerd[2141]: time="2025-02-13T19:51:38.422513621Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:38.476413 containerd[2141]: time="2025-02-13T19:51:38.476333885Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:51:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:51:38.867201 systemd[1]: run-containerd-runc-k8s.io-c627ace40073b8c71dda19701fe5667815f201c3918169b54244d7c150adb30c-runc.Djwe23.mount: Deactivated successfully.
Feb 13 19:51:39.442931 containerd[2141]: time="2025-02-13T19:51:39.442838058Z" level=info msg="CreateContainer within sandbox \"c627ace40073b8c71dda19701fe5667815f201c3918169b54244d7c150adb30c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:51:39.482866 containerd[2141]: time="2025-02-13T19:51:39.482328858Z" level=info msg="CreateContainer within sandbox \"c627ace40073b8c71dda19701fe5667815f201c3918169b54244d7c150adb30c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1710dec9b950a9b8beb9ed9a32b7b4d4c02d6c63f4591c47f490b0dfa2b91ddb\""
Feb 13 19:51:39.487852 containerd[2141]: time="2025-02-13T19:51:39.487758690Z" level=info msg="StartContainer for \"1710dec9b950a9b8beb9ed9a32b7b4d4c02d6c63f4591c47f490b0dfa2b91ddb\""
Feb 13 19:51:39.590514 containerd[2141]: time="2025-02-13T19:51:39.590249743Z" level=info msg="StartContainer for \"1710dec9b950a9b8beb9ed9a32b7b4d4c02d6c63f4591c47f490b0dfa2b91ddb\" returns successfully"
Feb 13 19:51:39.654922 containerd[2141]: time="2025-02-13T19:51:39.654616723Z" level=info msg="shim disconnected" id=1710dec9b950a9b8beb9ed9a32b7b4d4c02d6c63f4591c47f490b0dfa2b91ddb namespace=k8s.io
Feb 13 19:51:39.654922 containerd[2141]: time="2025-02-13T19:51:39.654744043Z" level=warning msg="cleaning up after shim disconnected" id=1710dec9b950a9b8beb9ed9a32b7b4d4c02d6c63f4591c47f490b0dfa2b91ddb namespace=k8s.io
Feb 13 19:51:39.654922 containerd[2141]: time="2025-02-13T19:51:39.654770827Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:39.867521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1710dec9b950a9b8beb9ed9a32b7b4d4c02d6c63f4591c47f490b0dfa2b91ddb-rootfs.mount: Deactivated successfully.
Feb 13 19:51:40.449754 containerd[2141]: time="2025-02-13T19:51:40.449653963Z" level=info msg="CreateContainer within sandbox \"c627ace40073b8c71dda19701fe5667815f201c3918169b54244d7c150adb30c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:51:40.494350 containerd[2141]: time="2025-02-13T19:51:40.488331427Z" level=info msg="CreateContainer within sandbox \"c627ace40073b8c71dda19701fe5667815f201c3918169b54244d7c150adb30c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3acfab975e48876cbaee935dd70c9b9a6bd162a75d3f09ef4db69fb98a999a73\""
Feb 13 19:51:40.494350 containerd[2141]: time="2025-02-13T19:51:40.493594279Z" level=info msg="StartContainer for \"3acfab975e48876cbaee935dd70c9b9a6bd162a75d3f09ef4db69fb98a999a73\""
Feb 13 19:51:40.493575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134191625.mount: Deactivated successfully.
Feb 13 19:51:40.620232 containerd[2141]: time="2025-02-13T19:51:40.620148464Z" level=info msg="StartContainer for \"3acfab975e48876cbaee935dd70c9b9a6bd162a75d3f09ef4db69fb98a999a73\" returns successfully"
Feb 13 19:51:40.711637 containerd[2141]: time="2025-02-13T19:51:40.711155757Z" level=info msg="shim disconnected" id=3acfab975e48876cbaee935dd70c9b9a6bd162a75d3f09ef4db69fb98a999a73 namespace=k8s.io
Feb 13 19:51:40.711637 containerd[2141]: time="2025-02-13T19:51:40.711257097Z" level=warning msg="cleaning up after shim disconnected" id=3acfab975e48876cbaee935dd70c9b9a6bd162a75d3f09ef4db69fb98a999a73 namespace=k8s.io
Feb 13 19:51:40.711637 containerd[2141]: time="2025-02-13T19:51:40.711280101Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:40.869628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3acfab975e48876cbaee935dd70c9b9a6bd162a75d3f09ef4db69fb98a999a73-rootfs.mount: Deactivated successfully.
Feb 13 19:51:41.453985 containerd[2141]: time="2025-02-13T19:51:41.453624080Z" level=info msg="CreateContainer within sandbox \"c627ace40073b8c71dda19701fe5667815f201c3918169b54244d7c150adb30c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:51:41.489782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4140323178.mount: Deactivated successfully.
Feb 13 19:51:41.492141 containerd[2141]: time="2025-02-13T19:51:41.491926412Z" level=info msg="CreateContainer within sandbox \"c627ace40073b8c71dda19701fe5667815f201c3918169b54244d7c150adb30c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dbc2d55896450c3413e4132e6ece72df5d42707d786734728b915e21aab5bd60\""
Feb 13 19:51:41.494562 containerd[2141]: time="2025-02-13T19:51:41.494443796Z" level=info msg="StartContainer for \"dbc2d55896450c3413e4132e6ece72df5d42707d786734728b915e21aab5bd60\""
Feb 13 19:51:41.594252 containerd[2141]: time="2025-02-13T19:51:41.593816085Z" level=info msg="StartContainer for \"dbc2d55896450c3413e4132e6ece72df5d42707d786734728b915e21aab5bd60\" returns successfully"
Feb 13 19:51:41.633529 containerd[2141]: time="2025-02-13T19:51:41.633249321Z" level=info msg="shim disconnected" id=dbc2d55896450c3413e4132e6ece72df5d42707d786734728b915e21aab5bd60 namespace=k8s.io
Feb 13 19:51:41.633529 containerd[2141]: time="2025-02-13T19:51:41.633404721Z" level=warning msg="cleaning up after shim disconnected" id=dbc2d55896450c3413e4132e6ece72df5d42707d786734728b915e21aab5bd60 namespace=k8s.io
Feb 13 19:51:41.633529 containerd[2141]: time="2025-02-13T19:51:41.633425109Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:41.804789 containerd[2141]: time="2025-02-13T19:51:41.804328114Z" level=info msg="StopPodSandbox for \"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\""
Feb 13 19:51:41.804789 containerd[2141]: time="2025-02-13T19:51:41.804589462Z" level=info msg="TearDown network for sandbox \"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\" successfully"
Feb 13 19:51:41.804789 containerd[2141]: time="2025-02-13T19:51:41.804616798Z" level=info msg="StopPodSandbox for \"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\" returns successfully"
Feb 13 19:51:41.806139 containerd[2141]: time="2025-02-13T19:51:41.806073070Z" level=info msg="RemovePodSandbox for \"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\""
Feb 13 19:51:41.806268 containerd[2141]: time="2025-02-13T19:51:41.806151142Z" level=info msg="Forcibly stopping sandbox \"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\""
Feb 13 19:51:41.806268 containerd[2141]: time="2025-02-13T19:51:41.806249926Z" level=info msg="TearDown network for sandbox \"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\" successfully"
Feb 13 19:51:41.812558 containerd[2141]: time="2025-02-13T19:51:41.812470894Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:51:41.812713 containerd[2141]: time="2025-02-13T19:51:41.812572282Z" level=info msg="RemovePodSandbox \"34929c430a10e411378b7f7e486f52a80bbcc8ee093595c6a26d5560d59b793e\" returns successfully"
Feb 13 19:51:41.813437 containerd[2141]: time="2025-02-13T19:51:41.813385234Z" level=info msg="StopPodSandbox for \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\""
Feb 13 19:51:41.813760 containerd[2141]: time="2025-02-13T19:51:41.813631474Z" level=info msg="TearDown network for sandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" successfully"
Feb 13 19:51:41.813760 containerd[2141]: time="2025-02-13T19:51:41.813758890Z" level=info msg="StopPodSandbox for \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" returns successfully"
Feb 13 19:51:41.814811 containerd[2141]: time="2025-02-13T19:51:41.814382554Z" level=info msg="RemovePodSandbox for \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\""
Feb 13 19:51:41.814811 containerd[2141]: time="2025-02-13T19:51:41.814429654Z" level=info msg="Forcibly stopping sandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\""
Feb 13 19:51:41.814811 containerd[2141]: time="2025-02-13T19:51:41.814521286Z" level=info msg="TearDown network for sandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" successfully"
Feb 13 19:51:41.820893 containerd[2141]: time="2025-02-13T19:51:41.820800298Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:51:41.820893 containerd[2141]: time="2025-02-13T19:51:41.820886098Z" level=info msg="RemovePodSandbox \"b5ee7a2239d7d2b142481572fa36beb332d02e4771bc576b75c582da6c397758\" returns successfully"
Feb 13 19:51:41.868952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbc2d55896450c3413e4132e6ece72df5d42707d786734728b915e21aab5bd60-rootfs.mount: Deactivated successfully.
Feb 13 19:51:42.136919 kubelet[3461]: E0213 19:51:42.136734 3461 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:51:42.461683 containerd[2141]: time="2025-02-13T19:51:42.461220213Z" level=info msg="CreateContainer within sandbox \"c627ace40073b8c71dda19701fe5667815f201c3918169b54244d7c150adb30c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:51:42.502126 containerd[2141]: time="2025-02-13T19:51:42.499324593Z" level=info msg="CreateContainer within sandbox \"c627ace40073b8c71dda19701fe5667815f201c3918169b54244d7c150adb30c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b83c9833bf745a52b221314aba256b559f20ed27c00c3750b8dc2204a5f77ab5\""
Feb 13 19:51:42.505674 containerd[2141]: time="2025-02-13T19:51:42.504811665Z" level=info msg="StartContainer for \"b83c9833bf745a52b221314aba256b559f20ed27c00c3750b8dc2204a5f77ab5\""
Feb 13 19:51:42.511319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3876007770.mount: Deactivated successfully.
Feb 13 19:51:42.630133 containerd[2141]: time="2025-02-13T19:51:42.629614222Z" level=info msg="StartContainer for \"b83c9833bf745a52b221314aba256b559f20ed27c00c3750b8dc2204a5f77ab5\" returns successfully"
Feb 13 19:51:43.464121 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:51:43.599009 update_engine[2116]: I20250213 19:51:43.598161 2116 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:51:43.599009 update_engine[2116]: I20250213 19:51:43.598593 2116 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:51:43.599009 update_engine[2116]: I20250213 19:51:43.598938 2116 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:51:43.600179 update_engine[2116]: E20250213 19:51:43.600104 2116 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:51:43.600400 update_engine[2116]: I20250213 19:51:43.600366 2116 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 19:51:43.600895 update_engine[2116]: I20250213 19:51:43.600505 2116 omaha_request_action.cc:617] Omaha request response:
Feb 13 19:51:43.600895 update_engine[2116]: E20250213 19:51:43.600628 2116 omaha_request_action.cc:636] Omaha request network transfer failed.
Feb 13 19:51:43.600895 update_engine[2116]: I20250213 19:51:43.600662 2116 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 19:51:43.600895 update_engine[2116]: I20250213 19:51:43.600677 2116 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 19:51:43.600895 update_engine[2116]: I20250213 19:51:43.600692 2116 update_attempter.cc:306] Processing Done.
Feb 13 19:51:43.600895 update_engine[2116]: E20250213 19:51:43.600719 2116 update_attempter.cc:619] Update failed.
Feb 13 19:51:43.600895 update_engine[2116]: I20250213 19:51:43.600735 2116 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 19:51:43.600895 update_engine[2116]: I20250213 19:51:43.600750 2116 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 19:51:43.600895 update_engine[2116]: I20250213 19:51:43.600766 2116 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 19:51:43.601989 update_engine[2116]: I20250213 19:51:43.601464 2116 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 19:51:43.601989 update_engine[2116]: I20250213 19:51:43.601516 2116 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 19:51:43.601989 update_engine[2116]: I20250213 19:51:43.601533 2116 omaha_request_action.cc:272] Request:
Feb 13 19:51:43.601989 update_engine[2116]:
Feb 13 19:51:43.601989 update_engine[2116]:
Feb 13 19:51:43.601989 update_engine[2116]:
Feb 13 19:51:43.601989 update_engine[2116]:
Feb 13 19:51:43.601989 update_engine[2116]:
Feb 13 19:51:43.601989 update_engine[2116]:
Feb 13 19:51:43.601989 update_engine[2116]: I20250213 19:51:43.601550 2116 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:51:43.601989 update_engine[2116]: I20250213 19:51:43.601835 2116 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:51:43.602775 locksmithd[2169]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 19:51:43.603736 update_engine[2116]: I20250213 19:51:43.603550 2116 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:51:43.604404 update_engine[2116]: E20250213 19:51:43.603894 2116 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:51:43.604404 update_engine[2116]: I20250213 19:51:43.604007 2116 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 19:51:43.604404 update_engine[2116]: I20250213 19:51:43.604027 2116 omaha_request_action.cc:617] Omaha request response:
Feb 13 19:51:43.604404 update_engine[2116]: I20250213 19:51:43.604046 2116 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 19:51:43.604404 update_engine[2116]: I20250213 19:51:43.604063 2116 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 19:51:43.604404 update_engine[2116]: I20250213 19:51:43.604106 2116 update_attempter.cc:306] Processing Done.
Feb 13 19:51:43.604404 update_engine[2116]: I20250213 19:51:43.604130 2116 update_attempter.cc:310] Error event sent.
Feb 13 19:51:43.604404 update_engine[2116]: I20250213 19:51:43.604151 2116 update_check_scheduler.cc:74] Next update check in 48m20s
Feb 13 19:51:43.604925 locksmithd[2169]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 19:51:43.816670 kubelet[3461]: E0213 19:51:43.816147 3461 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-tvg59" podUID="3f456e4c-2bc6-4f21-8c97-0b034f913878"
Feb 13 19:51:44.975698 systemd[1]: run-containerd-runc-k8s.io-b83c9833bf745a52b221314aba256b559f20ed27c00c3750b8dc2204a5f77ab5-runc.M2gO9F.mount: Deactivated successfully.
Feb 13 19:51:45.288286 kubelet[3461]: I0213 19:51:45.286516 3461 setters.go:580] "Node became not ready" node="ip-172-31-20-210" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:51:45Z","lastTransitionTime":"2025-02-13T19:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:51:45.818340 kubelet[3461]: E0213 19:51:45.816068 3461 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-tvg59" podUID="3f456e4c-2bc6-4f21-8c97-0b034f913878"
Feb 13 19:51:47.266566 systemd[1]: run-containerd-runc-k8s.io-b83c9833bf745a52b221314aba256b559f20ed27c00c3750b8dc2204a5f77ab5-runc.Dc8ku6.mount: Deactivated successfully.
Feb 13 19:51:47.978049 systemd-networkd[1683]: lxc_health: Link UP
Feb 13 19:51:48.000666 (udev-worker)[6209]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:51:48.007694 systemd-networkd[1683]: lxc_health: Gained carrier
Feb 13 19:51:48.103634 kubelet[3461]: I0213 19:51:48.101607 3461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gsfc2" podStartSLOduration=11.101583265 podStartE2EDuration="11.101583265s" podCreationTimestamp="2025-02-13 19:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:43.527309075 +0000 UTC m=+121.959733723" watchObservedRunningTime="2025-02-13 19:51:48.101583265 +0000 UTC m=+126.534007853"
Feb 13 19:51:49.194513 systemd-networkd[1683]: lxc_health: Gained IPv6LL
Feb 13 19:51:51.765570 ntpd[2091]: Listen normally on 13 lxc_health [fe80::648c:32ff:fe74:9c22%14]:123
Feb 13 19:51:51.766869 ntpd[2091]: 13 Feb 19:51:51 ntpd[2091]: Listen normally on 13 lxc_health [fe80::648c:32ff:fe74:9c22%14]:123
Feb 13 19:51:54.657558 sshd[5411]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:54.667888 systemd[1]: sshd@28-172.31.20.210:22-139.178.89.65:35896.service: Deactivated successfully.
Feb 13 19:51:54.684148 systemd-logind[2111]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:51:54.688336 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:51:54.695791 systemd-logind[2111]: Removed session 29.
Feb 13 19:52:09.551393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19bf5070011a6a9e530d0d1d0455c9b70798db85ee1f0df7ca7887cec436738e-rootfs.mount: Deactivated successfully.
Feb 13 19:52:09.595624 containerd[2141]: time="2025-02-13T19:52:09.595513164Z" level=info msg="shim disconnected" id=19bf5070011a6a9e530d0d1d0455c9b70798db85ee1f0df7ca7887cec436738e namespace=k8s.io
Feb 13 19:52:09.595624 containerd[2141]: time="2025-02-13T19:52:09.595589532Z" level=warning msg="cleaning up after shim disconnected" id=19bf5070011a6a9e530d0d1d0455c9b70798db85ee1f0df7ca7887cec436738e namespace=k8s.io
Feb 13 19:52:09.595624 containerd[2141]: time="2025-02-13T19:52:09.595611888Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:52:10.578123 kubelet[3461]: I0213 19:52:10.577952 3461 scope.go:117] "RemoveContainer" containerID="19bf5070011a6a9e530d0d1d0455c9b70798db85ee1f0df7ca7887cec436738e"
Feb 13 19:52:10.583274 containerd[2141]: time="2025-02-13T19:52:10.583203481Z" level=info msg="CreateContainer within sandbox \"3c463d57dd15d7199826ba6735e9ade235b0d5e675b7791938c109cba78572c4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 19:52:10.609612 containerd[2141]: time="2025-02-13T19:52:10.609519889Z" level=info msg="CreateContainer within sandbox \"3c463d57dd15d7199826ba6735e9ade235b0d5e675b7791938c109cba78572c4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0fa5e112aed34d15545bb7f13cecea69c543ceda6e743a513853bcad2fed6143\""
Feb 13 19:52:10.610979 containerd[2141]: time="2025-02-13T19:52:10.610919473Z" level=info msg="StartContainer for \"0fa5e112aed34d15545bb7f13cecea69c543ceda6e743a513853bcad2fed6143\""
Feb 13 19:52:10.732625 containerd[2141]: time="2025-02-13T19:52:10.732363710Z" level=info msg="StartContainer for \"0fa5e112aed34d15545bb7f13cecea69c543ceda6e743a513853bcad2fed6143\" returns successfully"
Feb 13 19:52:13.307705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d0e9f31805727a501e664f5c04710f2994ef15b14034bdc18aa2b72cd38c054-rootfs.mount: Deactivated successfully.
Feb 13 19:52:13.314986 containerd[2141]: time="2025-02-13T19:52:13.314498499Z" level=info msg="shim disconnected" id=5d0e9f31805727a501e664f5c04710f2994ef15b14034bdc18aa2b72cd38c054 namespace=k8s.io
Feb 13 19:52:13.316168 containerd[2141]: time="2025-02-13T19:52:13.314892615Z" level=warning msg="cleaning up after shim disconnected" id=5d0e9f31805727a501e664f5c04710f2994ef15b14034bdc18aa2b72cd38c054 namespace=k8s.io
Feb 13 19:52:13.316168 containerd[2141]: time="2025-02-13T19:52:13.315754155Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:52:13.343342 containerd[2141]: time="2025-02-13T19:52:13.343181115Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:52:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:52:13.594291 kubelet[3461]: I0213 19:52:13.594029 3461 scope.go:117] "RemoveContainer" containerID="5d0e9f31805727a501e664f5c04710f2994ef15b14034bdc18aa2b72cd38c054"
Feb 13 19:52:13.602387 containerd[2141]: time="2025-02-13T19:52:13.600380392Z" level=info msg="CreateContainer within sandbox \"01ce4274fabf3ba1b3579e4172d2789a6f78914cb32779190c311365c842c8e7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 19:52:13.636512 containerd[2141]: time="2025-02-13T19:52:13.635857420Z" level=info msg="CreateContainer within sandbox \"01ce4274fabf3ba1b3579e4172d2789a6f78914cb32779190c311365c842c8e7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"fee6328272dceac2b103a88eac37cef561176fd2848183343ce6b02befdf652b\""
Feb 13 19:52:13.637382 containerd[2141]: time="2025-02-13T19:52:13.637341328Z" level=info msg="StartContainer for \"fee6328272dceac2b103a88eac37cef561176fd2848183343ce6b02befdf652b\""
Feb 13 19:52:13.819454 containerd[2141]: time="2025-02-13T19:52:13.818827745Z" level=info msg="StartContainer for \"fee6328272dceac2b103a88eac37cef561176fd2848183343ce6b02befdf652b\" returns successfully"
Feb 13 19:52:14.971038 kubelet[3461]: E0213 19:52:14.968792 3461 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-210?timeout=10s\": context deadline exceeded"