Feb 13 19:48:41.204667 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:48:41.204714 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 19:48:41.204739 kernel: KASLR disabled due to lack of seed
Feb 13 19:48:41.204756 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:48:41.204772 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Feb 13 19:48:41.204788 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:48:41.204807 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:48:41.204824 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:48:41.204841 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:48:41.204857 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:48:41.204878 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:48:41.204894 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:48:41.204911 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:48:41.204927 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:48:41.204947 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:48:41.204968 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:48:41.204986 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:48:41.205003 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:48:41.205020 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:48:41.205037 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:48:41.205054 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:48:41.205096 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:48:41.205114 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:48:41.205131 kernel: Zone ranges:
Feb 13 19:48:41.205148 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:48:41.205165 kernel: DMA32 empty
Feb 13 19:48:41.205188 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:48:41.205206 kernel: Movable zone start for each node
Feb 13 19:48:41.205222 kernel: Early memory node ranges
Feb 13 19:48:41.205240 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:48:41.205257 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:48:41.205274 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:48:41.205290 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:48:41.205307 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:48:41.205324 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:48:41.205341 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:48:41.205358 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:48:41.205374 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:48:41.205396 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:48:41.205414 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:48:41.205438 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:48:41.205456 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:48:41.205474 kernel: psci: Trusted OS migration not required
Feb 13 19:48:41.205496 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:48:41.205514 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:48:41.205532 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:48:41.205551 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:48:41.205568 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:48:41.205587 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:48:41.205604 kernel: CPU features: detected: Spectre-v2
Feb 13 19:48:41.205622 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:48:41.205639 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:48:41.205657 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:48:41.205675 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:48:41.205696 kernel: alternatives: applying boot alternatives
Feb 13 19:48:41.205717 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:48:41.205736 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:48:41.205754 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:48:41.205772 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:48:41.205789 kernel: Fallback order for Node 0: 0
Feb 13 19:48:41.205807 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:48:41.205824 kernel: Policy zone: Normal
Feb 13 19:48:41.205842 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:48:41.205860 kernel: software IO TLB: area num 2.
Feb 13 19:48:41.205877 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:48:41.205901 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Feb 13 19:48:41.205919 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:48:41.205936 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:48:41.205955 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:48:41.205973 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:48:41.205991 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:48:41.206009 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:48:41.206027 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:48:41.206045 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:48:41.206079 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:48:41.206102 kernel: GICv3: 96 SPIs implemented
Feb 13 19:48:41.206125 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:48:41.206143 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:48:41.206161 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:48:41.206179 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:48:41.206196 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:48:41.206214 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:48:41.206232 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:48:41.206250 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:48:41.206267 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:48:41.206285 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:48:41.206303 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:48:41.206321 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:48:41.206343 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:48:41.206361 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:48:41.206379 kernel: Console: colour dummy device 80x25
Feb 13 19:48:41.206398 kernel: printk: console [tty1] enabled
Feb 13 19:48:41.206416 kernel: ACPI: Core revision 20230628
Feb 13 19:48:41.206434 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:48:41.206452 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:48:41.206471 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:48:41.206489 kernel: landlock: Up and running.
Feb 13 19:48:41.206511 kernel: SELinux: Initializing.
Feb 13 19:48:41.206529 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:48:41.206548 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:48:41.206566 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:48:41.206584 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:48:41.206602 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:48:41.206621 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:48:41.206639 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:48:41.206657 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:48:41.206680 kernel: Remapping and enabling EFI services.
Feb 13 19:48:41.206698 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:48:41.206716 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:48:41.206734 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:48:41.206752 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:48:41.206770 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:48:41.206788 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:48:41.206806 kernel: SMP: Total of 2 processors activated.
Feb 13 19:48:41.206824 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:48:41.206846 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:48:41.206864 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:48:41.206882 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:48:41.206912 kernel: alternatives: applying system-wide alternatives
Feb 13 19:48:41.206935 kernel: devtmpfs: initialized
Feb 13 19:48:41.206954 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:48:41.206973 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:48:41.206991 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:48:41.207010 kernel: SMBIOS 3.0.0 present.
Feb 13 19:48:41.207029 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:48:41.207052 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:48:41.207560 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:48:41.207585 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:48:41.207604 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:48:41.207624 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:48:41.207643 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1
Feb 13 19:48:41.207662 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:48:41.207690 kernel: cpuidle: using governor menu
Feb 13 19:48:41.207709 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:48:41.207728 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:48:41.207747 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:48:41.207766 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:48:41.207785 kernel: Modules: 17520 pages in range for non-PLT usage
Feb 13 19:48:41.207804 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 19:48:41.207823 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:48:41.207842 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:48:41.207866 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:48:41.207885 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:48:41.207904 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:48:41.207922 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:48:41.207941 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:48:41.207960 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:48:41.207979 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:48:41.207998 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:48:41.208017 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:48:41.208040 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:48:41.208107 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:48:41.208132 kernel: ACPI: Interpreter enabled
Feb 13 19:48:41.208152 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:48:41.208170 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:48:41.208207 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:48:41.208516 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:48:41.208735 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:48:41.208950 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:48:41.210746 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:48:41.210962 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:48:41.210989 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:48:41.211009 kernel: acpiphp: Slot [1] registered
Feb 13 19:48:41.211028 kernel: acpiphp: Slot [2] registered
Feb 13 19:48:41.211047 kernel: acpiphp: Slot [3] registered
Feb 13 19:48:41.211169 kernel: acpiphp: Slot [4] registered
Feb 13 19:48:41.211216 kernel: acpiphp: Slot [5] registered
Feb 13 19:48:41.211237 kernel: acpiphp: Slot [6] registered
Feb 13 19:48:41.211256 kernel: acpiphp: Slot [7] registered
Feb 13 19:48:41.211275 kernel: acpiphp: Slot [8] registered
Feb 13 19:48:41.211294 kernel: acpiphp: Slot [9] registered
Feb 13 19:48:41.211312 kernel: acpiphp: Slot [10] registered
Feb 13 19:48:41.211331 kernel: acpiphp: Slot [11] registered
Feb 13 19:48:41.211350 kernel: acpiphp: Slot [12] registered
Feb 13 19:48:41.211369 kernel: acpiphp: Slot [13] registered
Feb 13 19:48:41.211388 kernel: acpiphp: Slot [14] registered
Feb 13 19:48:41.211412 kernel: acpiphp: Slot [15] registered
Feb 13 19:48:41.211431 kernel: acpiphp: Slot [16] registered
Feb 13 19:48:41.211449 kernel: acpiphp: Slot [17] registered
Feb 13 19:48:41.211468 kernel: acpiphp: Slot [18] registered
Feb 13 19:48:41.211486 kernel: acpiphp: Slot [19] registered
Feb 13 19:48:41.211505 kernel: acpiphp: Slot [20] registered
Feb 13 19:48:41.211524 kernel: acpiphp: Slot [21] registered
Feb 13 19:48:41.211542 kernel: acpiphp: Slot [22] registered
Feb 13 19:48:41.211561 kernel: acpiphp: Slot [23] registered
Feb 13 19:48:41.211584 kernel: acpiphp: Slot [24] registered
Feb 13 19:48:41.211603 kernel: acpiphp: Slot [25] registered
Feb 13 19:48:41.211621 kernel: acpiphp: Slot [26] registered
Feb 13 19:48:41.211640 kernel: acpiphp: Slot [27] registered
Feb 13 19:48:41.211659 kernel: acpiphp: Slot [28] registered
Feb 13 19:48:41.211678 kernel: acpiphp: Slot [29] registered
Feb 13 19:48:41.211697 kernel: acpiphp: Slot [30] registered
Feb 13 19:48:41.211715 kernel: acpiphp: Slot [31] registered
Feb 13 19:48:41.211734 kernel: PCI host bridge to bus 0000:00
Feb 13 19:48:41.211948 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:48:41.214464 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:48:41.214674 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:48:41.214867 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:48:41.215140 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:48:41.215400 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:48:41.215614 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:48:41.215845 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:48:41.216082 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:48:41.217676 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:48:41.217927 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:48:41.218204 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:48:41.218433 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:48:41.218668 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:48:41.218878 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:48:41.219161 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:48:41.219397 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:48:41.219604 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:48:41.219808 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:48:41.220020 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:48:41.220278 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:48:41.220469 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:48:41.220666 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:48:41.220695 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:48:41.220717 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:48:41.220738 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:48:41.220759 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:48:41.220779 kernel: iommu: Default domain type: Translated
Feb 13 19:48:41.220799 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:48:41.220829 kernel: efivars: Registered efivars operations
Feb 13 19:48:41.220849 kernel: vgaarb: loaded
Feb 13 19:48:41.220869 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:48:41.220917 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:48:41.220938 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:48:41.220975 kernel: pnp: PnP ACPI init
Feb 13 19:48:41.221399 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:48:41.221441 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:48:41.221473 kernel: NET: Registered PF_INET protocol family
Feb 13 19:48:41.221494 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:48:41.221514 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:48:41.221535 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:48:41.221557 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:48:41.221579 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:48:41.221598 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:48:41.221618 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:48:41.221637 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:48:41.221661 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:48:41.221680 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:48:41.221699 kernel: kvm [1]: HYP mode not available
Feb 13 19:48:41.221718 kernel: Initialise system trusted keyrings
Feb 13 19:48:41.221738 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:48:41.221757 kernel: Key type asymmetric registered
Feb 13 19:48:41.221776 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:48:41.221795 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:48:41.221815 kernel: io scheduler mq-deadline registered
Feb 13 19:48:41.221840 kernel: io scheduler kyber registered
Feb 13 19:48:41.221860 kernel: io scheduler bfq registered
Feb 13 19:48:41.222198 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:48:41.222234 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:48:41.222255 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:48:41.222275 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:48:41.222294 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:48:41.222314 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:48:41.222343 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:48:41.222574 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:48:41.222603 kernel: printk: console [ttyS0] disabled
Feb 13 19:48:41.222622 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:48:41.222641 kernel: printk: console [ttyS0] enabled
Feb 13 19:48:41.222661 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:48:41.222680 kernel: thunder_xcv, ver 1.0
Feb 13 19:48:41.222700 kernel: thunder_bgx, ver 1.0
Feb 13 19:48:41.222719 kernel: nicpf, ver 1.0
Feb 13 19:48:41.222745 kernel: nicvf, ver 1.0
Feb 13 19:48:41.223123 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:48:41.224200 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:48:40 UTC (1739476120)
Feb 13 19:48:41.224245 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:48:41.224265 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:48:41.224285 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:48:41.224305 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:48:41.224324 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:48:41.224352 kernel: Segment Routing with IPv6
Feb 13 19:48:41.224371 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:48:41.224390 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:48:41.224409 kernel: Key type dns_resolver registered
Feb 13 19:48:41.224428 kernel: registered taskstats version 1
Feb 13 19:48:41.224448 kernel: Loading compiled-in X.509 certificates
Feb 13 19:48:41.224468 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 19:48:41.224487 kernel: Key type .fscrypt registered
Feb 13 19:48:41.224506 kernel: Key type fscrypt-provisioning registered
Feb 13 19:48:41.224530 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:48:41.224550 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:48:41.224569 kernel: ima: No architecture policies found
Feb 13 19:48:41.224588 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:48:41.224607 kernel: clk: Disabling unused clocks
Feb 13 19:48:41.224629 kernel: Freeing unused kernel memory: 39360K
Feb 13 19:48:41.224648 kernel: Run /init as init process
Feb 13 19:48:41.224667 kernel: with arguments:
Feb 13 19:48:41.224686 kernel: /init
Feb 13 19:48:41.224705 kernel: with environment:
Feb 13 19:48:41.224728 kernel: HOME=/
Feb 13 19:48:41.224747 kernel: TERM=linux
Feb 13 19:48:41.224766 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:48:41.224789 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:48:41.224814 systemd[1]: Detected virtualization amazon.
Feb 13 19:48:41.224835 systemd[1]: Detected architecture arm64.
Feb 13 19:48:41.224856 systemd[1]: Running in initrd.
Feb 13 19:48:41.224881 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:48:41.224902 systemd[1]: Hostname set to .
Feb 13 19:48:41.224923 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:48:41.224960 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:48:41.224989 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:48:41.225011 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:48:41.225035 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:48:41.225058 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:48:41.226147 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:48:41.226175 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:48:41.226200 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:48:41.226221 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:48:41.226243 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:48:41.226263 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:48:41.226284 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:48:41.226310 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:48:41.226331 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:48:41.226351 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:48:41.226372 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:48:41.226393 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:48:41.226414 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:48:41.226435 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:48:41.226456 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:48:41.226477 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:48:41.226502 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:48:41.226523 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:48:41.226544 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:48:41.226565 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:48:41.226585 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:48:41.226606 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:48:41.226626 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:48:41.226647 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:48:41.226672 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:48:41.226694 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:48:41.226714 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:48:41.226735 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:48:41.226757 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:48:41.226827 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 19:48:41.226873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:41.226894 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:48:41.226919 systemd-journald[251]: Journal started
Feb 13 19:48:41.226958 systemd-journald[251]: Runtime Journal (/run/log/journal/ec25641d80dd511a05ac746e93e4a9e8) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:48:41.232298 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:48:41.189186 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 19:48:41.234912 kernel: Bridge firewalling registered
Feb 13 19:48:41.233271 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 19:48:41.244741 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:48:41.245531 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:48:41.250473 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:48:41.263255 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:48:41.268050 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:48:41.272171 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:48:41.307447 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:48:41.328106 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:48:41.339437 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:48:41.341818 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:48:41.353170 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:48:41.365366 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:48:41.390676 dracut-cmdline[283]: dracut-dracut-053
Feb 13 19:48:41.399132 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:48:41.460740 systemd-resolved[287]: Positive Trust Anchors:
Feb 13 19:48:41.460777 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:48:41.460839 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:48:41.535103 kernel: SCSI subsystem initialized
Feb 13 19:48:41.542092 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:48:41.555107 kernel: iscsi: registered transport (tcp)
Feb 13 19:48:41.577364 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:48:41.577450 kernel: QLogic iSCSI HBA Driver
Feb 13 19:48:41.678103 kernel: random: crng init done
Feb 13 19:48:41.678442 systemd-resolved[287]: Defaulting to hostname 'linux'.
Feb 13 19:48:41.681892 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:48:41.685982 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:48:41.709407 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:48:41.722385 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:48:41.755928 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:48:41.756051 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:48:41.758110 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:48:41.823110 kernel: raid6: neonx8 gen() 6698 MB/s
Feb 13 19:48:41.840100 kernel: raid6: neonx4 gen() 6521 MB/s
Feb 13 19:48:41.857096 kernel: raid6: neonx2 gen() 5431 MB/s
Feb 13 19:48:41.874096 kernel: raid6: neonx1 gen() 3950 MB/s
Feb 13 19:48:41.891095 kernel: raid6: int64x8 gen() 3823 MB/s
Feb 13 19:48:41.908095 kernel: raid6: int64x4 gen() 3704 MB/s
Feb 13 19:48:41.925094 kernel: raid6: int64x2 gen() 3604 MB/s
Feb 13 19:48:41.942890 kernel: raid6: int64x1 gen() 2767 MB/s
Feb 13 19:48:41.942925 kernel: raid6: using algorithm neonx8 gen() 6698 MB/s
Feb 13 19:48:41.960877 kernel: raid6: .... xor() 4876 MB/s, rmw enabled
Feb 13 19:48:41.960917 kernel: raid6: using neon recovery algorithm
Feb 13 19:48:41.969363 kernel: xor: measuring software checksum speed
Feb 13 19:48:41.969416 kernel: 8regs : 10970 MB/sec
Feb 13 19:48:41.970464 kernel: 32regs : 11943 MB/sec
Feb 13 19:48:41.971641 kernel: arm64_neon : 9580 MB/sec
Feb 13 19:48:41.971674 kernel: xor: using function: 32regs (11943 MB/sec)
Feb 13 19:48:42.055115 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:48:42.074801 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:48:42.084451 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:48:42.118389 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Feb 13 19:48:42.127806 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:48:42.145354 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:48:42.179655 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Feb 13 19:48:42.238137 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:48:42.252484 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:48:42.365477 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:48:42.379714 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:48:42.417472 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:48:42.422468 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:48:42.425037 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:48:42.429046 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:48:42.443388 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:48:42.491690 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:48:42.572839 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:48:42.572907 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 19:48:42.588889 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:48:42.591292 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:48:42.591552 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:9c:60:31:7c:c3
Feb 13 19:48:42.580352 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:48:42.580573 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:48:42.588927 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:48:42.591281 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:48:42.591573 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:42.593909 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:48:42.599787 (udev-worker)[536]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:48:42.617742 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:48:42.630116 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 19:48:42.632117 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:48:42.640240 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:48:42.647994 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:48:42.648103 kernel: GPT:9289727 != 16777215
Feb 13 19:48:42.648143 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:48:42.648170 kernel: GPT:9289727 != 16777215
Feb 13 19:48:42.649086 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:48:42.650508 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:48:42.653212 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:48:42.668422 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:48:42.718276 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:48:42.737172 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (522)
Feb 13 19:48:42.779112 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (545)
Feb 13 19:48:42.817526 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:48:42.878481 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:48:42.896623 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:48:42.912492 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:48:42.917308 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:48:42.935417 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:48:42.949123 disk-uuid[662]: Primary Header is updated.
Feb 13 19:48:42.949123 disk-uuid[662]: Secondary Entries is updated.
Feb 13 19:48:42.949123 disk-uuid[662]: Secondary Header is updated.
Feb 13 19:48:42.959096 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:48:42.966296 kernel: GPT:disk_guids don't match.
Feb 13 19:48:42.966359 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:48:42.966385 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:48:43.975134 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:48:43.977203 disk-uuid[663]: The operation has completed successfully.
Feb 13 19:48:44.164086 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:48:44.164672 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:48:44.201379 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:48:44.223128 sh[1007]: Success
Feb 13 19:48:44.248233 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:48:44.354972 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:48:44.370287 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:48:44.376408 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:48:44.414503 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6
Feb 13 19:48:44.414568 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:48:44.414596 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:48:44.415881 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:48:44.416978 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:48:44.532086 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:48:44.561238 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:48:44.565049 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:48:44.580428 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:48:44.592362 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:48:44.618266 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:44.618926 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:48:44.618956 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:48:44.627100 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:48:44.643121 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:48:44.646135 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:44.656165 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:48:44.666413 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:48:44.773642 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:48:44.784362 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:48:44.848382 systemd-networkd[1200]: lo: Link UP
Feb 13 19:48:44.850004 systemd-networkd[1200]: lo: Gained carrier
Feb 13 19:48:44.853837 systemd-networkd[1200]: Enumeration completed
Feb 13 19:48:44.854586 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:48:44.854593 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:48:44.856136 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:48:44.865701 systemd[1]: Reached target network.target - Network.
Feb 13 19:48:44.871710 systemd-networkd[1200]: eth0: Link UP
Feb 13 19:48:44.871723 systemd-networkd[1200]: eth0: Gained carrier
Feb 13 19:48:44.871741 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:48:44.892159 systemd-networkd[1200]: eth0: DHCPv4 address 172.31.26.215/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:48:45.090724 ignition[1112]: Ignition 2.19.0
Feb 13 19:48:45.091284 ignition[1112]: Stage: fetch-offline
Feb 13 19:48:45.091840 ignition[1112]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:45.091865 ignition[1112]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:45.092356 ignition[1112]: Ignition finished successfully
Feb 13 19:48:45.101930 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:48:45.118358 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:48:45.141447 ignition[1210]: Ignition 2.19.0
Feb 13 19:48:45.141479 ignition[1210]: Stage: fetch
Feb 13 19:48:45.142972 ignition[1210]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:45.142997 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:45.143238 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:45.164942 ignition[1210]: PUT result: OK
Feb 13 19:48:45.167883 ignition[1210]: parsed url from cmdline: ""
Feb 13 19:48:45.167900 ignition[1210]: no config URL provided
Feb 13 19:48:45.167917 ignition[1210]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:48:45.167944 ignition[1210]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:48:45.167977 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:45.170947 ignition[1210]: PUT result: OK
Feb 13 19:48:45.171027 ignition[1210]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:48:45.175620 ignition[1210]: GET result: OK
Feb 13 19:48:45.175755 ignition[1210]: parsing config with SHA512: 1992e45aaeffea6e551e244aa9238a4b3fe15d30254455c518800e1358c14538efd8cae1a8239614d86eacecee5f7f040a9923c5f33e70f41a2187fb084f7277
Feb 13 19:48:45.192647 unknown[1210]: fetched base config from "system"
Feb 13 19:48:45.192676 unknown[1210]: fetched base config from "system"
Feb 13 19:48:45.192692 unknown[1210]: fetched user config from "aws"
Feb 13 19:48:45.194993 ignition[1210]: fetch: fetch complete
Feb 13 19:48:45.195005 ignition[1210]: fetch: fetch passed
Feb 13 19:48:45.195440 ignition[1210]: Ignition finished successfully
Feb 13 19:48:45.204703 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:48:45.217479 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:48:45.244139 ignition[1217]: Ignition 2.19.0
Feb 13 19:48:45.244166 ignition[1217]: Stage: kargs
Feb 13 19:48:45.244790 ignition[1217]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:45.244815 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:45.244959 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:45.246749 ignition[1217]: PUT result: OK
Feb 13 19:48:45.256978 ignition[1217]: kargs: kargs passed
Feb 13 19:48:45.257641 ignition[1217]: Ignition finished successfully
Feb 13 19:48:45.264120 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:48:45.279008 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:48:45.302349 ignition[1223]: Ignition 2.19.0
Feb 13 19:48:45.302370 ignition[1223]: Stage: disks
Feb 13 19:48:45.303055 ignition[1223]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:45.303118 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:45.303288 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:45.304620 ignition[1223]: PUT result: OK
Feb 13 19:48:45.313828 ignition[1223]: disks: disks passed
Feb 13 19:48:45.313928 ignition[1223]: Ignition finished successfully
Feb 13 19:48:45.321408 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:48:45.326564 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:48:45.328775 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:48:45.333629 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:48:45.337679 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:48:45.337934 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:48:45.356420 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:48:45.399864 systemd-fsck[1232]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:48:45.404293 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:48:45.416237 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:48:45.503173 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none.
Feb 13 19:48:45.504284 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:48:45.507981 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:48:45.518263 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:48:45.530920 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:48:45.536368 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:48:45.537449 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:48:45.537504 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:48:45.554608 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:48:45.571354 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1251)
Feb 13 19:48:45.572411 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:48:45.580650 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:45.580688 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:48:45.580715 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:48:45.590102 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:48:45.592517 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:48:45.950090 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:48:45.970637 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:48:45.990830 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:48:45.999809 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:48:46.219288 systemd-networkd[1200]: eth0: Gained IPv6LL
Feb 13 19:48:46.295109 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:48:46.306270 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:48:46.310378 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:48:46.335100 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:48:46.336966 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:46.368571 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:48:46.383620 ignition[1364]: INFO : Ignition 2.19.0
Feb 13 19:48:46.383620 ignition[1364]: INFO : Stage: mount
Feb 13 19:48:46.387206 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:46.387206 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:46.387206 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:46.395320 ignition[1364]: INFO : PUT result: OK
Feb 13 19:48:46.399308 ignition[1364]: INFO : mount: mount passed
Feb 13 19:48:46.401183 ignition[1364]: INFO : Ignition finished successfully
Feb 13 19:48:46.406125 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:48:46.417252 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:48:46.512515 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:48:46.543137 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1375)
Feb 13 19:48:46.547098 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1
Feb 13 19:48:46.547149 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:48:46.547190 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:48:46.553104 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:48:46.556258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:48:46.592188 ignition[1392]: INFO : Ignition 2.19.0
Feb 13 19:48:46.592188 ignition[1392]: INFO : Stage: files
Feb 13 19:48:46.595427 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:48:46.595427 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:48:46.599671 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:48:46.602013 ignition[1392]: INFO : PUT result: OK
Feb 13 19:48:46.606702 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:48:46.609313 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:48:46.609313 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:48:46.643652 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:48:46.646342 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:48:46.649457 unknown[1392]: wrote ssh authorized keys file for user: core
Feb 13 19:48:46.651747 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:48:46.665265 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 19:48:46.669006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Feb 13 19:48:46.765718 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:48:46.941504 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 19:48:46.945330 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:48:46.945330 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 19:48:47.410100 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:48:47.552278 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:48:47.555607 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:48:47.555607 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:48:47.555607 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:48:47.570429 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:48:47.570429 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:48:47.570429 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:48:47.570429 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:48:47.570429 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:48:47.570429 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:48:47.570429 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:48:47.570429 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:48:47.570429 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:48:47.570429 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:48:47.570429 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Feb 13 19:48:47.763035 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:48:48.084141 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:48:48.084141 ignition[1392]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:48:48.090658 ignition[1392]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:48:48.090658 ignition[1392]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:48:48.090658 ignition[1392]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:48:48.090658 ignition[1392]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:48:48.090658 ignition[1392]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:48:48.090658 ignition[1392]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:48:48.109907 ignition[1392]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:48:48.109907 ignition[1392]: INFO : files: files passed
Feb 13 19:48:48.109907 ignition[1392]: INFO : Ignition finished successfully
Feb 13 19:48:48.112123 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:48:48.134014 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:48:48.140317 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:48:48.155813 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:48:48.156019 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:48:48.172567 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:48:48.172567 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:48:48.179126 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:48:48.187135 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:48:48.191668 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:48:48.203369 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:48:48.253582 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:48:48.253778 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:48:48.257660 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:48:48.265858 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:48:48.268252 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:48:48.288338 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:48:48.313829 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:48:48.330423 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:48:48.356817 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:48:48.358800 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:48:48.359205 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:48:48.359713 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:48:48.359944 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:48:48.361619 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:48:48.362333 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:48:48.362950 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:48:48.363870 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:48:48.364202 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:48:48.364824 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:48:48.365437 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:48:48.366085 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:48:48.366688 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:48:48.367339 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:48:48.367832 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:48:48.368133 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:48:48.369091 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:48:48.369494 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:48:48.369998 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
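The two grep failures at the top of this stretch come from the root-filesystem completion step looking for an enabled-sysext.conf that does not exist yet; on a first boot that is expected and harmless. A guess at the intent, as a hedged sketch (the two paths are the ones grep printed; treating the file as one extension name per line is an assumption, not confirmed by the log):

    # Hypothetical re-creation of the enabled-sysext.conf lookup that produced the
    # benign "No such file or directory" grep errors above.
    from pathlib import Path

    def enabled_sysexts():
        names = []
        for conf in (Path("/sysroot/etc/flatcar/enabled-sysext.conf"),
                     Path("/sysroot/usr/share/flatcar/enabled-sysext.conf")):
            if conf.is_file():  # missing on first boot, as in the log
                names += [line.strip() for line in conf.read_text().splitlines()
                          if line.strip() and not line.startswith("#")]
        return names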
Feb 13 19:48:48.388631 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:48:48.393336 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:48:48.393826 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:48:48.427790 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:48:48.431910 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:48:48.444515 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:48:48.444930 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:48:48.457513 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:48:48.460373 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:48:48.460855 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:48:48.482535 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:48:48.486912 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:48:48.489910 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:48:48.495415 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:48:48.495697 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:48:48.514788 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:48:48.518349 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:48:48.531906 ignition[1444]: INFO : Ignition 2.19.0 Feb 13 19:48:48.531906 ignition[1444]: INFO : Stage: umount Feb 13 19:48:48.536790 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:48.536790 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:48.536790 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:48.543407 ignition[1444]: INFO : PUT result: OK Feb 13 19:48:48.549160 ignition[1444]: INFO : umount: umount passed Feb 13 19:48:48.550958 ignition[1444]: INFO : Ignition finished successfully Feb 13 19:48:48.554098 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:48:48.557953 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:48:48.558231 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:48:48.562271 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:48:48.562373 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:48:48.568525 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:48:48.568636 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:48:48.570940 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:48:48.571028 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:48:48.577786 systemd[1]: Stopped target network.target - Network. Feb 13 19:48:48.579929 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:48:48.580024 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:48:48.582483 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:48:48.582732 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 13 19:48:48.588722 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:48:48.591253 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:48:48.593009 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:48:48.594139 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:48:48.594216 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:48:48.594432 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:48:48.594496 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:48:48.594720 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:48:48.594799 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:48:48.595045 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:48:48.595340 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:48:48.609491 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:48:48.617786 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:48:48.620688 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:48:48.620864 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:48:48.624473 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:48:48.624640 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:48:48.652139 systemd-networkd[1200]: eth0: DHCPv6 lease lost Feb 13 19:48:48.655666 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:48:48.657874 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:48:48.662592 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:48:48.662900 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:48:48.671810 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:48:48.671919 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:48:48.687277 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:48:48.691941 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:48:48.692095 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:48:48.697178 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:48:48.697269 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:48:48.705657 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:48:48.705746 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:48:48.707959 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:48:48.708039 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:48:48.710907 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:48:48.743633 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:48:48.744206 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:48:48.754825 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:48:48.755242 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Feb 13 19:48:48.761810 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:48:48.761883 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:48:48.764057 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:48:48.764179 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:48:48.773407 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:48:48.773527 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:48:48.782981 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:48:48.783159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:48:48.799314 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:48:48.801752 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:48:48.801859 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:48:48.806779 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:48:48.806882 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:48.825943 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:48:48.828144 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:48:48.832540 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:48:48.832880 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:48:48.842809 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:48:48.854426 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:48:48.875821 systemd[1]: Switching root. Feb 13 19:48:48.926707 systemd-journald[251]: Journal stopped Feb 13 19:48:51.623975 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Feb 13 19:48:51.624148 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:48:51.624195 kernel: SELinux: policy capability open_perms=1 Feb 13 19:48:51.624227 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:48:51.624258 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:48:51.624289 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:48:51.624320 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:48:51.624355 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:48:51.624384 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:48:51.624420 kernel: audit: type=1403 audit(1739476129.601:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:48:51.624460 systemd[1]: Successfully loaded SELinux policy in 72.889ms. Feb 13 19:48:51.624505 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.357ms. Feb 13 19:48:51.624542 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:48:51.624575 systemd[1]: Detected virtualization amazon. Feb 13 19:48:51.624607 systemd[1]: Detected architecture arm64. Feb 13 19:48:51.624636 systemd[1]: Detected first boot. 
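The "SELinux: policy capability ...=1/0" kernel lines printed while the loaded policy takes effect correspond to per-capability flag files under selinuxfs, so the same information can be read back after boot. A small sketch, assuming the standard selinuxfs mount point:

    # Reads back the flags behind lines like
    # "SELinux: policy capability network_peer_controls=1".
    from pathlib import Path

    def selinux_policy_capabilities(selinuxfs: str = "/sys/fs/selinux"):
        caps_dir = Path(selinuxfs) / "policy_capabilities"
        return {f.name: f.read_text().strip() == "1"
                for f in sorted(caps_dir.iterdir()) if f.is_file()}

    # e.g. {"network_peer_controls": True, "open_perms": True, ...}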
Feb 13 19:48:51.624669 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:48:51.624702 zram_generator::config[1486]: No configuration found. Feb 13 19:48:51.624740 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:48:51.624772 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:48:51.624804 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:48:51.624838 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:48:51.624870 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:48:51.624904 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:48:51.624936 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:48:51.624968 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:48:51.625003 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:48:51.625033 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:48:51.625086 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:48:51.625123 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:48:51.625156 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:48:51.625186 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:48:51.625218 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:48:51.625250 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:48:51.625282 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:48:51.625318 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:48:51.625350 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:48:51.625381 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:48:51.625413 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:48:51.625442 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:48:51.625472 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:48:51.625502 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:48:51.625538 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:48:51.625571 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:48:51.625600 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:48:51.625631 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:48:51.625661 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:48:51.625691 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:48:51.625721 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:48:51.625753 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:48:51.625785 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
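Names like system-serial\x2dgetty.slice and dev-disk-by\x2dlabel-OEM.device above show systemd's unit-name escaping at work: "-" doubles as the hierarchy/path separator in unit names, so a literal dash (or any other special byte) is written as a C-style \xXX escape. A simplified sketch of the path-escaping rule; the authoritative definition lives in systemd.unit(5) and systemd-escape(1):

    # Simplified sketch of systemd path escaping, which is why the log shows
    # "dev-disk-by\x2dlabel-OEM.device" for /dev/disk/by-label/OEM.
    def systemd_escape_path(path: str) -> str:
        trimmed = path.strip("/")
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")                  # path separators become dashes
            elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))  # literal "-" becomes \x2d
        return "".join(out) or "-"

    # systemd_escape_path("/dev/disk/by-label/OEM") == "dev-disk-by\\x2dlabel-OEM"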
Feb 13 19:48:51.625814 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:48:51.625850 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:48:51.625880 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:48:51.625909 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:48:51.625941 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:48:51.625973 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:48:51.626003 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:48:51.626034 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:48:51.628114 systemd[1]: Reached target machines.target - Containers. Feb 13 19:48:51.628186 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:48:51.628221 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:48:51.628254 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:48:51.628284 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:48:51.628318 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:48:51.628361 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:48:51.628394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:48:51.628430 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:48:51.628460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:48:51.628495 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:48:51.628530 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:48:51.628562 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:48:51.628593 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:48:51.628626 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:48:51.628656 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:48:51.628686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:48:51.628716 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:48:51.628751 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:48:51.628781 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:48:51.628811 kernel: fuse: init (API version 7.39) Feb 13 19:48:51.628844 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:48:51.628874 systemd[1]: Stopped verity-setup.service. Feb 13 19:48:51.628915 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:48:51.628946 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:48:51.628980 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:48:51.629011 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Feb 13 19:48:51.629050 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:48:51.629108 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:48:51.629143 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:48:51.629257 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:48:51.629386 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:48:51.629431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:48:51.629463 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:48:51.629496 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:48:51.629527 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:48:51.629559 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:48:51.629589 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:48:51.629623 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:48:51.629655 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:48:51.629686 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:48:51.629722 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:48:51.629757 kernel: loop: module loaded Feb 13 19:48:51.629790 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:48:51.629820 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:48:51.629907 systemd-journald[1564]: Collecting audit messages is disabled. Feb 13 19:48:51.629981 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:48:51.630014 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:48:51.630047 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:48:51.632151 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:48:51.632201 systemd-journald[1564]: Journal started Feb 13 19:48:51.632252 systemd-journald[1564]: Runtime Journal (/run/log/journal/ec25641d80dd511a05ac746e93e4a9e8) is 8.0M, max 75.3M, 67.3M free. Feb 13 19:48:50.975335 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:48:51.052363 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 19:48:51.053155 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:48:51.639988 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:48:51.643312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:48:51.646270 kernel: ACPI: bus type drm_connector registered Feb 13 19:48:51.659117 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:48:51.659221 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:48:51.675390 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Feb 13 19:48:51.683178 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:48:51.696095 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:48:51.703727 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:48:51.706515 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:48:51.706881 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:48:51.709813 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:48:51.710232 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:48:51.712903 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:48:51.716178 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:48:51.735730 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:48:51.781775 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:48:51.801400 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:48:51.803855 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:48:51.813362 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:48:51.820051 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:48:51.823636 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:48:51.841481 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:48:51.859191 kernel: loop0: detected capacity change from 0 to 114328 Feb 13 19:48:51.883677 systemd-journald[1564]: Time spent on flushing to /var/log/journal/ec25641d80dd511a05ac746e93e4a9e8 is 91.186ms for 915 entries. Feb 13 19:48:51.883677 systemd-journald[1564]: System Journal (/var/log/journal/ec25641d80dd511a05ac746e93e4a9e8) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:48:52.005603 systemd-journald[1564]: Received client request to flush runtime journal. Feb 13 19:48:52.009896 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:48:51.940175 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:48:51.962607 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:48:51.973331 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:48:51.995309 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:48:51.998517 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:48:52.024559 kernel: loop1: detected capacity change from 0 to 52536 Feb 13 19:48:52.010248 udevadm[1630]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:48:52.019286 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:48:52.042376 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:48:52.055472 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:48:52.079115 kernel: loop2: detected capacity change from 0 to 201592 Feb 13 19:48:52.123039 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. 
Feb 13 19:48:52.125099 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. Feb 13 19:48:52.134824 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:48:52.201105 kernel: loop3: detected capacity change from 0 to 114432 Feb 13 19:48:52.302128 kernel: loop4: detected capacity change from 0 to 114328 Feb 13 19:48:52.329097 kernel: loop5: detected capacity change from 0 to 52536 Feb 13 19:48:52.347138 kernel: loop6: detected capacity change from 0 to 201592 Feb 13 19:48:52.383118 kernel: loop7: detected capacity change from 0 to 114432 Feb 13 19:48:52.394705 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 19:48:52.395702 (sd-merge)[1641]: Merged extensions into '/usr'. Feb 13 19:48:52.403581 systemd[1]: Reloading requested from client PID 1596 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:48:52.403615 systemd[1]: Reloading... Feb 13 19:48:52.573854 zram_generator::config[1664]: No configuration found. Feb 13 19:48:52.905911 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:48:53.043899 systemd[1]: Reloading finished in 639 ms. Feb 13 19:48:53.080500 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:48:53.083779 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:48:53.099376 systemd[1]: Starting ensure-sysext.service... Feb 13 19:48:53.111415 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:48:53.118449 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:48:53.157271 systemd[1]: Reloading requested from client PID 1719 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:48:53.157304 systemd[1]: Reloading... Feb 13 19:48:53.166356 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:48:53.167618 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:48:53.169574 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:48:53.170423 systemd-tmpfiles[1720]: ACLs are not supported, ignoring. Feb 13 19:48:53.170658 systemd-tmpfiles[1720]: ACLs are not supported, ignoring. Feb 13 19:48:53.186904 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:48:53.187227 systemd-tmpfiles[1720]: Skipping /boot Feb 13 19:48:53.210431 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:48:53.210614 systemd-tmpfiles[1720]: Skipping /boot Feb 13 19:48:53.237690 systemd-udevd[1721]: Using default interface naming scheme 'v255'. Feb 13 19:48:53.334097 zram_generator::config[1749]: No configuration found. Feb 13 19:48:53.491513 (udev-worker)[1758]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:48:53.628935 ldconfig[1586]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
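The (sd-merge) lines above are systemd-sysext merging the extension images staged earlier (the kubernetes.raw symlink Ignition wrote under /etc/extensions, plus the OEM-supplied images) into a read-only overlay on /usr, followed by the daemon reload so units shipped by the extensions become visible. A rough sketch of just the discovery step, assuming the standard extension search directories; the actual merge uses loop devices and overlayfs:

    # Illustrative discovery step behind "(sd-merge): Using extensions ...".
    # Only lists candidate extensions; mounting/merging is left to systemd-sysext.
    from pathlib import Path

    SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def extension_candidates():
        for d in map(Path, SEARCH_DIRS):
            if not d.is_dir():
                continue
            for entry in sorted(d.iterdir()):
                if entry.name.endswith(".raw"):
                    yield entry.name[:-4]   # kubernetes.raw -> "kubernetes"
                elif entry.is_dir():
                    yield entry.name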
Feb 13 19:48:53.734814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:48:53.800090 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1765) Feb 13 19:48:53.894552 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:48:53.895382 systemd[1]: Reloading finished in 737 ms. Feb 13 19:48:53.928448 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:48:53.932620 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:48:53.937226 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:48:54.029973 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:48:54.042718 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:48:54.046180 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:48:54.050266 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:48:54.056669 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:48:54.062129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:48:54.064401 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:48:54.068603 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:48:54.077754 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:48:54.085544 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:48:54.092309 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:48:54.097560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:54.118102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:48:54.124621 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:48:54.126664 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:48:54.127034 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:48:54.143727 systemd[1]: Finished ensure-sysext.service. Feb 13 19:48:54.166891 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:48:54.231346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:48:54.231770 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:48:54.246191 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:48:54.264435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:48:54.265075 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:48:54.268223 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:48:54.268539 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 19:48:54.273851 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:48:54.274053 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:48:54.277311 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:48:54.278232 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:48:54.295485 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:48:54.308575 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:48:54.361176 augenrules[1954]: No rules Feb 13 19:48:54.366911 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:48:54.371249 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:48:54.375318 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:48:54.378927 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:48:54.401354 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:48:54.413379 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:48:54.434435 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:48:54.441219 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:48:54.444871 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:48:54.454687 lvm[1962]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:48:54.490226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:54.502299 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:48:54.514747 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:48:54.520320 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:48:54.536405 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:48:54.547093 lvm[1977]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:48:54.582887 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:48:54.632623 systemd-resolved[1924]: Positive Trust Anchors:
Feb 13 19:48:54.632660 systemd-resolved[1924]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:48:54.632724 systemd-resolved[1924]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:48:54.640837 systemd-resolved[1924]: Defaulting to hostname 'linux'. Feb 13 19:48:54.641691 systemd-networkd[1922]: lo: Link UP Feb 13 19:48:54.641706 systemd-networkd[1922]: lo: Gained carrier Feb 13 19:48:54.643580 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:48:54.645911 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:48:54.648227 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:48:54.650560 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:48:54.652939 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:48:54.655555 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:48:54.657818 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:48:54.660297 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:48:54.662735 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:48:54.662790 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:48:54.664582 systemd-networkd[1922]: Enumeration completed Feb 13 19:48:54.664711 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:48:54.667136 systemd-networkd[1922]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:48:54.667158 systemd-networkd[1922]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:48:54.667826 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:48:54.672636 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:48:54.681302 systemd-networkd[1922]: eth0: Link UP Feb 13 19:48:54.681790 systemd-networkd[1922]: eth0: Gained carrier Feb 13 19:48:54.681828 systemd-networkd[1922]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:48:54.686341 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:48:54.689539 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:48:54.692162 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:48:54.694649 systemd[1]: Reached target network.target - Network. Feb 13 19:48:54.696558 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:48:54.698595 systemd[1]: Reached target basic.target - Basic System.
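The "Positive Trust Anchors" entry above carries the DNSSEC root trust anchor in DS-record form: owner ".", key tag 20326 (the 2017 root KSK), algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), then the SHA-256 digest of the root DNSKEY. A tiny parser for that presentation format, with field meanings per RFC 4034:

    # Parse the DS-record trust anchor systemd-resolved logged above.
    def parse_ds(rr: str) -> dict:
        owner, _cls, _rtype, key_tag, alg, digest_type, digest = rr.split(maxsplit=6)
        return {"owner": owner,
                "key_tag": int(key_tag),          # 20326 = root KSK-2017
                "algorithm": int(alg),            # 8 = RSA/SHA-256
                "digest_type": int(digest_type),  # 2 = SHA-256
                "digest": digest.strip().lower()}

    # parse_ds(". IN DS 20326 8 2 e06d44b8...")["key_tag"] == 20326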
Feb 13 19:48:54.700466 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:48:54.700518 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:48:54.702594 systemd-networkd[1922]: eth0: DHCPv4 address 172.31.26.215/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:48:54.708316 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:48:54.717447 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:48:54.722818 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:48:54.728607 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:48:54.746324 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:48:54.748375 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:48:54.751528 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:48:54.764518 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:48:54.769953 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:48:54.795108 jq[1985]: false Feb 13 19:48:54.779392 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:48:54.785414 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:48:54.792390 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:48:54.803938 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:48:54.811631 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:48:54.814917 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:48:54.815806 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:48:54.819415 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:48:54.827317 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:48:54.835851 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:48:54.837209 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Feb 13 19:48:54.898164 extend-filesystems[1986]: Found loop4 Feb 13 19:48:54.898164 extend-filesystems[1986]: Found loop5 Feb 13 19:48:54.898164 extend-filesystems[1986]: Found loop6 Feb 13 19:48:54.898164 extend-filesystems[1986]: Found loop7 Feb 13 19:48:54.898164 extend-filesystems[1986]: Found nvme0n1 Feb 13 19:48:54.898164 extend-filesystems[1986]: Found nvme0n1p1 Feb 13 19:48:54.898164 extend-filesystems[1986]: Found nvme0n1p2 Feb 13 19:48:54.898164 extend-filesystems[1986]: Found nvme0n1p3 Feb 13 19:48:54.898164 extend-filesystems[1986]: Found usr Feb 13 19:48:54.898164 extend-filesystems[1986]: Found nvme0n1p4 Feb 13 19:48:54.898164 extend-filesystems[1986]: Found nvme0n1p6 Feb 13 19:48:54.898164 extend-filesystems[1986]: Found nvme0n1p7 Feb 13 19:48:54.898164 extend-filesystems[1986]: Found nvme0n1p9 Feb 13 19:48:54.898164 extend-filesystems[1986]: Checking size of /dev/nvme0n1p9 Feb 13 19:48:54.968386 ntpd[1988]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting Feb 13 19:48:54.993914 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:48:55.010908 update_engine[1997]: I20250213 19:48:54.916954 1997 main.cc:92] Flatcar Update Engine starting Feb 13 19:48:55.011377 ntpd[1988]: 13 Feb 19:48:54 ntpd[1988]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting Feb 13 19:48:55.011377 ntpd[1988]: 13 Feb 19:48:54 ntpd[1988]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:48:55.011377 ntpd[1988]: 13 Feb 19:48:54 ntpd[1988]: ---------------------------------------------------- Feb 13 19:48:55.011377 ntpd[1988]: 13 Feb 19:48:54 ntpd[1988]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:48:55.011377 ntpd[1988]: 13 Feb 19:48:54 ntpd[1988]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:48:55.011377 ntpd[1988]: 13 Feb 19:48:54 ntpd[1988]: corporation. Support and training for ntp-4 are Feb 13 19:48:55.011377 ntpd[1988]: 13 Feb 19:48:54 ntpd[1988]: available at https://www.nwtime.org/support Feb 13 19:48:55.011377 ntpd[1988]: 13 Feb 19:48:54 ntpd[1988]: ---------------------------------------------------- Feb 13 19:48:55.011377 ntpd[1988]: 13 Feb 19:48:54 ntpd[1988]: proto: precision = 0.108 usec (-23) Feb 13 19:48:55.011377 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: basedate set to 2025-02-01 Feb 13 19:48:55.011377 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: gps base set to 2025-02-02 (week 2352) Feb 13 19:48:54.968437 ntpd[1988]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:48:54.996186 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
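The block of extend-filesystems "Found ..." lines above is an inventory of the visible block devices taken before deciding which partition to grow. The same view can be read from sysfs, as in this illustrative helper (the function name and output are assumptions):

    # Sysfs view of the inventory behind the extend-filesystems "Found ..." lines:
    # each partition of a disk appears as a subdirectory named after itself.
    from pathlib import Path

    def partitions(disk: str = "nvme0n1"):
        base = Path("/sys/block") / disk
        return sorted(p.name for p in base.iterdir()
                      if p.is_dir() and p.name.startswith(disk + "p"))

    # e.g. ["nvme0n1p1", "nvme0n1p2", "nvme0n1p3", ...]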
Feb 13 19:48:55.042487 extend-filesystems[1986]: Resized partition /dev/nvme0n1p9 Feb 13 19:48:55.062047 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:48:55.062627 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:48:55.062627 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:48:55.062627 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:48:55.062627 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: Listen normally on 3 eth0 172.31.26.215:123 Feb 13 19:48:55.062627 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: Listen normally on 4 lo [::1]:123 Feb 13 19:48:55.062627 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: bind(21) AF_INET6 fe80::49c:60ff:fe31:7cc3%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:48:55.062627 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: unable to create socket on eth0 (5) for fe80::49c:60ff:fe31:7cc3%2#123 Feb 13 19:48:55.062627 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: failed to init interface for address fe80::49c:60ff:fe31:7cc3%2 Feb 13 19:48:55.062627 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: Listening on routing socket on fd #21 for interface updates Feb 13 19:48:54.968459 ntpd[1988]: ---------------------------------------------------- Feb 13 19:48:55.034620 (ntainerd)[2015]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:48:55.094386 extend-filesystems[2024]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:48:55.099090 jq[1998]: true Feb 13 19:48:55.099535 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:55.099535 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:54.968479 ntpd[1988]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:48:55.034868 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:48:55.099868 update_engine[1997]: I20250213 19:48:55.093030 1997 update_check_scheduler.cc:74] Next update check in 9m41s Feb 13 19:48:54.968498 ntpd[1988]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:48:55.035301 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:48:54.968517 ntpd[1988]: corporation. Support and training for ntp-4 are Feb 13 19:48:55.038391 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:48:54.968537 ntpd[1988]: available at https://www.nwtime.org/support Feb 13 19:48:55.055657 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:48:54.968562 ntpd[1988]: ---------------------------------------------------- Feb 13 19:48:55.055708 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:48:55.136202 tar[2013]: linux-arm64/LICENSE Feb 13 19:48:55.136202 tar[2013]: linux-arm64/helm Feb 13 19:48:54.979532 ntpd[1988]: proto: precision = 0.108 usec (-23) Feb 13 19:48:55.064606 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 13 19:48:55.138056 jq[2014]: true Feb 13 19:48:55.001435 ntpd[1988]: basedate set to 2025-02-01 Feb 13 19:48:55.064647 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:48:55.001471 ntpd[1988]: gps base set to 2025-02-02 (week 2352) Feb 13 19:48:55.098865 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:48:55.022436 ntpd[1988]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:48:55.121421 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:48:55.022512 ntpd[1988]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:48:55.126424 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:48:55.028340 ntpd[1988]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:48:55.028413 ntpd[1988]: Listen normally on 3 eth0 172.31.26.215:123 Feb 13 19:48:55.028483 ntpd[1988]: Listen normally on 4 lo [::1]:123 Feb 13 19:48:55.028566 ntpd[1988]: bind(21) AF_INET6 fe80::49c:60ff:fe31:7cc3%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:48:55.028607 ntpd[1988]: unable to create socket on eth0 (5) for fe80::49c:60ff:fe31:7cc3%2#123 Feb 13 19:48:55.028635 ntpd[1988]: failed to init interface for address fe80::49c:60ff:fe31:7cc3%2 Feb 13 19:48:55.028696 ntpd[1988]: Listening on routing socket on fd #21 for interface updates Feb 13 19:48:55.029899 dbus-daemon[1984]: [system] SELinux support is enabled Feb 13 19:48:55.075560 dbus-daemon[1984]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1922 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:48:55.076155 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:55.076206 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:55.091252 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:48:55.172594 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:48:55.193744 extend-filesystems[2024]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:48:55.193744 extend-filesystems[2024]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:48:55.193744 extend-filesystems[2024]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:48:55.204705 extend-filesystems[1986]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:48:55.201703 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:48:55.202110 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:48:55.220133 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:48:55.226225 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:48:55.312527 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1765) Feb 13 19:48:55.335379 bash[2063]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:48:55.339176 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:48:55.367336 systemd[1]: Starting sshkeys.service... Feb 13 19:48:55.404963 systemd-logind[1994]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:48:55.407191 systemd-logind[1994]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:48:55.407651 systemd-logind[1994]: New seat seat0. 
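For scale, the resize2fs transition logged above goes from 553472 to 1489915 blocks of 4 KiB each, i.e. the root filesystem is grown online from roughly 2.11 GiB to roughly 5.68 GiB:

    # The arithmetic behind "resizing filesystem from 553472 to 1489915 blocks".
    BLOCK = 4096                       # "(4k) blocks", per the kernel line
    before, after = 553_472, 1_489_915
    print(before * BLOCK / 2**30)      # ~2.11 GiB before the online resize
    print(after * BLOCK / 2**30)       # ~5.68 GiB after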
Feb 13 19:48:55.412124 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:48:55.423835 coreos-metadata[1983]: Feb 13 19:48:55.423 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:48:55.425884 coreos-metadata[1983]: Feb 13 19:48:55.425 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:48:55.431403 coreos-metadata[1983]: Feb 13 19:48:55.431 INFO Fetch successful Feb 13 19:48:55.431403 coreos-metadata[1983]: Feb 13 19:48:55.431 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:48:55.434090 coreos-metadata[1983]: Feb 13 19:48:55.433 INFO Fetch successful Feb 13 19:48:55.434090 coreos-metadata[1983]: Feb 13 19:48:55.433 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:48:55.434483 coreos-metadata[1983]: Feb 13 19:48:55.434 INFO Fetch successful Feb 13 19:48:55.434483 coreos-metadata[1983]: Feb 13 19:48:55.434 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:48:55.451811 coreos-metadata[1983]: Feb 13 19:48:55.445 INFO Fetch successful Feb 13 19:48:55.451811 coreos-metadata[1983]: Feb 13 19:48:55.445 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:48:55.451811 coreos-metadata[1983]: Feb 13 19:48:55.445 INFO Fetch failed with 404: resource not found Feb 13 19:48:55.451811 coreos-metadata[1983]: Feb 13 19:48:55.445 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:48:55.451811 coreos-metadata[1983]: Feb 13 19:48:55.445 INFO Fetch successful Feb 13 19:48:55.451811 coreos-metadata[1983]: Feb 13 19:48:55.445 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:48:55.451811 coreos-metadata[1983]: Feb 13 19:48:55.451 INFO Fetch successful Feb 13 19:48:55.451811 coreos-metadata[1983]: Feb 13 19:48:55.451 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:48:55.451811 coreos-metadata[1983]: Feb 13 19:48:55.451 INFO Fetch successful Feb 13 19:48:55.451811 coreos-metadata[1983]: Feb 13 19:48:55.451 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:48:55.451811 coreos-metadata[1983]: Feb 13 19:48:55.451 INFO Fetch successful Feb 13 19:48:55.451811 coreos-metadata[1983]: Feb 13 19:48:55.451 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:48:55.447038 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:48:55.459397 coreos-metadata[1983]: Feb 13 19:48:55.459 INFO Fetch successful Feb 13 19:48:55.546222 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:48:55.735351 containerd[2015]: time="2025-02-13T19:48:55.735207815Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:48:55.794419 locksmithd[2036]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:48:55.801097 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:48:55.804209 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Feb 13 19:48:55.930918 coreos-metadata[2095]: Feb 13 19:48:55.930 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:48:55.937246 coreos-metadata[2095]: Feb 13 19:48:55.936 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:48:55.939371 coreos-metadata[2095]: Feb 13 19:48:55.939 INFO Fetch successful Feb 13 19:48:55.939371 coreos-metadata[2095]: Feb 13 19:48:55.939 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:48:55.942478 coreos-metadata[2095]: Feb 13 19:48:55.942 INFO Fetch successful Feb 13 19:48:55.949486 unknown[2095]: wrote ssh authorized keys file for user: core Feb 13 19:48:55.973605 ntpd[1988]: bind(24) AF_INET6 fe80::49c:60ff:fe31:7cc3%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:48:55.976045 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: bind(24) AF_INET6 fe80::49c:60ff:fe31:7cc3%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:48:55.976045 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: unable to create socket on eth0 (6) for fe80::49c:60ff:fe31:7cc3%2#123 Feb 13 19:48:55.976045 ntpd[1988]: 13 Feb 19:48:55 ntpd[1988]: failed to init interface for address fe80::49c:60ff:fe31:7cc3%2 Feb 13 19:48:55.973674 ntpd[1988]: unable to create socket on eth0 (6) for fe80::49c:60ff:fe31:7cc3%2#123 Feb 13 19:48:55.973704 ntpd[1988]: failed to init interface for address fe80::49c:60ff:fe31:7cc3%2 Feb 13 19:48:55.978452 containerd[2015]: time="2025-02-13T19:48:55.976454568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:55.989240 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:48:55.998960 containerd[2015]: time="2025-02-13T19:48:55.992120172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:55.998960 containerd[2015]: time="2025-02-13T19:48:55.992189076Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:48:55.998960 containerd[2015]: time="2025-02-13T19:48:55.992236020Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:48:55.998960 containerd[2015]: time="2025-02-13T19:48:55.992536824Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:48:55.998960 containerd[2015]: time="2025-02-13T19:48:55.992571696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:55.998960 containerd[2015]: time="2025-02-13T19:48:55.992698152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:55.998960 containerd[2015]: time="2025-02-13T19:48:55.992728032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:55.993375 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
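The recurring ntpd failure above ("bind ... fe80::...%2#123 ... Cannot assign requested address") involves an IPv6 link-local address: such addresses are only usable together with a scope id (the "%2" is interface index 2), and a bind also fails with EADDRNOTAVAIL while the address is still tentative during duplicate-address detection, which is common this early in boot. A sketch of how a scope id is supplied to bind() from userspace:

    # IPv6 link-local bind sketch; ntpd's failure above is EADDRNOTAVAIL, seen when
    # the scope id is missing or the address is still tentative.
    import socket

    def bind_link_local(addr: str, ifindex: int, port: int) -> socket.socket:
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        # sockaddr_in6 tuple: (address, port, flowinfo, scope_id)
        s.bind((addr, port, 0, ifindex))
        return s

    # bind_link_local("fe80::49c:60ff:fe31:7cc3", 2, 123)  # port 123 needs privileges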
Feb 13 19:48:55.992786 dbus-daemon[1984]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2034 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:48:56.001334 containerd[2015]: time="2025-02-13T19:48:55.993025644Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:56.001540 containerd[2015]: time="2025-02-13T19:48:56.001496540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:56.002093 containerd[2015]: time="2025-02-13T19:48:56.001722344Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:56.002093 containerd[2015]: time="2025-02-13T19:48:56.001758608Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:56.002093 containerd[2015]: time="2025-02-13T19:48:56.001981064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:56.002757 containerd[2015]: time="2025-02-13T19:48:56.002721705Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:56.003650 containerd[2015]: time="2025-02-13T19:48:56.003078129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:56.003650 containerd[2015]: time="2025-02-13T19:48:56.003117333Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:48:56.003650 containerd[2015]: time="2025-02-13T19:48:56.003300969Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:48:56.003650 containerd[2015]: time="2025-02-13T19:48:56.003411465Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:48:56.009395 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.015878889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.015970101Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.016004361Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.016039005Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.016100349Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.016351665Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.016743033Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.016923177Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.016955037Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.016989189Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.017033277Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.017085885Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.017125377Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:48:56.018688 containerd[2015]: time="2025-02-13T19:48:56.017156529Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017188437Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017218245Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017251869Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017281173Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017340861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017375061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017404461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017434989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017464941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017496033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017525145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017555337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017598561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.019427 containerd[2015]: time="2025-02-13T19:48:56.017634585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.020006 containerd[2015]: time="2025-02-13T19:48:56.017662941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.020006 containerd[2015]: time="2025-02-13T19:48:56.017695689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.020006 containerd[2015]: time="2025-02-13T19:48:56.017726565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.020006 containerd[2015]: time="2025-02-13T19:48:56.017760645Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:48:56.020006 containerd[2015]: time="2025-02-13T19:48:56.017803941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.020006 containerd[2015]: time="2025-02-13T19:48:56.017832465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.020006 containerd[2015]: time="2025-02-13T19:48:56.017859345Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:48:56.020006 containerd[2015]: time="2025-02-13T19:48:56.017960301Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:48:56.020006 containerd[2015]: time="2025-02-13T19:48:56.017995821Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:48:56.020006 containerd[2015]: time="2025-02-13T19:48:56.018022329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:48:56.025322 containerd[2015]: time="2025-02-13T19:48:56.018055521Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:48:56.025322 containerd[2015]: time="2025-02-13T19:48:56.023467461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:48:56.025322 containerd[2015]: time="2025-02-13T19:48:56.023516157Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:48:56.025322 containerd[2015]: time="2025-02-13T19:48:56.023568105Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:48:56.025322 containerd[2015]: time="2025-02-13T19:48:56.023597085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:48:56.027084 containerd[2015]: time="2025-02-13T19:48:56.026221005Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:48:56.027784 containerd[2015]: time="2025-02-13T19:48:56.027450657Z" level=info msg="Connect containerd service" Feb 13 19:48:56.027892 containerd[2015]: time="2025-02-13T19:48:56.027851613Z" level=info msg="using legacy CRI server" Feb 13 19:48:56.027948 containerd[2015]: time="2025-02-13T19:48:56.027887577Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:48:56.029279 containerd[2015]: time="2025-02-13T19:48:56.029210997Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:48:56.030469 update-ssh-keys[2175]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:48:56.033130 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
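The coreos-metadata-sshkeys unit above fetched the instance's public key from IMDS and rewrote /home/core/.ssh/authorized_keys (the update-ssh-keys line). A sketch of the write half: an atomic replace plus the permissions sshd insists on. Paths come from the log; the key content is a placeholder:

```python
import os
import tempfile

def write_authorized_keys(home, keys):
    ssh_dir = os.path.join(home, ".ssh")
    os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
    target = os.path.join(ssh_dir, "authorized_keys")
    # Write to a temp file in the same directory, then rename: readers
    # never observe a half-written file (same idea as update-ssh-keys).
    fd, tmp = tempfile.mkstemp(dir=ssh_dir)
    try:
        with os.fdopen(fd, "w") as f:
            f.write("\n".join(keys) + "\n")
        os.chmod(tmp, 0o600)  # sshd refuses group/world-accessible key files
        os.replace(tmp, target)
    except BaseException:
        os.unlink(tmp)
        raise

write_authorized_keys("/home/core", ["ssh-rsa AAAA... core@example"])
```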
Feb 13 19:48:56.044633 containerd[2015]: time="2025-02-13T19:48:56.044563269Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:48:56.046797 containerd[2015]: time="2025-02-13T19:48:56.044887557Z" level=info msg="Start subscribing containerd event" Feb 13 19:48:56.046797 containerd[2015]: time="2025-02-13T19:48:56.044985441Z" level=info msg="Start recovering state" Feb 13 19:48:56.046797 containerd[2015]: time="2025-02-13T19:48:56.045162045Z" level=info msg="Start event monitor" Feb 13 19:48:56.046797 containerd[2015]: time="2025-02-13T19:48:56.045190005Z" level=info msg="Start snapshots syncer" Feb 13 19:48:56.046797 containerd[2015]: time="2025-02-13T19:48:56.045211665Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:48:56.046797 containerd[2015]: time="2025-02-13T19:48:56.045241485Z" level=info msg="Start streaming server" Feb 13 19:48:56.046797 containerd[2015]: time="2025-02-13T19:48:56.045972477Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:48:56.046797 containerd[2015]: time="2025-02-13T19:48:56.046244913Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:48:56.055352 containerd[2015]: time="2025-02-13T19:48:56.048179457Z" level=info msg="containerd successfully booted in 0.316492s" Feb 13 19:48:56.051200 systemd[1]: Finished sshkeys.service. Feb 13 19:48:56.053801 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:48:56.092215 polkitd[2177]: Started polkitd version 121 Feb 13 19:48:56.108785 polkitd[2177]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:48:56.108911 polkitd[2177]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:48:56.113331 polkitd[2177]: Finished loading, compiling and executing 2 rules Feb 13 19:48:56.114211 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:48:56.114502 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:48:56.119159 polkitd[2177]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:48:56.150823 systemd-hostnamed[2034]: Hostname set to (transient) Feb 13 19:48:56.150999 systemd-resolved[1924]: System hostname changed to 'ip-172-31-26-215'. Feb 13 19:48:56.459334 systemd-networkd[1922]: eth0: Gained IPv6LL Feb 13 19:48:56.467212 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:48:56.473763 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:48:56.489688 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:48:56.503439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:48:56.511617 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:48:56.614974 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:48:56.658823 amazon-ssm-agent[2190]: Initializing new seelog logger Feb 13 19:48:56.660083 amazon-ssm-agent[2190]: New Seelog Logger Creation Complete Feb 13 19:48:56.660194 amazon-ssm-agent[2190]: 2025/02/13 19:48:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:56.660194 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
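containerd's "no network config found in /etc/cni/net.d" error at the top of this stretch is expected on first boot: the CRI plugin keeps retrying until a CNI conflist appears, which cluster tooling normally drops in later. For reference, a sketch that writes a minimal bridge conflist; the plugin types and fields follow the CNI spec, while the network name and subnet are assumptions:

```python
import json
import os

# Minimal CNI conflist: a bridge network with host-local IPAM plus portmap.
# "example-net" and 10.88.0.0/16 are placeholders, not this node's config.
conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

os.makedirs("/etc/cni/net.d", exist_ok=True)
with open("/etc/cni/net.d/10-example.conflist", "w") as f:
    json.dump(conflist, f, indent=2)
```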
Feb 13 19:48:56.660897 amazon-ssm-agent[2190]: 2025/02/13 19:48:56 processing appconfig overrides Feb 13 19:48:56.663832 amazon-ssm-agent[2190]: 2025/02/13 19:48:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:56.663832 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:56.663981 amazon-ssm-agent[2190]: 2025/02/13 19:48:56 processing appconfig overrides Feb 13 19:48:56.665107 amazon-ssm-agent[2190]: 2025/02/13 19:48:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:56.665107 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:56.665107 amazon-ssm-agent[2190]: 2025/02/13 19:48:56 processing appconfig overrides Feb 13 19:48:56.665107 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO Proxy environment variables: Feb 13 19:48:56.670401 amazon-ssm-agent[2190]: 2025/02/13 19:48:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:56.673087 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:56.673087 amazon-ssm-agent[2190]: 2025/02/13 19:48:56 processing appconfig overrides Feb 13 19:48:56.730192 tar[2013]: linux-arm64/README.md Feb 13 19:48:56.764747 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:48:56.771456 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO no_proxy: Feb 13 19:48:56.868096 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO https_proxy: Feb 13 19:48:56.967578 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO http_proxy: Feb 13 19:48:57.065745 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:48:57.168013 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:48:57.265913 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO Agent will take identity from EC2 Feb 13 19:48:57.365473 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:48:57.464692 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:48:57.563988 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:48:57.663207 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:48:57.765946 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:48:57.867082 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:48:57.966081 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:48:58.015277 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO [Registrar] Starting registrar module Feb 13 19:48:58.017164 amazon-ssm-agent[2190]: 2025-02-13 19:48:56 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:48:58.017283 amazon-ssm-agent[2190]: 2025-02-13 19:48:57 INFO [EC2Identity] EC2 registration was successful. 
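amazon-ssm-agent logs one "Applying config override" pass per subsystem: a partial JSON file at /etc/amazon/ssm/amazon-ssm-agent.json is layered over built-in defaults. A generic sketch of that kind of layering; the default keys shown are illustrative, not the agent's actual schema:

```python
import json

# Hypothetical defaults standing in for the agent's built-in configuration.
DEFAULTS = {
    "Agent": {"Region": "", "OrchestrationRootDir": "orchestration"},
    "Mds": {"CommandRetryLimit": 15},
}

def merge(base, override):
    # Recursively overlay `override` onto `base`: dicts merge, scalars replace.
    out = dict(base)
    for k, v in override.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = merge(out[k], v)
        else:
            out[k] = v
    return out

with open("/etc/amazon/ssm/amazon-ssm-agent.json") as f:
    config = merge(DEFAULTS, json.load(f))
print(json.dumps(config, indent=2))
```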
Feb 13 19:48:58.017283 amazon-ssm-agent[2190]: 2025-02-13 19:48:57 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:48:58.017283 amazon-ssm-agent[2190]: 2025-02-13 19:48:57 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:48:58.017407 amazon-ssm-agent[2190]: 2025-02-13 19:48:58 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:48:58.066357 amazon-ssm-agent[2190]: 2025-02-13 19:48:58 INFO [CredentialRefresher] Next credential rotation will be in 30.774957228683334 minutes Feb 13 19:48:58.274289 sshd_keygen[2018]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:48:58.315431 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:48:58.326711 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:48:58.338592 systemd[1]: Started sshd@0-172.31.26.215:22-139.178.89.65:36552.service - OpenSSH per-connection server daemon (139.178.89.65:36552). Feb 13 19:48:58.352992 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:48:58.353834 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:48:58.365567 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:48:58.404657 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:48:58.414719 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:48:58.424939 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:48:58.429161 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:48:58.544426 sshd[2221]: Accepted publickey for core from 139.178.89.65 port 36552 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:48:58.547130 sshd[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:58.567471 systemd-logind[1994]: New session 1 of user core. Feb 13 19:48:58.568596 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:48:58.579986 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:48:58.618174 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:48:58.629630 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:48:58.651052 (systemd)[2232]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:48:58.879033 systemd[2232]: Queued start job for default target default.target. Feb 13 19:48:58.887829 systemd[2232]: Created slice app.slice - User Application Slice. Feb 13 19:48:58.887894 systemd[2232]: Reached target paths.target - Paths. Feb 13 19:48:58.887928 systemd[2232]: Reached target timers.target - Timers. Feb 13 19:48:58.892328 systemd[2232]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:48:58.920231 systemd[2232]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:48:58.920365 systemd[2232]: Reached target sockets.target - Sockets. Feb 13 19:48:58.920398 systemd[2232]: Reached target basic.target - Basic System. Feb 13 19:48:58.920483 systemd[2232]: Reached target default.target - Main User Target. Feb 13 19:48:58.920546 systemd[2232]: Startup finished in 255ms. Feb 13 19:48:58.920731 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:48:58.932393 systemd[1]: Started session-1.scope - Session 1 of User core. 
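sshd identifies the client key above as "RSA SHA256:H27J…", the modern OpenSSH fingerprint format: unpadded base64 of the SHA-256 digest of the raw key blob. A stdlib sketch that reproduces that format from a public key line (the key material below is a placeholder):

```python
import base64
import hashlib

def ssh_fingerprint(pubkey_line):
    # A public key line looks like: "ssh-rsa AAAAB3... comment".
    keytype, b64blob = pubkey_line.split()[:2]
    blob = base64.b64decode(b64blob)
    digest = hashlib.sha256(blob).digest()
    # OpenSSH prints unpadded base64, prefixed with "SHA256:".
    return keytype, "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

line = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7 example"  # placeholder key
print(ssh_fingerprint(line))
```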
Feb 13 19:48:58.969286 ntpd[1988]: Listen normally on 7 eth0 [fe80::49c:60ff:fe31:7cc3%2]:123 Feb 13 19:48:58.978800 ntpd[1988]: 13 Feb 19:48:58 ntpd[1988]: Listen normally on 7 eth0 [fe80::49c:60ff:fe31:7cc3%2]:123 Feb 13 19:48:59.006882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:48:59.010428 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:48:59.017206 systemd[1]: Startup finished in 1.156s (kernel) + 8.777s (initrd) + 9.487s (userspace) = 19.421s. Feb 13 19:48:59.022794 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:48:59.087092 amazon-ssm-agent[2190]: 2025-02-13 19:48:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:48:59.113267 systemd[1]: Started sshd@1-172.31.26.215:22-139.178.89.65:51144.service - OpenSSH per-connection server daemon (139.178.89.65:51144). Feb 13 19:48:59.182857 amazon-ssm-agent[2190]: 2025-02-13 19:48:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2254) started Feb 13 19:48:59.283255 amazon-ssm-agent[2190]: 2025-02-13 19:48:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:48:59.320507 sshd[2255]: Accepted publickey for core from 139.178.89.65 port 51144 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:48:59.327617 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:59.343440 systemd-logind[1994]: New session 2 of user core. Feb 13 19:48:59.349408 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:48:59.481284 sshd[2255]: pam_unix(sshd:session): session closed for user core Feb 13 19:48:59.488499 systemd-logind[1994]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:48:59.489926 systemd[1]: sshd@1-172.31.26.215:22-139.178.89.65:51144.service: Deactivated successfully. Feb 13 19:48:59.493998 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:48:59.496040 systemd-logind[1994]: Removed session 2. Feb 13 19:48:59.516735 systemd[1]: Started sshd@2-172.31.26.215:22-139.178.89.65:51152.service - OpenSSH per-connection server daemon (139.178.89.65:51152). Feb 13 19:48:59.691433 sshd[2274]: Accepted publickey for core from 139.178.89.65 port 51152 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:48:59.695638 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:59.706462 systemd-logind[1994]: New session 3 of user core. Feb 13 19:48:59.713367 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:48:59.832774 sshd[2274]: pam_unix(sshd:session): session closed for user core Feb 13 19:48:59.838778 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:48:59.841506 systemd-logind[1994]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:48:59.842973 systemd[1]: sshd@2-172.31.26.215:22-139.178.89.65:51152.service: Deactivated successfully. Feb 13 19:48:59.850077 systemd-logind[1994]: Removed session 3. Feb 13 19:48:59.869599 systemd[1]: Started sshd@3-172.31.26.215:22-139.178.89.65:51162.service - OpenSSH per-connection server daemon (139.178.89.65:51162). 
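The ntpd listen at the top of this stretch only works because systemd-networkd reported "eth0: Gained IPv6LL" earlier. One way for userspace to poll for that readiness is /proc/net/if_inet6, which lists one address per line as hex address, interface index, prefix length, scope, flags, and device name; a sketch:

```python
import time

LINK_SCOPE = 0x20  # link-local scope value in /proc/net/if_inet6

def link_local_ready(ifname):
    with open("/proc/net/if_inet6") as f:
        for line in f:
            # fields: address ifindex prefixlen scope flags ifname
            _addr, _idx, _plen, scope, _flags, dev = line.split()
            if dev == ifname and int(scope, 16) == LINK_SCOPE:
                return True
    return False

while not link_local_ready("eth0"):
    time.sleep(0.5)
print("eth0 has a link-local address; safe to bind")
```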
Feb 13 19:49:00.047967 sshd[2281]: Accepted publickey for core from 139.178.89.65 port 51162 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:00.050493 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:00.062594 systemd-logind[1994]: New session 4 of user core. Feb 13 19:49:00.076377 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:49:00.207275 sshd[2281]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:00.213630 systemd[1]: sshd@3-172.31.26.215:22-139.178.89.65:51162.service: Deactivated successfully. Feb 13 19:49:00.217267 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:49:00.219541 systemd-logind[1994]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:49:00.223584 systemd-logind[1994]: Removed session 4. Feb 13 19:49:00.247438 systemd[1]: Started sshd@4-172.31.26.215:22-139.178.89.65:51164.service - OpenSSH per-connection server daemon (139.178.89.65:51164). Feb 13 19:49:00.340741 kubelet[2246]: E0213 19:49:00.340641 2246 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:00.345191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:00.345561 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:49:00.346404 systemd[1]: kubelet.service: Consumed 1.305s CPU time. Feb 13 19:49:00.418681 sshd[2288]: Accepted publickey for core from 139.178.89.65 port 51164 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:00.421580 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:00.429104 systemd-logind[1994]: New session 5 of user core. Feb 13 19:49:00.440350 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:49:00.557611 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:49:00.558288 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:00.578585 sudo[2294]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:00.601775 sshd[2288]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:00.607997 systemd[1]: sshd@4-172.31.26.215:22-139.178.89.65:51164.service: Deactivated successfully. Feb 13 19:49:00.611891 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:49:00.613449 systemd-logind[1994]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:49:00.616053 systemd-logind[1994]: Removed session 5. Feb 13 19:49:00.647537 systemd[1]: Started sshd@5-172.31.26.215:22-139.178.89.65:51176.service - OpenSSH per-connection server daemon (139.178.89.65:51176). Feb 13 19:49:00.811150 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 51176 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:00.813481 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:00.820910 systemd-logind[1994]: New session 6 of user core. Feb 13 19:49:00.832348 systemd[1]: Started session-6.scope - Session 6 of User core. 
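The kubelet exit above is deterministic: /var/lib/kubelet/config.yaml does not exist yet (on a node like this it is normally written during kubeadm init/join), so every systemd-driven restart will fail the same way until that file appears. A tiny preflight sketch of the same check:

```python
import os
import sys

CONFIG = "/var/lib/kubelet/config.yaml"

if not os.path.isfile(CONFIG):
    # Mirrors the kubelet error in the log: the file simply isn't there yet.
    sys.exit(f"kubelet config missing: {CONFIG}; run kubeadm init/join first")
print("config present, kubelet can start")
```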
Feb 13 19:49:00.935911 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:49:00.936579 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:00.942549 sudo[2303]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:00.952746 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 19:49:00.953489 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:00.976604 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 19:49:00.987791 auditctl[2306]: No rules Feb 13 19:49:00.988604 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:49:00.989043 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 19:49:00.999903 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:49:01.048738 augenrules[2324]: No rules Feb 13 19:49:01.052226 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:49:01.054375 sudo[2302]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:01.077575 sshd[2299]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:01.083165 systemd[1]: sshd@5-172.31.26.215:22-139.178.89.65:51176.service: Deactivated successfully. Feb 13 19:49:01.086270 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:49:01.090777 systemd-logind[1994]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:49:01.092589 systemd-logind[1994]: Removed session 6. Feb 13 19:49:01.125861 systemd[1]: Started sshd@6-172.31.26.215:22-139.178.89.65:51190.service - OpenSSH per-connection server daemon (139.178.89.65:51190). Feb 13 19:49:01.291137 sshd[2332]: Accepted publickey for core from 139.178.89.65 port 51190 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:49:01.294247 sshd[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:49:01.303161 systemd-logind[1994]: New session 7 of user core. Feb 13 19:49:01.309314 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:49:01.413475 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:49:01.414686 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:49:01.867554 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:49:01.880646 (dockerd)[2350]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:49:01.816622 systemd-resolved[1924]: Clock change detected. Flushing caches. Feb 13 19:49:01.829602 systemd-journald[1564]: Time jumped backwards, rotating. Feb 13 19:49:02.087667 dockerd[2350]: time="2025-02-13T19:49:02.087314113Z" level=info msg="Starting up" Feb 13 19:49:02.305821 dockerd[2350]: time="2025-02-13T19:49:02.305757938Z" level=info msg="Loading containers: start." Feb 13 19:49:02.459453 kernel: Initializing XFRM netlink socket Feb 13 19:49:02.491429 (udev-worker)[2376]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:49:02.579564 systemd-networkd[1922]: docker0: Link UP Feb 13 19:49:02.606738 dockerd[2350]: time="2025-02-13T19:49:02.606670275Z" level=info msg="Loading containers: done." 
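Note that the timestamps above run backwards (19:49:01.88, then 19:49:01.81): NTP stepped the wall clock during docker startup, which systemd-resolved ("Clock change detected") and systemd-journald ("Time jumped backwards, rotating") both report. Daemons detect such steps by comparing CLOCK_REALTIME against CLOCK_MONOTONIC, which never jumps; a sketch:

```python
import time

def watch_for_clock_step(threshold=0.5, interval=1.0):
    # The offset (realtime - monotonic) is constant unless the wall clock
    # is stepped; a change beyond `threshold` seconds means a jump.
    offset = time.time() - time.monotonic()
    while True:
        time.sleep(interval)
        new_offset = time.time() - time.monotonic()
        delta = new_offset - offset
        if abs(delta) > threshold:
            direction = "forwards" if delta > 0 else "backwards"
            print(f"clock stepped {direction} by {abs(delta):.3f}s")
            offset = new_offset

watch_for_clock_step()
```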
Feb 13 19:49:02.630007 dockerd[2350]: time="2025-02-13T19:49:02.629927920Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:49:02.630164 dockerd[2350]: time="2025-02-13T19:49:02.630086752Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:49:02.630321 dockerd[2350]: time="2025-02-13T19:49:02.630271096Z" level=info msg="Daemon has completed initialization" Feb 13 19:49:02.695556 dockerd[2350]: time="2025-02-13T19:49:02.694470256Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:49:02.695265 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:49:03.837462 containerd[2015]: time="2025-02-13T19:49:03.836991702Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:49:04.508775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057449979.mount: Deactivated successfully. Feb 13 19:49:05.922858 containerd[2015]: time="2025-02-13T19:49:05.922757372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:05.924355 containerd[2015]: time="2025-02-13T19:49:05.924296624Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218236" Feb 13 19:49:05.926247 containerd[2015]: time="2025-02-13T19:49:05.926177144Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:05.936739 containerd[2015]: time="2025-02-13T19:49:05.936671708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:05.939065 containerd[2015]: time="2025-02-13T19:49:05.938999312Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 2.101945726s" Feb 13 19:49:05.939170 containerd[2015]: time="2025-02-13T19:49:05.939065276Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\"" Feb 13 19:49:05.940030 containerd[2015]: time="2025-02-13T19:49:05.939971228Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:49:07.408480 containerd[2015]: time="2025-02-13T19:49:07.408416131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:07.410587 containerd[2015]: time="2025-02-13T19:49:07.410527651Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528145" Feb 13 19:49:07.411968 containerd[2015]: time="2025-02-13T19:49:07.411897379Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:07.417494 containerd[2015]: time="2025-02-13T19:49:07.417374551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:07.424443 containerd[2015]: time="2025-02-13T19:49:07.423136303Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.483102951s" Feb 13 19:49:07.424443 containerd[2015]: time="2025-02-13T19:49:07.423211087Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\"" Feb 13 19:49:07.426265 containerd[2015]: time="2025-02-13T19:49:07.426203107Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:49:08.708372 containerd[2015]: time="2025-02-13T19:49:08.708295390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:08.710891 containerd[2015]: time="2025-02-13T19:49:08.710440774Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480800" Feb 13 19:49:08.712437 containerd[2015]: time="2025-02-13T19:49:08.711932506Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:08.717839 containerd[2015]: time="2025-02-13T19:49:08.717772762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:08.720430 containerd[2015]: time="2025-02-13T19:49:08.720342682Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.294074043s" Feb 13 19:49:08.720608 containerd[2015]: time="2025-02-13T19:49:08.720576526Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\"" Feb 13 19:49:08.721506 containerd[2015]: time="2025-02-13T19:49:08.721441198Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:49:09.985722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050675431.mount: Deactivated successfully. Feb 13 19:49:10.441142 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:49:10.449914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
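The containerd entries around here ('Pulled image … size "23968941" in 1.483102951s') make pull performance easy to mine straight from the journal. A regex sketch over lines of that shape; the sample below is abbreviated from the controller-manager pull above, and inner quotes the journal escapes as \" are normalized first:

```python
import re

PULL_RE = re.compile(
    r'Pulled image "(?P<image>[^"]+)".*size "(?P<size>\d+)" in (?P<dur>[\d.]+m?s)'
)

sample = (
    'level=info msg="Pulled image \\"registry.k8s.io/kube-controller-manager:v1.32.2\\" '
    'with image id \\"sha256:3c9285acfd2f...\\", size \\"23968941\\" in 1.483102951s"'
)

# Normalize the journal's escaped quotes before matching.
m = PULL_RE.search(sample.replace('\\"', '"'))
if m:
    mib = int(m["size"]) / (1024 * 1024)
    print(f'{m["image"]}: {mib:.1f} MiB in {m["dur"]}')
```

The same pattern covers the apiserver, scheduler, proxy, coredns, pause, and etcd pulls that follow.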
Feb 13 19:49:10.635187 containerd[2015]: time="2025-02-13T19:49:10.635112299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:10.636750 containerd[2015]: time="2025-02-13T19:49:10.636593279Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363382" Feb 13 19:49:10.639445 containerd[2015]: time="2025-02-13T19:49:10.637232387Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:10.655375 containerd[2015]: time="2025-02-13T19:49:10.654020279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:10.656542 containerd[2015]: time="2025-02-13T19:49:10.656488523Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.934834697s" Feb 13 19:49:10.656702 containerd[2015]: time="2025-02-13T19:49:10.656670947Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 19:49:10.657611 containerd[2015]: time="2025-02-13T19:49:10.657547655Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:49:10.818661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:10.839217 (kubelet)[2572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:49:10.911690 kubelet[2572]: E0213 19:49:10.911623 2572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:10.919078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:10.919481 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:49:11.258447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1391643896.mount: Deactivated successfully. 
Feb 13 19:49:12.371664 containerd[2015]: time="2025-02-13T19:49:12.371605428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.375361 containerd[2015]: time="2025-02-13T19:49:12.375300204Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Feb 13 19:49:12.376338 containerd[2015]: time="2025-02-13T19:49:12.376249140Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.382464 containerd[2015]: time="2025-02-13T19:49:12.382379784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.385540 containerd[2015]: time="2025-02-13T19:49:12.385285656Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.727675145s" Feb 13 19:49:12.385540 containerd[2015]: time="2025-02-13T19:49:12.385346700Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Feb 13 19:49:12.386430 containerd[2015]: time="2025-02-13T19:49:12.386131032Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:49:12.888364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4197516532.mount: Deactivated successfully. 
Feb 13 19:49:12.895696 containerd[2015]: time="2025-02-13T19:49:12.895628331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.896850 containerd[2015]: time="2025-02-13T19:49:12.896778387Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 19:49:12.897790 containerd[2015]: time="2025-02-13T19:49:12.897700707Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.903109 containerd[2015]: time="2025-02-13T19:49:12.903011595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.905437 containerd[2015]: time="2025-02-13T19:49:12.904735047Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 518.546367ms" Feb 13 19:49:12.905437 containerd[2015]: time="2025-02-13T19:49:12.904794591Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:49:12.906176 containerd[2015]: time="2025-02-13T19:49:12.906129999Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:49:13.447955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164562259.mount: Deactivated successfully. Feb 13 19:49:15.823249 containerd[2015]: time="2025-02-13T19:49:15.823178729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:15.854452 containerd[2015]: time="2025-02-13T19:49:15.854344157Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429" Feb 13 19:49:15.885981 containerd[2015]: time="2025-02-13T19:49:15.885803033Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:15.893418 containerd[2015]: time="2025-02-13T19:49:15.891809309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:15.896990 containerd[2015]: time="2025-02-13T19:49:15.896932373Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.990743646s" Feb 13 19:49:15.897169 containerd[2015]: time="2025-02-13T19:49:15.897140093Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Feb 13 19:49:21.170309 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 19:49:21.178772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:21.507918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:21.512826 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:49:21.592629 kubelet[2718]: E0213 19:49:21.592567 2718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:21.598698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:21.599048 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:49:24.622103 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:24.643117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:24.702792 systemd[1]: Reloading requested from client PID 2732 ('systemctl') (unit session-7.scope)... Feb 13 19:49:24.702830 systemd[1]: Reloading... Feb 13 19:49:24.939431 zram_generator::config[2776]: No configuration found. Feb 13 19:49:25.168481 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:49:25.340656 systemd[1]: Reloading finished in 637 ms. Feb 13 19:49:25.437275 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:49:25.437572 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:49:25.439478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:25.448986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:25.728254 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:25.743935 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:49:25.816554 kubelet[2837]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:25.816554 kubelet[2837]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:49:25.816554 kubelet[2837]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:25.817078 kubelet[2837]: I0213 19:49:25.816693 2837 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:49:26.035615 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
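This kubelet attempt gets further because /var/lib/kubelet/config.yaml now exists, and the deprecation warnings above point at that same file as the home for flags like --container-runtime-endpoint. For reference, a sketch that writes a minimal KubeletConfiguration of the kind kubeadm drops there; the apiVersion and kind are the upstream ones, while the individual settings are illustrative rather than this node's actual values:

```python
import os

# Minimal KubeletConfiguration; field values here are illustrative defaults.
CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
"""

os.makedirs("/var/lib/kubelet", exist_ok=True)
with open("/var/lib/kubelet/config.yaml", "w") as f:
    f.write(CONFIG)
```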
Feb 13 19:49:27.069760 kubelet[2837]: I0213 19:49:27.069699 2837 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:49:27.070433 kubelet[2837]: I0213 19:49:27.070380 2837 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:49:27.071019 kubelet[2837]: I0213 19:49:27.070995 2837 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:49:27.125421 kubelet[2837]: E0213 19:49:27.125328 2837 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.215:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.215:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:27.129190 kubelet[2837]: I0213 19:49:27.129152 2837 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:49:27.145247 kubelet[2837]: E0213 19:49:27.145178 2837 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:49:27.145247 kubelet[2837]: I0213 19:49:27.145233 2837 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:49:27.151312 kubelet[2837]: I0213 19:49:27.151250 2837 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:49:27.152831 kubelet[2837]: I0213 19:49:27.152753 2837 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:49:27.153127 kubelet[2837]: I0213 19:49:27.152826 2837 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-215","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:49:27.153314 
kubelet[2837]: I0213 19:49:27.153154 2837 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:49:27.153314 kubelet[2837]: I0213 19:49:27.153174 2837 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:49:27.153485 kubelet[2837]: I0213 19:49:27.153455 2837 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:27.160850 kubelet[2837]: I0213 19:49:27.160792 2837 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:49:27.161040 kubelet[2837]: I0213 19:49:27.160998 2837 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:49:27.161104 kubelet[2837]: I0213 19:49:27.161047 2837 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:49:27.161104 kubelet[2837]: I0213 19:49:27.161070 2837 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:49:27.167970 kubelet[2837]: W0213 19:49:27.167548 2837 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-215&limit=500&resourceVersion=0": dial tcp 172.31.26.215:6443: connect: connection refused Feb 13 19:49:27.167970 kubelet[2837]: E0213 19:49:27.167642 2837 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-215&limit=500&resourceVersion=0\": dial tcp 172.31.26.215:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:27.168477 kubelet[2837]: W0213 19:49:27.168361 2837 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.215:6443: connect: connection refused Feb 13 19:49:27.169613 kubelet[2837]: E0213 19:49:27.168505 2837 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.215:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:27.169613 kubelet[2837]: I0213 19:49:27.168646 2837 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:49:27.169613 kubelet[2837]: I0213 19:49:27.169427 2837 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:49:27.169613 kubelet[2837]: W0213 19:49:27.169545 2837 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
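The reflector errors above are the normal bootstrap race on a control-plane node: kubelet starts before the static-pod apiserver it will itself launch, so every dial to 172.31.26.215:6443 is refused and client-go simply retries. A sketch of the same wait-for-port loop with capped exponential backoff:

```python
import socket
import time

def wait_for_port(host, port, timeout=120):
    deadline = time.monotonic() + timeout
    delay = 0.5
    while time.monotonic() < deadline:
        try:
            # A plain TCP connect is enough to see "connection refused" clear.
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(delay)
            delay = min(delay * 2, 10)  # exponential backoff, capped at 10s
    return False

if wait_for_port("172.31.26.215", 6443):
    print("apiserver reachable")
else:
    print("gave up waiting for apiserver")
```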
Feb 13 19:49:27.173180 kubelet[2837]: I0213 19:49:27.173123 2837 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:49:27.173180 kubelet[2837]: I0213 19:49:27.173185 2837 server.go:1287] "Started kubelet" Feb 13 19:49:27.180155 kubelet[2837]: I0213 19:49:27.179283 2837 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:49:27.181183 kubelet[2837]: I0213 19:49:27.181080 2837 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:49:27.181763 kubelet[2837]: I0213 19:49:27.181713 2837 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:49:27.181872 kubelet[2837]: I0213 19:49:27.181143 2837 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:49:27.183481 kubelet[2837]: E0213 19:49:27.183191 2837 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.215:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.215:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-215.1823dc5b9f9d2a4d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-215,UID:ip-172-31-26-215,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-215,},FirstTimestamp:2025-02-13 19:49:27.173155405 +0000 UTC m=+1.423248032,LastTimestamp:2025-02-13 19:49:27.173155405 +0000 UTC m=+1.423248032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-215,}" Feb 13 19:49:27.190254 kubelet[2837]: E0213 19:49:27.189212 2837 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:49:27.190254 kubelet[2837]: I0213 19:49:27.189946 2837 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:49:27.193803 kubelet[2837]: I0213 19:49:27.193731 2837 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:49:27.196476 kubelet[2837]: I0213 19:49:27.194610 2837 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:49:27.196476 kubelet[2837]: E0213 19:49:27.194986 2837 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-26-215\" not found" Feb 13 19:49:27.196476 kubelet[2837]: I0213 19:49:27.195872 2837 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:49:27.196476 kubelet[2837]: I0213 19:49:27.195976 2837 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:49:27.197977 kubelet[2837]: I0213 19:49:27.197940 2837 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:49:27.198283 kubelet[2837]: I0213 19:49:27.198250 2837 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:49:27.199307 kubelet[2837]: W0213 19:49:27.199235 2837 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.215:6443: connect: connection refused Feb 13 19:49:27.199560 kubelet[2837]: E0213 19:49:27.199521 2837 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.215:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:27.199792 kubelet[2837]: E0213 19:49:27.199743 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-215?timeout=10s\": dial tcp 172.31.26.215:6443: connect: connection refused" interval="200ms" Feb 13 19:49:27.202036 kubelet[2837]: I0213 19:49:27.201996 2837 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:49:27.235104 kubelet[2837]: I0213 19:49:27.235041 2837 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:49:27.237307 kubelet[2837]: I0213 19:49:27.237245 2837 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:49:27.237307 kubelet[2837]: I0213 19:49:27.237294 2837 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:49:27.237519 kubelet[2837]: I0213 19:49:27.237326 2837 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
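The lease-controller retry interval above starts at 200ms and doubles on each failure (400ms, 800ms, then 1.6s later in the log) — a standard exponential backoff. A toy sketch that reproduces the observed intervals; the cap is an assumption and this is not the kubelet's actual backoff code:

    // Mimics the doubling retry visible in the "Failed to ensure lease
    // exists" lines: 200ms -> 400ms -> 800ms -> 1.6s.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        const cap = 7 * time.Second // assumed upper bound for the sketch
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("attempt %d failed; retrying in %v\n", attempt, interval)
            time.Sleep(interval)
            if interval *= 2; interval > cap {
                interval = cap
            }
        }
    }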
Feb 13 19:49:27.237519 kubelet[2837]: I0213 19:49:27.237340 2837 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:49:27.237519 kubelet[2837]: E0213 19:49:27.237452 2837 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:49:27.247568 kubelet[2837]: W0213 19:49:27.247480 2837 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.215:6443: connect: connection refused Feb 13 19:49:27.247725 kubelet[2837]: E0213 19:49:27.247585 2837 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.215:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:27.249272 kubelet[2837]: I0213 19:49:27.249225 2837 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:49:27.249272 kubelet[2837]: I0213 19:49:27.249258 2837 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:49:27.249518 kubelet[2837]: I0213 19:49:27.249289 2837 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:27.253773 kubelet[2837]: I0213 19:49:27.253723 2837 policy_none.go:49] "None policy: Start" Feb 13 19:49:27.253773 kubelet[2837]: I0213 19:49:27.253765 2837 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:49:27.253969 kubelet[2837]: I0213 19:49:27.253791 2837 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:49:27.264667 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:49:27.289650 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:49:27.295191 kubelet[2837]: E0213 19:49:27.295125 2837 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-26-215\" not found" Feb 13 19:49:27.296669 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:49:27.306910 kubelet[2837]: I0213 19:49:27.306029 2837 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:49:27.306910 kubelet[2837]: I0213 19:49:27.306337 2837 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:49:27.306910 kubelet[2837]: I0213 19:49:27.306358 2837 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:49:27.306910 kubelet[2837]: I0213 19:49:27.306771 2837 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:49:27.310989 kubelet[2837]: E0213 19:49:27.310785 2837 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:49:27.310989 kubelet[2837]: E0213 19:49:27.310880 2837 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-215\" not found" Feb 13 19:49:27.357160 systemd[1]: Created slice kubepods-burstable-podd9b0992571b41b049d011a1ddd676de4.slice - libcontainer container kubepods-burstable-podd9b0992571b41b049d011a1ddd676de4.slice. 
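The eviction manager starting here will enforce the hard thresholds dumped in the NodeConfig earlier (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). An illustrative check of one percentage-based signal; the stats values are made-up inputs, where the real eviction manager reads them from cAdvisor:

    // Example evaluation of the nodefs.available < 10% hard threshold.
    package main

    import "fmt"

    type threshold struct {
        signal     string
        percentage float64 // fraction of capacity, e.g. 0.1 == 10%
    }

    func breached(t threshold, available, capacity uint64) bool {
        return float64(available) < t.percentage*float64(capacity)
    }

    func main() {
        t := threshold{signal: "nodefs.available", percentage: 0.10}
        var capacity uint64 = 20 << 30 // 20 GiB root fs (example value)
        var available uint64 = 1 << 30 // 1 GiB free (example value)
        if breached(t, available, capacity) {
            fmt.Printf("%s breached: eviction manager would reclaim\n", t.signal)
        }
    }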
Feb 13 19:49:27.373465 kubelet[2837]: E0213 19:49:27.373046 2837 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-215\" not found" node="ip-172-31-26-215" Feb 13 19:49:27.376150 systemd[1]: Created slice kubepods-burstable-poda36df5914d70eb6cd384d48840462f71.slice - libcontainer container kubepods-burstable-poda36df5914d70eb6cd384d48840462f71.slice. Feb 13 19:49:27.380520 kubelet[2837]: E0213 19:49:27.380476 2837 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-215\" not found" node="ip-172-31-26-215" Feb 13 19:49:27.384737 systemd[1]: Created slice kubepods-burstable-podd7a84ec80460e416dd4f3ec679d7c41c.slice - libcontainer container kubepods-burstable-podd7a84ec80460e416dd4f3ec679d7c41c.slice. Feb 13 19:49:27.388714 kubelet[2837]: E0213 19:49:27.388615 2837 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-215\" not found" node="ip-172-31-26-215" Feb 13 19:49:27.401509 kubelet[2837]: E0213 19:49:27.401439 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-215?timeout=10s\": dial tcp 172.31.26.215:6443: connect: connection refused" interval="400ms" Feb 13 19:49:27.409442 kubelet[2837]: I0213 19:49:27.409378 2837 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-26-215" Feb 13 19:49:27.410155 kubelet[2837]: E0213 19:49:27.410109 2837 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.26.215:6443/api/v1/nodes\": dial tcp 172.31.26.215:6443: connect: connection refused" node="ip-172-31-26-215" Feb 13 19:49:27.497620 kubelet[2837]: I0213 19:49:27.497504 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a36df5914d70eb6cd384d48840462f71-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-215\" (UID: \"a36df5914d70eb6cd384d48840462f71\") " pod="kube-system/kube-controller-manager-ip-172-31-26-215" Feb 13 19:49:27.497749 kubelet[2837]: I0213 19:49:27.497644 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9b0992571b41b049d011a1ddd676de4-ca-certs\") pod \"kube-apiserver-ip-172-31-26-215\" (UID: \"d9b0992571b41b049d011a1ddd676de4\") " pod="kube-system/kube-apiserver-ip-172-31-26-215" Feb 13 19:49:27.497749 kubelet[2837]: I0213 19:49:27.497726 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9b0992571b41b049d011a1ddd676de4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-215\" (UID: \"d9b0992571b41b049d011a1ddd676de4\") " pod="kube-system/kube-apiserver-ip-172-31-26-215" Feb 13 19:49:27.497870 kubelet[2837]: I0213 19:49:27.497806 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a36df5914d70eb6cd384d48840462f71-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-215\" (UID: \"a36df5914d70eb6cd384d48840462f71\") " pod="kube-system/kube-controller-manager-ip-172-31-26-215" Feb 13 19:49:27.497927 kubelet[2837]: I0213 19:49:27.497884 2837 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a36df5914d70eb6cd384d48840462f71-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-215\" (UID: \"a36df5914d70eb6cd384d48840462f71\") " pod="kube-system/kube-controller-manager-ip-172-31-26-215" Feb 13 19:49:27.497977 kubelet[2837]: I0213 19:49:27.497926 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a36df5914d70eb6cd384d48840462f71-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-215\" (UID: \"a36df5914d70eb6cd384d48840462f71\") " pod="kube-system/kube-controller-manager-ip-172-31-26-215" Feb 13 19:49:27.498048 kubelet[2837]: I0213 19:49:27.498003 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d7a84ec80460e416dd4f3ec679d7c41c-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-215\" (UID: \"d7a84ec80460e416dd4f3ec679d7c41c\") " pod="kube-system/kube-scheduler-ip-172-31-26-215" Feb 13 19:49:27.498134 kubelet[2837]: I0213 19:49:27.498100 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9b0992571b41b049d011a1ddd676de4-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-215\" (UID: \"d9b0992571b41b049d011a1ddd676de4\") " pod="kube-system/kube-apiserver-ip-172-31-26-215" Feb 13 19:49:27.498202 kubelet[2837]: I0213 19:49:27.498166 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a36df5914d70eb6cd384d48840462f71-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-215\" (UID: \"a36df5914d70eb6cd384d48840462f71\") " pod="kube-system/kube-controller-manager-ip-172-31-26-215" Feb 13 19:49:27.612118 kubelet[2837]: I0213 19:49:27.611968 2837 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-26-215" Feb 13 19:49:27.612765 kubelet[2837]: E0213 19:49:27.612698 2837 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.26.215:6443/api/v1/nodes\": dial tcp 172.31.26.215:6443: connect: connection refused" node="ip-172-31-26-215" Feb 13 19:49:27.675567 containerd[2015]: time="2025-02-13T19:49:27.675505504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-215,Uid:d9b0992571b41b049d011a1ddd676de4,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:27.682370 containerd[2015]: time="2025-02-13T19:49:27.682308640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-215,Uid:a36df5914d70eb6cd384d48840462f71,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:27.690959 containerd[2015]: time="2025-02-13T19:49:27.690886888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-215,Uid:d7a84ec80460e416dd4f3ec679d7c41c,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:27.802770 kubelet[2837]: E0213 19:49:27.802706 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-215?timeout=10s\": dial tcp 172.31.26.215:6443: connect: connection refused" interval="800ms" Feb 13 19:49:28.014991 kubelet[2837]: I0213 19:49:28.014848 2837 
kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-26-215" Feb 13 19:49:28.016108 kubelet[2837]: E0213 19:49:28.016057 2837 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.26.215:6443/api/v1/nodes\": dial tcp 172.31.26.215:6443: connect: connection refused" node="ip-172-31-26-215" Feb 13 19:49:28.223455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010802142.mount: Deactivated successfully. Feb 13 19:49:28.240166 containerd[2015]: time="2025-02-13T19:49:28.240075495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:28.242366 containerd[2015]: time="2025-02-13T19:49:28.242297271Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:28.244366 containerd[2015]: time="2025-02-13T19:49:28.244268499Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:49:28.246354 containerd[2015]: time="2025-02-13T19:49:28.246299775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:49:28.248614 containerd[2015]: time="2025-02-13T19:49:28.248544879Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:28.251537 containerd[2015]: time="2025-02-13T19:49:28.251332275Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:28.253558 containerd[2015]: time="2025-02-13T19:49:28.253068027Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:49:28.258184 containerd[2015]: time="2025-02-13T19:49:28.258110451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:28.262412 containerd[2015]: time="2025-02-13T19:49:28.262337571Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.876219ms" Feb 13 19:49:28.267201 containerd[2015]: time="2025-02-13T19:49:28.266859375Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.240231ms" Feb 13 19:49:28.274197 containerd[2015]: time="2025-02-13T19:49:28.273895155Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", 
repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 582.895851ms" Feb 13 19:49:28.294534 kubelet[2837]: W0213 19:49:28.294438 2837 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.215:6443: connect: connection refused Feb 13 19:49:28.295070 kubelet[2837]: E0213 19:49:28.294545 2837 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.215:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.215:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:28.329416 kubelet[2837]: W0213 19:49:28.328980 2837 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-215&limit=500&resourceVersion=0": dial tcp 172.31.26.215:6443: connect: connection refused Feb 13 19:49:28.329416 kubelet[2837]: E0213 19:49:28.329096 2837 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.215:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-215&limit=500&resourceVersion=0\": dial tcp 172.31.26.215:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:28.399239 kubelet[2837]: W0213 19:49:28.399164 2837 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.215:6443: connect: connection refused Feb 13 19:49:28.399366 kubelet[2837]: E0213 19:49:28.399245 2837 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.215:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.215:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:28.480640 containerd[2015]: time="2025-02-13T19:49:28.480220072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:28.480640 containerd[2015]: time="2025-02-13T19:49:28.480304732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:28.480640 containerd[2015]: time="2025-02-13T19:49:28.480330292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:28.480640 containerd[2015]: time="2025-02-13T19:49:28.480490168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:28.484013 containerd[2015]: time="2025-02-13T19:49:28.483794176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:28.484013 containerd[2015]: time="2025-02-13T19:49:28.483918016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:28.485263 containerd[2015]: time="2025-02-13T19:49:28.483967348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:28.487121 containerd[2015]: time="2025-02-13T19:49:28.485717512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:28.490195 containerd[2015]: time="2025-02-13T19:49:28.489993292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:28.490195 containerd[2015]: time="2025-02-13T19:49:28.490115644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:28.490511 containerd[2015]: time="2025-02-13T19:49:28.490202728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:28.490511 containerd[2015]: time="2025-02-13T19:49:28.490426924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:28.533202 systemd[1]: Started cri-containerd-d2d0ff414c2266f0f78d87e4ba10711ab3d696411c27a41b052260565487edc6.scope - libcontainer container d2d0ff414c2266f0f78d87e4ba10711ab3d696411c27a41b052260565487edc6. Feb 13 19:49:28.549712 systemd[1]: Started cri-containerd-f357d349c4174561b1cb1c467b1a3bb49dfa82bc492c1020fdaf13c23d7bf517.scope - libcontainer container f357d349c4174561b1cb1c467b1a3bb49dfa82bc492c1020fdaf13c23d7bf517. Feb 13 19:49:28.563345 systemd[1]: Started cri-containerd-751956cd794361438c88b85ba9edc4c82b682872a428cfdb9d8b79729733d751.scope - libcontainer container 751956cd794361438c88b85ba9edc4c82b682872a428cfdb9d8b79729733d751. 
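Each RunPodSandbox above yields a runc v2 shim and a systemd scope named cri-containerd-<id>.scope. Those same IDs are visible through the containerd Go client; a sketch that lists them, assuming the conventional socket path and the "k8s.io" namespace that CRI-managed containers live in:

    // Enumerate the containers containerd just created for the static pods.
    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // Kubernetes-managed containers live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            panic(err)
        }
        for _, c := range containers {
            // IDs match the cri-containerd-<id>.scope units systemd starts.
            fmt.Println(c.ID())
        }
    }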
Feb 13 19:49:28.603640 kubelet[2837]: E0213 19:49:28.603568 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-215?timeout=10s\": dial tcp 172.31.26.215:6443: connect: connection refused" interval="1.6s" Feb 13 19:49:28.660952 containerd[2015]: time="2025-02-13T19:49:28.659727473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-215,Uid:d7a84ec80460e416dd4f3ec679d7c41c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2d0ff414c2266f0f78d87e4ba10711ab3d696411c27a41b052260565487edc6\"" Feb 13 19:49:28.674756 containerd[2015]: time="2025-02-13T19:49:28.674702117Z" level=info msg="CreateContainer within sandbox \"d2d0ff414c2266f0f78d87e4ba10711ab3d696411c27a41b052260565487edc6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:49:28.678847 kubelet[2837]: W0213 19:49:28.678756 2837 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.215:6443: connect: connection refused Feb 13 19:49:28.679213 kubelet[2837]: E0213 19:49:28.679172 2837 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.215:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.215:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:28.683787 containerd[2015]: time="2025-02-13T19:49:28.683734289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-215,Uid:a36df5914d70eb6cd384d48840462f71,Namespace:kube-system,Attempt:0,} returns sandbox id \"f357d349c4174561b1cb1c467b1a3bb49dfa82bc492c1020fdaf13c23d7bf517\"" Feb 13 19:49:28.690653 containerd[2015]: time="2025-02-13T19:49:28.690507365Z" level=info msg="CreateContainer within sandbox \"f357d349c4174561b1cb1c467b1a3bb49dfa82bc492c1020fdaf13c23d7bf517\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:49:28.697378 containerd[2015]: time="2025-02-13T19:49:28.696684941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-215,Uid:d9b0992571b41b049d011a1ddd676de4,Namespace:kube-system,Attempt:0,} returns sandbox id \"751956cd794361438c88b85ba9edc4c82b682872a428cfdb9d8b79729733d751\"" Feb 13 19:49:28.703798 containerd[2015]: time="2025-02-13T19:49:28.703733417Z" level=info msg="CreateContainer within sandbox \"751956cd794361438c88b85ba9edc4c82b682872a428cfdb9d8b79729733d751\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:49:28.721429 containerd[2015]: time="2025-02-13T19:49:28.721220393Z" level=info msg="CreateContainer within sandbox \"d2d0ff414c2266f0f78d87e4ba10711ab3d696411c27a41b052260565487edc6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f5746dda18459fb2e5e8774283fbf79326aa13173d4dc0fd13073b2d17ea68e3\"" Feb 13 19:49:28.722329 containerd[2015]: time="2025-02-13T19:49:28.722285429Z" level=info msg="StartContainer for \"f5746dda18459fb2e5e8774283fbf79326aa13173d4dc0fd13073b2d17ea68e3\"" Feb 13 19:49:28.747948 containerd[2015]: time="2025-02-13T19:49:28.747605597Z" level=info msg="CreateContainer within sandbox \"f357d349c4174561b1cb1c467b1a3bb49dfa82bc492c1020fdaf13c23d7bf517\" 
for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1b4f3b8e2945dfc818d9815036120924812bd393bee522d6a13c4df97c3f73cb\"" Feb 13 19:49:28.750420 containerd[2015]: time="2025-02-13T19:49:28.749505425Z" level=info msg="StartContainer for \"1b4f3b8e2945dfc818d9815036120924812bd393bee522d6a13c4df97c3f73cb\"" Feb 13 19:49:28.763712 containerd[2015]: time="2025-02-13T19:49:28.763643177Z" level=info msg="CreateContainer within sandbox \"751956cd794361438c88b85ba9edc4c82b682872a428cfdb9d8b79729733d751\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f1dac0fd5fea34f4deb8ebac13ebd6475d5e03cd2c352a45a0c1c6b09ffcd1a1\"" Feb 13 19:49:28.767746 containerd[2015]: time="2025-02-13T19:49:28.767432957Z" level=info msg="StartContainer for \"f1dac0fd5fea34f4deb8ebac13ebd6475d5e03cd2c352a45a0c1c6b09ffcd1a1\"" Feb 13 19:49:28.780994 systemd[1]: Started cri-containerd-f5746dda18459fb2e5e8774283fbf79326aa13173d4dc0fd13073b2d17ea68e3.scope - libcontainer container f5746dda18459fb2e5e8774283fbf79326aa13173d4dc0fd13073b2d17ea68e3. Feb 13 19:49:28.822438 kubelet[2837]: I0213 19:49:28.821440 2837 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-26-215" Feb 13 19:49:28.824348 kubelet[2837]: E0213 19:49:28.823630 2837 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.26.215:6443/api/v1/nodes\": dial tcp 172.31.26.215:6443: connect: connection refused" node="ip-172-31-26-215" Feb 13 19:49:28.835581 systemd[1]: Started cri-containerd-1b4f3b8e2945dfc818d9815036120924812bd393bee522d6a13c4df97c3f73cb.scope - libcontainer container 1b4f3b8e2945dfc818d9815036120924812bd393bee522d6a13c4df97c3f73cb. Feb 13 19:49:28.866892 systemd[1]: Started cri-containerd-f1dac0fd5fea34f4deb8ebac13ebd6475d5e03cd2c352a45a0c1c6b09ffcd1a1.scope - libcontainer container f1dac0fd5fea34f4deb8ebac13ebd6475d5e03cd2c352a45a0c1c6b09ffcd1a1. 
Feb 13 19:49:28.920427 containerd[2015]: time="2025-02-13T19:49:28.918740910Z" level=info msg="StartContainer for \"f5746dda18459fb2e5e8774283fbf79326aa13173d4dc0fd13073b2d17ea68e3\" returns successfully" Feb 13 19:49:28.968849 containerd[2015]: time="2025-02-13T19:49:28.968366238Z" level=info msg="StartContainer for \"1b4f3b8e2945dfc818d9815036120924812bd393bee522d6a13c4df97c3f73cb\" returns successfully" Feb 13 19:49:29.004619 containerd[2015]: time="2025-02-13T19:49:29.004547523Z" level=info msg="StartContainer for \"f1dac0fd5fea34f4deb8ebac13ebd6475d5e03cd2c352a45a0c1c6b09ffcd1a1\" returns successfully" Feb 13 19:49:29.263543 kubelet[2837]: E0213 19:49:29.261629 2837 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-215\" not found" node="ip-172-31-26-215" Feb 13 19:49:29.270943 kubelet[2837]: E0213 19:49:29.270907 2837 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-215\" not found" node="ip-172-31-26-215" Feb 13 19:49:29.274367 kubelet[2837]: E0213 19:49:29.274044 2837 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-215\" not found" node="ip-172-31-26-215" Feb 13 19:49:30.278142 kubelet[2837]: E0213 19:49:30.277902 2837 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-215\" not found" node="ip-172-31-26-215" Feb 13 19:49:30.281806 kubelet[2837]: E0213 19:49:30.280825 2837 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-215\" not found" node="ip-172-31-26-215" Feb 13 19:49:30.428930 kubelet[2837]: I0213 19:49:30.426093 2837 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-26-215" Feb 13 19:49:32.765815 kubelet[2837]: E0213 19:49:32.765768 2837 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-215\" not found" node="ip-172-31-26-215" Feb 13 19:49:33.076626 kubelet[2837]: E0213 19:49:33.076501 2837 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-215\" not found" node="ip-172-31-26-215" Feb 13 19:49:33.158078 kubelet[2837]: E0213 19:49:33.158019 2837 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-215\" not found" node="ip-172-31-26-215" Feb 13 19:49:33.172807 kubelet[2837]: I0213 19:49:33.172593 2837 apiserver.go:52] "Watching apiserver" Feb 13 19:49:33.196631 kubelet[2837]: I0213 19:49:33.196554 2837 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:49:33.222891 kubelet[2837]: I0213 19:49:33.222553 2837 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-26-215" Feb 13 19:49:33.296428 kubelet[2837]: I0213 19:49:33.296270 2837 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-215" Feb 13 19:49:33.338151 kubelet[2837]: E0213 19:49:33.337693 2837 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-215\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-215" Feb 13 19:49:33.338151 kubelet[2837]: I0213 19:49:33.337736 2837 kubelet.go:3200] "Creating a mirror pod for 
static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-215" Feb 13 19:49:33.349421 kubelet[2837]: E0213 19:49:33.347945 2837 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-215\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-26-215" Feb 13 19:49:33.349421 kubelet[2837]: I0213 19:49:33.347999 2837 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-215" Feb 13 19:49:33.355898 kubelet[2837]: E0213 19:49:33.355825 2837 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-215\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-215" Feb 13 19:49:35.227238 systemd[1]: Reloading requested from client PID 3119 ('systemctl') (unit session-7.scope)... Feb 13 19:49:35.227445 systemd[1]: Reloading... Feb 13 19:49:35.492602 zram_generator::config[3162]: No configuration found. Feb 13 19:49:35.741857 kubelet[2837]: I0213 19:49:35.741492 2837 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-215" Feb 13 19:49:35.769868 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:49:35.970898 systemd[1]: Reloading finished in 742 ms. Feb 13 19:49:36.063675 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:36.079559 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:49:36.080052 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:36.080131 systemd[1]: kubelet.service: Consumed 2.097s CPU time, 123.4M memory peak, 0B memory swap peak. Feb 13 19:49:36.088990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:36.414797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:36.426307 (kubelet)[3219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:49:36.565419 kubelet[3219]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:36.565419 kubelet[3219]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:49:36.565419 kubelet[3219]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:49:36.568488 kubelet[3219]: I0213 19:49:36.566133 3219 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:49:36.588513 kubelet[3219]: I0213 19:49:36.587991 3219 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:49:36.588513 kubelet[3219]: I0213 19:49:36.588044 3219 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:49:36.588938 kubelet[3219]: I0213 19:49:36.588901 3219 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:49:36.592377 kubelet[3219]: I0213 19:49:36.592317 3219 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:49:36.602775 kubelet[3219]: I0213 19:49:36.601474 3219 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:49:36.618275 kubelet[3219]: E0213 19:49:36.618192 3219 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:49:36.618582 kubelet[3219]: I0213 19:49:36.618347 3219 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:49:36.626962 kubelet[3219]: I0213 19:49:36.626881 3219 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:49:36.627348 sudo[3233]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:49:36.628096 sudo[3233]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:49:36.629066 kubelet[3219]: I0213 19:49:36.628983 3219 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:49:36.629760 kubelet[3219]: I0213 19:49:36.629161 3219 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-26-215","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:49:36.630230 kubelet[3219]: I0213 19:49:36.630037 3219 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:49:36.630230 kubelet[3219]: I0213 19:49:36.630153 3219 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:49:36.630590 kubelet[3219]: I0213 19:49:36.630502 3219 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:36.631177 kubelet[3219]: I0213 19:49:36.631141 3219 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:49:36.631372 kubelet[3219]: I0213 19:49:36.631287 3219 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:49:36.631600 kubelet[3219]: I0213 19:49:36.631330 3219 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:49:36.631600 kubelet[3219]: I0213 19:49:36.631536 3219 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:49:36.636341 kubelet[3219]: I0213 19:49:36.636207 3219 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:49:36.644559 kubelet[3219]: I0213 19:49:36.644352 3219 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:49:36.657609 kubelet[3219]: I0213 19:49:36.657406 3219 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:49:36.657609 kubelet[3219]: I0213 19:49:36.657511 3219 server.go:1287] "Started kubelet" Feb 13 19:49:36.662235 kubelet[3219]: I0213 19:49:36.662073 3219 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:49:36.662732 kubelet[3219]: I0213 19:49:36.662646 3219 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:49:36.663134 kubelet[3219]: I0213 19:49:36.663093 3219 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:49:36.666608 kubelet[3219]: I0213 19:49:36.665121 3219 server.go:490] 
"Adding debug handlers to kubelet server" Feb 13 19:49:36.679431 kubelet[3219]: I0213 19:49:36.679158 3219 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:49:36.719855 kubelet[3219]: I0213 19:49:36.719759 3219 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:49:36.724665 kubelet[3219]: I0213 19:49:36.724620 3219 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:49:36.724851 kubelet[3219]: I0213 19:49:36.724832 3219 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:49:36.726329 kubelet[3219]: I0213 19:49:36.725469 3219 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:49:36.726329 kubelet[3219]: I0213 19:49:36.725500 3219 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:49:36.726329 kubelet[3219]: E0213 19:49:36.725582 3219 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:49:36.726329 kubelet[3219]: I0213 19:49:36.719779 3219 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:49:36.739170 kubelet[3219]: I0213 19:49:36.737794 3219 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:49:36.739170 kubelet[3219]: E0213 19:49:36.738155 3219 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-26-215\" not found" Feb 13 19:49:36.739170 kubelet[3219]: I0213 19:49:36.738542 3219 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:49:36.739170 kubelet[3219]: I0213 19:49:36.738771 3219 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:49:36.808866 kubelet[3219]: I0213 19:49:36.808810 3219 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:49:36.808866 kubelet[3219]: I0213 19:49:36.808851 3219 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:49:36.809032 kubelet[3219]: I0213 19:49:36.808986 3219 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:49:36.829620 kubelet[3219]: E0213 19:49:36.829543 3219 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:49:36.935435 kubelet[3219]: I0213 19:49:36.933338 3219 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:49:36.935435 kubelet[3219]: I0213 19:49:36.933372 3219 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:49:36.935435 kubelet[3219]: I0213 19:49:36.933435 3219 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:36.935435 kubelet[3219]: I0213 19:49:36.934582 3219 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:49:36.935435 kubelet[3219]: I0213 19:49:36.934611 3219 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:49:36.935435 kubelet[3219]: I0213 19:49:36.934645 3219 policy_none.go:49] "None policy: Start" Feb 13 19:49:36.935435 kubelet[3219]: I0213 19:49:36.934666 3219 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:49:36.935435 kubelet[3219]: I0213 19:49:36.934687 3219 state_mem.go:35] "Initializing new 
in-memory state store" Feb 13 19:49:36.935435 kubelet[3219]: I0213 19:49:36.934877 3219 state_mem.go:75] "Updated machine memory state" Feb 13 19:49:36.947106 kubelet[3219]: I0213 19:49:36.947068 3219 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:49:36.949759 kubelet[3219]: I0213 19:49:36.949727 3219 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:49:36.951455 kubelet[3219]: I0213 19:49:36.950765 3219 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:49:36.951455 kubelet[3219]: I0213 19:49:36.951255 3219 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:49:36.961218 kubelet[3219]: E0213 19:49:36.958920 3219 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:49:37.032418 kubelet[3219]: I0213 19:49:37.032319 3219 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-215" Feb 13 19:49:37.034446 kubelet[3219]: I0213 19:49:37.032687 3219 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-215" Feb 13 19:49:37.034861 kubelet[3219]: I0213 19:49:37.032331 3219 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-215" Feb 13 19:49:37.043828 kubelet[3219]: E0213 19:49:37.043785 3219 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-215\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-215" Feb 13 19:49:37.044749 kubelet[3219]: I0213 19:49:37.044106 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d7a84ec80460e416dd4f3ec679d7c41c-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-215\" (UID: \"d7a84ec80460e416dd4f3ec679d7c41c\") " pod="kube-system/kube-scheduler-ip-172-31-26-215" Feb 13 19:49:37.044749 kubelet[3219]: I0213 19:49:37.044253 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9b0992571b41b049d011a1ddd676de4-ca-certs\") pod \"kube-apiserver-ip-172-31-26-215\" (UID: \"d9b0992571b41b049d011a1ddd676de4\") " pod="kube-system/kube-apiserver-ip-172-31-26-215" Feb 13 19:49:37.044749 kubelet[3219]: I0213 19:49:37.044292 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9b0992571b41b049d011a1ddd676de4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-215\" (UID: \"d9b0992571b41b049d011a1ddd676de4\") " pod="kube-system/kube-apiserver-ip-172-31-26-215" Feb 13 19:49:37.044749 kubelet[3219]: I0213 19:49:37.044334 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a36df5914d70eb6cd384d48840462f71-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-215\" (UID: \"a36df5914d70eb6cd384d48840462f71\") " pod="kube-system/kube-controller-manager-ip-172-31-26-215" Feb 13 19:49:37.044749 kubelet[3219]: I0213 19:49:37.044372 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/a36df5914d70eb6cd384d48840462f71-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-215\" (UID: \"a36df5914d70eb6cd384d48840462f71\") " pod="kube-system/kube-controller-manager-ip-172-31-26-215" Feb 13 19:49:37.045071 kubelet[3219]: I0213 19:49:37.044458 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9b0992571b41b049d011a1ddd676de4-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-215\" (UID: \"d9b0992571b41b049d011a1ddd676de4\") " pod="kube-system/kube-apiserver-ip-172-31-26-215" Feb 13 19:49:37.045071 kubelet[3219]: I0213 19:49:37.044496 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a36df5914d70eb6cd384d48840462f71-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-215\" (UID: \"a36df5914d70eb6cd384d48840462f71\") " pod="kube-system/kube-controller-manager-ip-172-31-26-215" Feb 13 19:49:37.045071 kubelet[3219]: I0213 19:49:37.044532 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a36df5914d70eb6cd384d48840462f71-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-215\" (UID: \"a36df5914d70eb6cd384d48840462f71\") " pod="kube-system/kube-controller-manager-ip-172-31-26-215" Feb 13 19:49:37.045071 kubelet[3219]: I0213 19:49:37.044600 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a36df5914d70eb6cd384d48840462f71-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-215\" (UID: \"a36df5914d70eb6cd384d48840462f71\") " pod="kube-system/kube-controller-manager-ip-172-31-26-215" Feb 13 19:49:37.083254 kubelet[3219]: I0213 19:49:37.082078 3219 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-26-215" Feb 13 19:49:37.101811 kubelet[3219]: I0213 19:49:37.100923 3219 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-26-215" Feb 13 19:49:37.101811 kubelet[3219]: I0213 19:49:37.101032 3219 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-26-215" Feb 13 19:49:37.591005 sudo[3233]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:37.633010 kubelet[3219]: I0213 19:49:37.632777 3219 apiserver.go:52] "Watching apiserver" Feb 13 19:49:37.639360 kubelet[3219]: I0213 19:49:37.639282 3219 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:49:37.777345 kubelet[3219]: I0213 19:49:37.777216 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-215" podStartSLOduration=0.777177854 podStartE2EDuration="777.177854ms" podCreationTimestamp="2025-02-13 19:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:37.758722694 +0000 UTC m=+1.325035496" watchObservedRunningTime="2025-02-13 19:49:37.777177854 +0000 UTC m=+1.343490644" Feb 13 19:49:37.795862 kubelet[3219]: I0213 19:49:37.795610 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-215" podStartSLOduration=0.795564998 podStartE2EDuration="795.564998ms" podCreationTimestamp="2025-02-13 
19:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:37.79341455 +0000 UTC m=+1.359727352" watchObservedRunningTime="2025-02-13 19:49:37.795564998 +0000 UTC m=+1.361877788" Feb 13 19:49:37.795862 kubelet[3219]: I0213 19:49:37.795782 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-215" podStartSLOduration=2.795773738 podStartE2EDuration="2.795773738s" podCreationTimestamp="2025-02-13 19:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:37.777678806 +0000 UTC m=+1.343991620" watchObservedRunningTime="2025-02-13 19:49:37.795773738 +0000 UTC m=+1.362086540" Feb 13 19:49:37.855331 kubelet[3219]: I0213 19:49:37.855118 3219 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-215" Feb 13 19:49:37.873204 kubelet[3219]: E0213 19:49:37.872841 3219 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-215\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-215" Feb 13 19:49:39.995428 sudo[2335]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:40.020733 sshd[2332]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:40.025909 systemd[1]: sshd@6-172.31.26.215:22-139.178.89.65:51190.service: Deactivated successfully. Feb 13 19:49:40.030514 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:49:40.031499 systemd[1]: session-7.scope: Consumed 12.172s CPU time, 151.7M memory peak, 0B memory swap peak. Feb 13 19:49:40.034163 systemd-logind[1994]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:49:40.036610 systemd-logind[1994]: Removed session 7. Feb 13 19:49:40.442529 update_engine[1997]: I20250213 19:49:40.442425 1997 update_attempter.cc:509] Updating boot flags... Feb 13 19:49:40.526590 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3302) Feb 13 19:49:40.835469 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3306) Feb 13 19:49:41.020970 kubelet[3219]: I0213 19:49:41.018476 3219 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:49:41.020970 kubelet[3219]: I0213 19:49:41.019703 3219 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:49:41.022307 containerd[2015]: time="2025-02-13T19:49:41.019192814Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:49:41.301529 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3306) Feb 13 19:49:41.660343 systemd[1]: Created slice kubepods-besteffort-pod98670348_bd89_4fea_9a80_7799b398db7b.slice - libcontainer container kubepods-besteffort-pod98670348_bd89_4fea_9a80_7799b398db7b.slice. Feb 13 19:49:41.690273 systemd[1]: Created slice kubepods-burstable-pod80795f0c_15af_47cd_acd8_0c80cd0663c0.slice - libcontainer container kubepods-burstable-pod80795f0c_15af_47cd_acd8_0c80cd0663c0.slice. 
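The slice units systemd creates above follow the systemd cgroup driver's naming convention: kubepods-<qos>-pod<uid>.slice, with dashes in the pod UID mapped to underscores. A small helper reproducing the names seen in the log (covers the besteffort/burstable cases shown; a convention sketch, not kubelet source):

    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("besteffort", "98670348-bd89-4fea-9a80-7799b398db7b"))
        // kubepods-besteffort-pod98670348_bd89_4fea_9a80_7799b398db7b.slice
    }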
Feb 13 19:49:41.781037 kubelet[3219]: I0213 19:49:41.780922 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/98670348-bd89-4fea-9a80-7799b398db7b-kube-proxy\") pod \"kube-proxy-h5kxm\" (UID: \"98670348-bd89-4fea-9a80-7799b398db7b\") " pod="kube-system/kube-proxy-h5kxm" Feb 13 19:49:41.781037 kubelet[3219]: I0213 19:49:41.780999 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80795f0c-15af-47cd-acd8-0c80cd0663c0-cilium-config-path\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.781037 kubelet[3219]: I0213 19:49:41.781040 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-cilium-run\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.782342 kubelet[3219]: I0213 19:49:41.781081 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-host-proc-sys-kernel\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.782342 kubelet[3219]: I0213 19:49:41.781123 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-bpf-maps\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.782342 kubelet[3219]: I0213 19:49:41.781159 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-cilium-cgroup\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.782342 kubelet[3219]: I0213 19:49:41.781200 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-cni-path\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.782342 kubelet[3219]: I0213 19:49:41.781252 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-host-proc-sys-net\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.782342 kubelet[3219]: I0213 19:49:41.781288 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80795f0c-15af-47cd-acd8-0c80cd0663c0-hubble-tls\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.782677 kubelet[3219]: I0213 19:49:41.781370 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-xtables-lock\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.782677 kubelet[3219]: I0213 19:49:41.781433 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98670348-bd89-4fea-9a80-7799b398db7b-xtables-lock\") pod \"kube-proxy-h5kxm\" (UID: \"98670348-bd89-4fea-9a80-7799b398db7b\") " pod="kube-system/kube-proxy-h5kxm" Feb 13 19:49:41.782677 kubelet[3219]: I0213 19:49:41.781474 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-hostproc\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.782677 kubelet[3219]: I0213 19:49:41.781512 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-etc-cni-netd\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.782677 kubelet[3219]: I0213 19:49:41.781548 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80795f0c-15af-47cd-acd8-0c80cd0663c0-clustermesh-secrets\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.782677 kubelet[3219]: I0213 19:49:41.781607 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98670348-bd89-4fea-9a80-7799b398db7b-lib-modules\") pod \"kube-proxy-h5kxm\" (UID: \"98670348-bd89-4fea-9a80-7799b398db7b\") " pod="kube-system/kube-proxy-h5kxm" Feb 13 19:49:41.782949 kubelet[3219]: I0213 19:49:41.781651 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27nmp\" (UniqueName: \"kubernetes.io/projected/80795f0c-15af-47cd-acd8-0c80cd0663c0-kube-api-access-27nmp\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.782949 kubelet[3219]: I0213 19:49:41.781696 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrzvv\" (UniqueName: \"kubernetes.io/projected/98670348-bd89-4fea-9a80-7799b398db7b-kube-api-access-wrzvv\") pod \"kube-proxy-h5kxm\" (UID: \"98670348-bd89-4fea-9a80-7799b398db7b\") " pod="kube-system/kube-proxy-h5kxm" Feb 13 19:49:41.782949 kubelet[3219]: I0213 19:49:41.781734 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-lib-modules\") pod \"cilium-h4wpc\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " pod="kube-system/cilium-h4wpc" Feb 13 19:49:41.907428 kubelet[3219]: E0213 19:49:41.901866 3219 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:49:41.907428 kubelet[3219]: E0213 19:49:41.904441 3219 projected.go:194] Error preparing data for projected volume kube-api-access-wrzvv for pod 
kube-system/kube-proxy-h5kxm: configmap "kube-root-ca.crt" not found Feb 13 19:49:41.907428 kubelet[3219]: E0213 19:49:41.904569 3219 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98670348-bd89-4fea-9a80-7799b398db7b-kube-api-access-wrzvv podName:98670348-bd89-4fea-9a80-7799b398db7b nodeName:}" failed. No retries permitted until 2025-02-13 19:49:42.404537147 +0000 UTC m=+5.970849937 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wrzvv" (UniqueName: "kubernetes.io/projected/98670348-bd89-4fea-9a80-7799b398db7b-kube-api-access-wrzvv") pod "kube-proxy-h5kxm" (UID: "98670348-bd89-4fea-9a80-7799b398db7b") : configmap "kube-root-ca.crt" not found Feb 13 19:49:41.929884 kubelet[3219]: E0213 19:49:41.929637 3219 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:49:41.929884 kubelet[3219]: E0213 19:49:41.929799 3219 projected.go:194] Error preparing data for projected volume kube-api-access-27nmp for pod kube-system/cilium-h4wpc: configmap "kube-root-ca.crt" not found Feb 13 19:49:41.933559 kubelet[3219]: E0213 19:49:41.933497 3219 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80795f0c-15af-47cd-acd8-0c80cd0663c0-kube-api-access-27nmp podName:80795f0c-15af-47cd-acd8-0c80cd0663c0 nodeName:}" failed. No retries permitted until 2025-02-13 19:49:42.431237591 +0000 UTC m=+5.997550381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-27nmp" (UniqueName: "kubernetes.io/projected/80795f0c-15af-47cd-acd8-0c80cd0663c0-kube-api-access-27nmp") pod "cilium-h4wpc" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0") : configmap "kube-root-ca.crt" not found Feb 13 19:49:42.104084 kubelet[3219]: I0213 19:49:42.103040 3219 status_manager.go:890] "Failed to get status for pod" podUID="7f135082-5ca3-4925-bcb9-78764085bbf1" pod="kube-system/cilium-operator-6c4d7847fc-v8ff6" err="pods \"cilium-operator-6c4d7847fc-v8ff6\" is forbidden: User \"system:node:ip-172-31-26-215\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-215' and this object" Feb 13 19:49:42.107914 systemd[1]: Created slice kubepods-besteffort-pod7f135082_5ca3_4925_bcb9_78764085bbf1.slice - libcontainer container kubepods-besteffort-pod7f135082_5ca3_4925_bcb9_78764085bbf1.slice. 
Feb 13 19:49:42.185105 kubelet[3219]: I0213 19:49:42.184968 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwjm8\" (UniqueName: \"kubernetes.io/projected/7f135082-5ca3-4925-bcb9-78764085bbf1-kube-api-access-kwjm8\") pod \"cilium-operator-6c4d7847fc-v8ff6\" (UID: \"7f135082-5ca3-4925-bcb9-78764085bbf1\") " pod="kube-system/cilium-operator-6c4d7847fc-v8ff6" Feb 13 19:49:42.185478 kubelet[3219]: I0213 19:49:42.185323 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f135082-5ca3-4925-bcb9-78764085bbf1-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-v8ff6\" (UID: \"7f135082-5ca3-4925-bcb9-78764085bbf1\") " pod="kube-system/cilium-operator-6c4d7847fc-v8ff6" Feb 13 19:49:42.418611 containerd[2015]: time="2025-02-13T19:49:42.418092005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v8ff6,Uid:7f135082-5ca3-4925-bcb9-78764085bbf1,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:42.483028 containerd[2015]: time="2025-02-13T19:49:42.482157065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:42.483905 containerd[2015]: time="2025-02-13T19:49:42.483763097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:42.483905 containerd[2015]: time="2025-02-13T19:49:42.483821849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:42.484234 containerd[2015]: time="2025-02-13T19:49:42.484164977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:42.526718 systemd[1]: Started cri-containerd-272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0.scope - libcontainer container 272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0. Feb 13 19:49:42.578580 containerd[2015]: time="2025-02-13T19:49:42.578464470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h5kxm,Uid:98670348-bd89-4fea-9a80-7799b398db7b,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:42.595210 containerd[2015]: time="2025-02-13T19:49:42.595157214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v8ff6,Uid:7f135082-5ca3-4925-bcb9-78764085bbf1,Namespace:kube-system,Attempt:0,} returns sandbox id \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\"" Feb 13 19:49:42.599624 containerd[2015]: time="2025-02-13T19:49:42.599558538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h4wpc,Uid:80795f0c-15af-47cd-acd8-0c80cd0663c0,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:42.600736 containerd[2015]: time="2025-02-13T19:49:42.600664062Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:49:42.626591 containerd[2015]: time="2025-02-13T19:49:42.626425074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:42.626591 containerd[2015]: time="2025-02-13T19:49:42.626540250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:42.628474 containerd[2015]: time="2025-02-13T19:49:42.626578458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:42.628474 containerd[2015]: time="2025-02-13T19:49:42.626743170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:42.665734 systemd[1]: Started cri-containerd-ec4d24dde7d24913806f2e9530fec04f08c88f5bd842b6d444570c757b5ab513.scope - libcontainer container ec4d24dde7d24913806f2e9530fec04f08c88f5bd842b6d444570c757b5ab513. Feb 13 19:49:42.675578 containerd[2015]: time="2025-02-13T19:49:42.674543622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:42.675578 containerd[2015]: time="2025-02-13T19:49:42.675013026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:42.675578 containerd[2015]: time="2025-02-13T19:49:42.675089382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:42.675578 containerd[2015]: time="2025-02-13T19:49:42.675511542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:42.709937 systemd[1]: Started cri-containerd-57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22.scope - libcontainer container 57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22. Feb 13 19:49:42.749317 containerd[2015]: time="2025-02-13T19:49:42.749161915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h5kxm,Uid:98670348-bd89-4fea-9a80-7799b398db7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec4d24dde7d24913806f2e9530fec04f08c88f5bd842b6d444570c757b5ab513\"" Feb 13 19:49:42.759365 containerd[2015]: time="2025-02-13T19:49:42.759294667Z" level=info msg="CreateContainer within sandbox \"ec4d24dde7d24913806f2e9530fec04f08c88f5bd842b6d444570c757b5ab513\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:49:42.796502 containerd[2015]: time="2025-02-13T19:49:42.796442551Z" level=info msg="CreateContainer within sandbox \"ec4d24dde7d24913806f2e9530fec04f08c88f5bd842b6d444570c757b5ab513\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4139ea7140ba692b0569d588915b8db6de23093bb65d01d959c47e1b992b694c\"" Feb 13 19:49:42.796925 containerd[2015]: time="2025-02-13T19:49:42.796755607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h4wpc,Uid:80795f0c-15af-47cd-acd8-0c80cd0663c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\"" Feb 13 19:49:42.798702 containerd[2015]: time="2025-02-13T19:49:42.798638095Z" level=info msg="StartContainer for \"4139ea7140ba692b0569d588915b8db6de23093bb65d01d959c47e1b992b694c\"" Feb 13 19:49:42.846803 systemd[1]: Started cri-containerd-4139ea7140ba692b0569d588915b8db6de23093bb65d01d959c47e1b992b694c.scope - libcontainer container 4139ea7140ba692b0569d588915b8db6de23093bb65d01d959c47e1b992b694c. 
Feb 13 19:49:42.925554 containerd[2015]: time="2025-02-13T19:49:42.921947096Z" level=info msg="StartContainer for \"4139ea7140ba692b0569d588915b8db6de23093bb65d01d959c47e1b992b694c\" returns successfully" Feb 13 19:49:43.917739 kubelet[3219]: I0213 19:49:43.917644 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h5kxm" podStartSLOduration=2.917619093 podStartE2EDuration="2.917619093s" podCreationTimestamp="2025-02-13 19:49:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:43.903333705 +0000 UTC m=+7.469646519" watchObservedRunningTime="2025-02-13 19:49:43.917619093 +0000 UTC m=+7.483931907" Feb 13 19:49:44.429212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706733773.mount: Deactivated successfully. Feb 13 19:49:45.285849 containerd[2015]: time="2025-02-13T19:49:45.285766411Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:45.288121 containerd[2015]: time="2025-02-13T19:49:45.288050383Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:49:45.289699 containerd[2015]: time="2025-02-13T19:49:45.289621423Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:45.292600 containerd[2015]: time="2025-02-13T19:49:45.292544731Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.691804145s" Feb 13 19:49:45.292922 containerd[2015]: time="2025-02-13T19:49:45.292774507Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:49:45.296378 containerd[2015]: time="2025-02-13T19:49:45.295997371Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:49:45.298688 containerd[2015]: time="2025-02-13T19:49:45.298610719Z" level=info msg="CreateContainer within sandbox \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:49:45.320982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712796628.mount: Deactivated successfully. 
Feb 13 19:49:45.329323 containerd[2015]: time="2025-02-13T19:49:45.329234876Z" level=info msg="CreateContainer within sandbox \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\"" Feb 13 19:49:45.330166 containerd[2015]: time="2025-02-13T19:49:45.329903336Z" level=info msg="StartContainer for \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\"" Feb 13 19:49:45.380708 systemd[1]: Started cri-containerd-a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d.scope - libcontainer container a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d. Feb 13 19:49:45.430190 containerd[2015]: time="2025-02-13T19:49:45.430134440Z" level=info msg="StartContainer for \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\" returns successfully" Feb 13 19:49:46.876416 kubelet[3219]: I0213 19:49:46.876315 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-v8ff6" podStartSLOduration=2.179800602 podStartE2EDuration="4.876291839s" podCreationTimestamp="2025-02-13 19:49:42 +0000 UTC" firstStartedPulling="2025-02-13 19:49:42.597885102 +0000 UTC m=+6.164197880" lastFinishedPulling="2025-02-13 19:49:45.294376315 +0000 UTC m=+8.860689117" observedRunningTime="2025-02-13 19:49:45.943009691 +0000 UTC m=+9.509322541" watchObservedRunningTime="2025-02-13 19:49:46.876291839 +0000 UTC m=+10.442604629" Feb 13 19:49:51.501232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935367252.mount: Deactivated successfully. Feb 13 19:49:54.148415 containerd[2015]: time="2025-02-13T19:49:54.148328307Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:54.150578 containerd[2015]: time="2025-02-13T19:49:54.150436227Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:49:54.153007 containerd[2015]: time="2025-02-13T19:49:54.152960019Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:54.156902 containerd[2015]: time="2025-02-13T19:49:54.156669759Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.860603064s" Feb 13 19:49:54.156902 containerd[2015]: time="2025-02-13T19:49:54.156735075Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:49:54.162209 containerd[2015]: time="2025-02-13T19:49:54.161980984Z" level=info msg="CreateContainer within sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:49:54.187488 containerd[2015]: 
time="2025-02-13T19:49:54.187369960Z" level=info msg="CreateContainer within sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe\"" Feb 13 19:49:54.188221 containerd[2015]: time="2025-02-13T19:49:54.188119816Z" level=info msg="StartContainer for \"2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe\"" Feb 13 19:49:54.243737 systemd[1]: Started cri-containerd-2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe.scope - libcontainer container 2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe. Feb 13 19:49:54.292612 containerd[2015]: time="2025-02-13T19:49:54.292374748Z" level=info msg="StartContainer for \"2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe\" returns successfully" Feb 13 19:49:54.313086 systemd[1]: cri-containerd-2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe.scope: Deactivated successfully. Feb 13 19:49:54.352357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe-rootfs.mount: Deactivated successfully. Feb 13 19:49:55.287569 containerd[2015]: time="2025-02-13T19:49:55.287457809Z" level=info msg="shim disconnected" id=2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe namespace=k8s.io Feb 13 19:49:55.287569 containerd[2015]: time="2025-02-13T19:49:55.287553377Z" level=warning msg="cleaning up after shim disconnected" id=2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe namespace=k8s.io Feb 13 19:49:55.287569 containerd[2015]: time="2025-02-13T19:49:55.287578481Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:49:55.946890 containerd[2015]: time="2025-02-13T19:49:55.946656536Z" level=info msg="CreateContainer within sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:49:55.973580 containerd[2015]: time="2025-02-13T19:49:55.973032152Z" level=info msg="CreateContainer within sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a\"" Feb 13 19:49:55.976146 containerd[2015]: time="2025-02-13T19:49:55.976092849Z" level=info msg="StartContainer for \"9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a\"" Feb 13 19:49:56.038701 systemd[1]: Started cri-containerd-9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a.scope - libcontainer container 9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a. Feb 13 19:49:56.083982 containerd[2015]: time="2025-02-13T19:49:56.083807405Z" level=info msg="StartContainer for \"9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a\" returns successfully" Feb 13 19:49:56.107999 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:49:56.108682 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:49:56.108796 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:49:56.116078 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:49:56.116524 systemd[1]: cri-containerd-9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a.scope: Deactivated successfully. 
Feb 13 19:49:56.164480 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:49:56.172417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a-rootfs.mount: Deactivated successfully. Feb 13 19:49:56.176981 containerd[2015]: time="2025-02-13T19:49:56.176754750Z" level=info msg="shim disconnected" id=9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a namespace=k8s.io Feb 13 19:49:56.176981 containerd[2015]: time="2025-02-13T19:49:56.176849466Z" level=warning msg="cleaning up after shim disconnected" id=9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a namespace=k8s.io Feb 13 19:49:56.176981 containerd[2015]: time="2025-02-13T19:49:56.176873670Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:49:56.947353 containerd[2015]: time="2025-02-13T19:49:56.946945821Z" level=info msg="CreateContainer within sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:49:56.991302 containerd[2015]: time="2025-02-13T19:49:56.990906430Z" level=info msg="CreateContainer within sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9\"" Feb 13 19:49:56.994007 containerd[2015]: time="2025-02-13T19:49:56.993942874Z" level=info msg="StartContainer for \"187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9\"" Feb 13 19:49:57.054460 systemd[1]: Started cri-containerd-187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9.scope - libcontainer container 187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9. Feb 13 19:49:57.122006 containerd[2015]: time="2025-02-13T19:49:57.121906002Z" level=info msg="StartContainer for \"187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9\" returns successfully" Feb 13 19:49:57.126079 systemd[1]: cri-containerd-187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9.scope: Deactivated successfully. Feb 13 19:49:57.177016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9-rootfs.mount: Deactivated successfully. 
Feb 13 19:49:57.180622 containerd[2015]: time="2025-02-13T19:49:57.180533754Z" level=info msg="shim disconnected" id=187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9 namespace=k8s.io Feb 13 19:49:57.180622 containerd[2015]: time="2025-02-13T19:49:57.180613062Z" level=warning msg="cleaning up after shim disconnected" id=187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9 namespace=k8s.io Feb 13 19:49:57.180932 containerd[2015]: time="2025-02-13T19:49:57.180634746Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:49:57.957009 containerd[2015]: time="2025-02-13T19:49:57.956760478Z" level=info msg="CreateContainer within sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:49:57.989448 containerd[2015]: time="2025-02-13T19:49:57.987978779Z" level=info msg="CreateContainer within sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1\"" Feb 13 19:49:57.993427 containerd[2015]: time="2025-02-13T19:49:57.990744599Z" level=info msg="StartContainer for \"32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1\"" Feb 13 19:49:58.055720 systemd[1]: Started cri-containerd-32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1.scope - libcontainer container 32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1. Feb 13 19:49:58.103065 systemd[1]: cri-containerd-32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1.scope: Deactivated successfully. Feb 13 19:49:58.105358 containerd[2015]: time="2025-02-13T19:49:58.104940511Z" level=info msg="StartContainer for \"32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1\" returns successfully" Feb 13 19:49:58.142060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1-rootfs.mount: Deactivated successfully. 
Feb 13 19:49:58.145076 containerd[2015]: time="2025-02-13T19:49:58.144901555Z" level=info msg="shim disconnected" id=32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1 namespace=k8s.io Feb 13 19:49:58.145076 containerd[2015]: time="2025-02-13T19:49:58.145033951Z" level=warning msg="cleaning up after shim disconnected" id=32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1 namespace=k8s.io Feb 13 19:49:58.145554 containerd[2015]: time="2025-02-13T19:49:58.145297735Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:49:58.966997 containerd[2015]: time="2025-02-13T19:49:58.966931787Z" level=info msg="CreateContainer within sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:49:58.999052 containerd[2015]: time="2025-02-13T19:49:58.998692668Z" level=info msg="CreateContainer within sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\"" Feb 13 19:49:59.001925 containerd[2015]: time="2025-02-13T19:49:59.001589708Z" level=info msg="StartContainer for \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\"" Feb 13 19:49:59.069716 systemd[1]: Started cri-containerd-ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15.scope - libcontainer container ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15. Feb 13 19:49:59.122971 containerd[2015]: time="2025-02-13T19:49:59.121052072Z" level=info msg="StartContainer for \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\" returns successfully" Feb 13 19:49:59.338712 kubelet[3219]: I0213 19:49:59.338653 3219 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:49:59.402008 systemd[1]: Created slice kubepods-burstable-pode4ea1f42_b736_42d0_9684_39e1dd15feb5.slice - libcontainer container kubepods-burstable-pode4ea1f42_b736_42d0_9684_39e1dd15feb5.slice. Feb 13 19:49:59.418692 systemd[1]: Created slice kubepods-burstable-pod71862c2e_28e2_4ab4_8030_c0125e764f00.slice - libcontainer container kubepods-burstable-pod71862c2e_28e2_4ab4_8030_c0125e764f00.slice. 
Feb 13 19:49:59.422426 kubelet[3219]: I0213 19:49:59.422302 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71862c2e-28e2-4ab4-8030-c0125e764f00-config-volume\") pod \"coredns-668d6bf9bc-dw52m\" (UID: \"71862c2e-28e2-4ab4-8030-c0125e764f00\") " pod="kube-system/coredns-668d6bf9bc-dw52m" Feb 13 19:49:59.422426 kubelet[3219]: I0213 19:49:59.422375 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jtr6\" (UniqueName: \"kubernetes.io/projected/71862c2e-28e2-4ab4-8030-c0125e764f00-kube-api-access-8jtr6\") pod \"coredns-668d6bf9bc-dw52m\" (UID: \"71862c2e-28e2-4ab4-8030-c0125e764f00\") " pod="kube-system/coredns-668d6bf9bc-dw52m" Feb 13 19:49:59.422639 kubelet[3219]: I0213 19:49:59.422456 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5smr\" (UniqueName: \"kubernetes.io/projected/e4ea1f42-b736-42d0-9684-39e1dd15feb5-kube-api-access-c5smr\") pod \"coredns-668d6bf9bc-kv4nj\" (UID: \"e4ea1f42-b736-42d0-9684-39e1dd15feb5\") " pod="kube-system/coredns-668d6bf9bc-kv4nj" Feb 13 19:49:59.422639 kubelet[3219]: I0213 19:49:59.422513 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4ea1f42-b736-42d0-9684-39e1dd15feb5-config-volume\") pod \"coredns-668d6bf9bc-kv4nj\" (UID: \"e4ea1f42-b736-42d0-9684-39e1dd15feb5\") " pod="kube-system/coredns-668d6bf9bc-kv4nj" Feb 13 19:49:59.717506 containerd[2015]: time="2025-02-13T19:49:59.715706195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kv4nj,Uid:e4ea1f42-b736-42d0-9684-39e1dd15feb5,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:59.729323 containerd[2015]: time="2025-02-13T19:49:59.729262787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dw52m,Uid:71862c2e-28e2-4ab4-8030-c0125e764f00,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:00.038930 kubelet[3219]: I0213 19:50:00.038834 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h4wpc" podStartSLOduration=7.679871245 podStartE2EDuration="19.038806749s" podCreationTimestamp="2025-02-13 19:49:41 +0000 UTC" firstStartedPulling="2025-02-13 19:49:42.800366635 +0000 UTC m=+6.366679413" lastFinishedPulling="2025-02-13 19:49:54.159302127 +0000 UTC m=+17.725614917" observedRunningTime="2025-02-13 19:50:00.033965481 +0000 UTC m=+23.600278283" watchObservedRunningTime="2025-02-13 19:50:00.038806749 +0000 UTC m=+23.605119527" Feb 13 19:50:02.013306 (udev-worker)[4279]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:02.017347 systemd-networkd[1922]: cilium_host: Link UP Feb 13 19:50:02.019194 systemd-networkd[1922]: cilium_net: Link UP Feb 13 19:50:02.019208 systemd-networkd[1922]: cilium_net: Gained carrier Feb 13 19:50:02.020209 systemd-networkd[1922]: cilium_host: Gained carrier Feb 13 19:50:02.020786 systemd-networkd[1922]: cilium_host: Gained IPv6LL Feb 13 19:50:02.021770 (udev-worker)[4316]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:50:02.210611 systemd-networkd[1922]: cilium_vxlan: Link UP Feb 13 19:50:02.210633 systemd-networkd[1922]: cilium_vxlan: Gained carrier Feb 13 19:50:02.280727 systemd-networkd[1922]: cilium_net: Gained IPv6LL Feb 13 19:50:02.702550 kernel: NET: Registered PF_ALG protocol family Feb 13 19:50:04.022479 systemd-networkd[1922]: lxc_health: Link UP Feb 13 19:50:04.055673 systemd-networkd[1922]: lxc_health: Gained carrier Feb 13 19:50:04.057504 (udev-worker)[4326]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:04.208584 systemd-networkd[1922]: cilium_vxlan: Gained IPv6LL Feb 13 19:50:04.334224 systemd-networkd[1922]: lxcac78397a8fb5: Link UP Feb 13 19:50:04.346237 kernel: eth0: renamed from tmpf20f5 Feb 13 19:50:04.348985 (udev-worker)[4330]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:04.351818 systemd-networkd[1922]: lxcac78397a8fb5: Gained carrier Feb 13 19:50:04.354159 systemd-networkd[1922]: lxccb1c7935deef: Link UP Feb 13 19:50:04.366210 kernel: eth0: renamed from tmpcaa29 Feb 13 19:50:04.370876 systemd-networkd[1922]: lxccb1c7935deef: Gained carrier Feb 13 19:50:05.424643 systemd-networkd[1922]: lxccb1c7935deef: Gained IPv6LL Feb 13 19:50:05.745091 systemd-networkd[1922]: lxc_health: Gained IPv6LL Feb 13 19:50:06.257563 systemd-networkd[1922]: lxcac78397a8fb5: Gained IPv6LL Feb 13 19:50:08.814635 ntpd[1988]: Listen normally on 8 cilium_host 192.168.0.60:123 Feb 13 19:50:08.815959 ntpd[1988]: 13 Feb 19:50:08 ntpd[1988]: Listen normally on 8 cilium_host 192.168.0.60:123 Feb 13 19:50:08.815959 ntpd[1988]: 13 Feb 19:50:08 ntpd[1988]: Listen normally on 9 cilium_net [fe80::803a:9eff:fec6:fd34%4]:123 Feb 13 19:50:08.815959 ntpd[1988]: 13 Feb 19:50:08 ntpd[1988]: Listen normally on 10 cilium_host [fe80::c63:b1ff:feba:56be%5]:123 Feb 13 19:50:08.815959 ntpd[1988]: 13 Feb 19:50:08 ntpd[1988]: Listen normally on 11 cilium_vxlan [fe80::8ca2:c2ff:fe24:b9ed%6]:123 Feb 13 19:50:08.815959 ntpd[1988]: 13 Feb 19:50:08 ntpd[1988]: Listen normally on 12 lxc_health [fe80::11:4fff:fe79:e3a6%8]:123 Feb 13 19:50:08.815959 ntpd[1988]: 13 Feb 19:50:08 ntpd[1988]: Listen normally on 13 lxcac78397a8fb5 [fe80::3c1b:4bff:feec:f484%10]:123 Feb 13 19:50:08.815959 ntpd[1988]: 13 Feb 19:50:08 ntpd[1988]: Listen normally on 14 lxccb1c7935deef [fe80::6c2f:53ff:fe1e:8388%12]:123 Feb 13 19:50:08.814758 ntpd[1988]: Listen normally on 9 cilium_net [fe80::803a:9eff:fec6:fd34%4]:123 Feb 13 19:50:08.814839 ntpd[1988]: Listen normally on 10 cilium_host [fe80::c63:b1ff:feba:56be%5]:123 Feb 13 19:50:08.814906 ntpd[1988]: Listen normally on 11 cilium_vxlan [fe80::8ca2:c2ff:fe24:b9ed%6]:123 Feb 13 19:50:08.814975 ntpd[1988]: Listen normally on 12 lxc_health [fe80::11:4fff:fe79:e3a6%8]:123 Feb 13 19:50:08.815042 ntpd[1988]: Listen normally on 13 lxcac78397a8fb5 [fe80::3c1b:4bff:feec:f484%10]:123 Feb 13 19:50:08.815119 ntpd[1988]: Listen normally on 14 lxccb1c7935deef [fe80::6c2f:53ff:fe1e:8388%12]:123 Feb 13 19:50:11.088328 kubelet[3219]: I0213 19:50:11.087045 3219 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:50:12.813078 containerd[2015]: time="2025-02-13T19:50:12.812729952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:12.813078 containerd[2015]: time="2025-02-13T19:50:12.812824152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:12.813078 containerd[2015]: time="2025-02-13T19:50:12.812851116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:12.813078 containerd[2015]: time="2025-02-13T19:50:12.813005484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:12.865718 systemd[1]: Started cri-containerd-caa294361759a04302a5143162f0e1b87067e0c76d6164905477f67ad8563598.scope - libcontainer container caa294361759a04302a5143162f0e1b87067e0c76d6164905477f67ad8563598. Feb 13 19:50:12.892277 containerd[2015]: time="2025-02-13T19:50:12.891639205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:12.892277 containerd[2015]: time="2025-02-13T19:50:12.891736477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:12.892277 containerd[2015]: time="2025-02-13T19:50:12.891763909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:12.892277 containerd[2015]: time="2025-02-13T19:50:12.891911317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:12.961362 systemd[1]: run-containerd-runc-k8s.io-f20f52afb9172a303d7549bf832222458553e12d5bd082ffbecdb0c311a61c67-runc.3PdsUO.mount: Deactivated successfully. Feb 13 19:50:12.978738 systemd[1]: Started cri-containerd-f20f52afb9172a303d7549bf832222458553e12d5bd082ffbecdb0c311a61c67.scope - libcontainer container f20f52afb9172a303d7549bf832222458553e12d5bd082ffbecdb0c311a61c67. 
Feb 13 19:50:13.023725 containerd[2015]: time="2025-02-13T19:50:13.023649705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dw52m,Uid:71862c2e-28e2-4ab4-8030-c0125e764f00,Namespace:kube-system,Attempt:0,} returns sandbox id \"caa294361759a04302a5143162f0e1b87067e0c76d6164905477f67ad8563598\"" Feb 13 19:50:13.035872 containerd[2015]: time="2025-02-13T19:50:13.035362245Z" level=info msg="CreateContainer within sandbox \"caa294361759a04302a5143162f0e1b87067e0c76d6164905477f67ad8563598\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:50:13.079932 containerd[2015]: time="2025-02-13T19:50:13.079770141Z" level=info msg="CreateContainer within sandbox \"caa294361759a04302a5143162f0e1b87067e0c76d6164905477f67ad8563598\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e18295da4dd1867850864e4904de762e20ac242e1479fe779924c5459d91cb6\"" Feb 13 19:50:13.085288 containerd[2015]: time="2025-02-13T19:50:13.084832173Z" level=info msg="StartContainer for \"4e18295da4dd1867850864e4904de762e20ac242e1479fe779924c5459d91cb6\"" Feb 13 19:50:13.122991 containerd[2015]: time="2025-02-13T19:50:13.122794558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kv4nj,Uid:e4ea1f42-b736-42d0-9684-39e1dd15feb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f20f52afb9172a303d7549bf832222458553e12d5bd082ffbecdb0c311a61c67\"" Feb 13 19:50:13.133841 containerd[2015]: time="2025-02-13T19:50:13.133751830Z" level=info msg="CreateContainer within sandbox \"f20f52afb9172a303d7549bf832222458553e12d5bd082ffbecdb0c311a61c67\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:50:13.158724 systemd[1]: Started cri-containerd-4e18295da4dd1867850864e4904de762e20ac242e1479fe779924c5459d91cb6.scope - libcontainer container 4e18295da4dd1867850864e4904de762e20ac242e1479fe779924c5459d91cb6. Feb 13 19:50:13.178664 containerd[2015]: time="2025-02-13T19:50:13.178572958Z" level=info msg="CreateContainer within sandbox \"f20f52afb9172a303d7549bf832222458553e12d5bd082ffbecdb0c311a61c67\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f4bf9ff04eae229aea40fdb248ee60615cc0ab0401967253fbf50a944019bac\"" Feb 13 19:50:13.179711 containerd[2015]: time="2025-02-13T19:50:13.179628046Z" level=info msg="StartContainer for \"4f4bf9ff04eae229aea40fdb248ee60615cc0ab0401967253fbf50a944019bac\"" Feb 13 19:50:13.251481 containerd[2015]: time="2025-02-13T19:50:13.251056990Z" level=info msg="StartContainer for \"4e18295da4dd1867850864e4904de762e20ac242e1479fe779924c5459d91cb6\" returns successfully" Feb 13 19:50:13.273727 systemd[1]: Started cri-containerd-4f4bf9ff04eae229aea40fdb248ee60615cc0ab0401967253fbf50a944019bac.scope - libcontainer container 4f4bf9ff04eae229aea40fdb248ee60615cc0ab0401967253fbf50a944019bac. 
Feb 13 19:50:13.382235 containerd[2015]: time="2025-02-13T19:50:13.381986471Z" level=info msg="StartContainer for \"4f4bf9ff04eae229aea40fdb248ee60615cc0ab0401967253fbf50a944019bac\" returns successfully" Feb 13 19:50:14.069810 kubelet[3219]: I0213 19:50:14.068971 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kv4nj" podStartSLOduration=32.068910106 podStartE2EDuration="32.068910106s" podCreationTimestamp="2025-02-13 19:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:14.062600422 +0000 UTC m=+37.628913236" watchObservedRunningTime="2025-02-13 19:50:14.068910106 +0000 UTC m=+37.635222896" Feb 13 19:50:14.870937 systemd[1]: Started sshd@7-172.31.26.215:22-139.178.89.65:42522.service - OpenSSH per-connection server daemon (139.178.89.65:42522). Feb 13 19:50:15.058783 sshd[4848]: Accepted publickey for core from 139.178.89.65 port 42522 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:15.061566 sshd[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:15.069517 systemd-logind[1994]: New session 8 of user core. Feb 13 19:50:15.077658 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:50:15.336730 sshd[4848]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:15.342978 systemd[1]: sshd@7-172.31.26.215:22-139.178.89.65:42522.service: Deactivated successfully. Feb 13 19:50:15.348269 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:50:15.350626 systemd-logind[1994]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:50:15.353222 systemd-logind[1994]: Removed session 8. Feb 13 19:50:20.373369 systemd[1]: Started sshd@8-172.31.26.215:22-139.178.89.65:42538.service - OpenSSH per-connection server daemon (139.178.89.65:42538). Feb 13 19:50:20.553488 sshd[4864]: Accepted publickey for core from 139.178.89.65 port 42538 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:20.556229 sshd[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:20.565246 systemd-logind[1994]: New session 9 of user core. Feb 13 19:50:20.576684 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:50:20.816437 sshd[4864]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:20.821174 systemd[1]: sshd@8-172.31.26.215:22-139.178.89.65:42538.service: Deactivated successfully. Feb 13 19:50:20.825995 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:50:20.828172 systemd-logind[1994]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:50:20.831741 systemd-logind[1994]: Removed session 9. Feb 13 19:50:25.855939 systemd[1]: Started sshd@9-172.31.26.215:22-139.178.89.65:53620.service - OpenSSH per-connection server daemon (139.178.89.65:53620). Feb 13 19:50:26.039321 sshd[4879]: Accepted publickey for core from 139.178.89.65 port 53620 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:26.042087 sshd[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:26.049518 systemd-logind[1994]: New session 10 of user core. Feb 13 19:50:26.056630 systemd[1]: Started session-10.scope - Session 10 of User core. 
Feb 13 19:50:26.300663 sshd[4879]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:26.307040 systemd[1]: sshd@9-172.31.26.215:22-139.178.89.65:53620.service: Deactivated successfully. Feb 13 19:50:26.311969 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:50:26.313977 systemd-logind[1994]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:50:26.316235 systemd-logind[1994]: Removed session 10. Feb 13 19:50:31.344923 systemd[1]: Started sshd@10-172.31.26.215:22-139.178.89.65:53634.service - OpenSSH per-connection server daemon (139.178.89.65:53634). Feb 13 19:50:31.513449 sshd[4893]: Accepted publickey for core from 139.178.89.65 port 53634 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:31.516434 sshd[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:31.525054 systemd-logind[1994]: New session 11 of user core. Feb 13 19:50:31.534685 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:50:31.780418 sshd[4893]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:31.785684 systemd[1]: sshd@10-172.31.26.215:22-139.178.89.65:53634.service: Deactivated successfully. Feb 13 19:50:31.790062 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:50:31.795122 systemd-logind[1994]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:50:31.797233 systemd-logind[1994]: Removed session 11. Feb 13 19:50:31.818049 systemd[1]: Started sshd@11-172.31.26.215:22-139.178.89.65:53648.service - OpenSSH per-connection server daemon (139.178.89.65:53648). Feb 13 19:50:31.997971 sshd[4907]: Accepted publickey for core from 139.178.89.65 port 53648 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:32.000692 sshd[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:32.009537 systemd-logind[1994]: New session 12 of user core. Feb 13 19:50:32.015668 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:50:32.337167 sshd[4907]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:32.349264 systemd[1]: sshd@11-172.31.26.215:22-139.178.89.65:53648.service: Deactivated successfully. Feb 13 19:50:32.355057 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:50:32.359928 systemd-logind[1994]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:50:32.380965 systemd[1]: Started sshd@12-172.31.26.215:22-139.178.89.65:53664.service - OpenSSH per-connection server daemon (139.178.89.65:53664). Feb 13 19:50:32.383520 systemd-logind[1994]: Removed session 12. Feb 13 19:50:32.560507 sshd[4917]: Accepted publickey for core from 139.178.89.65 port 53664 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:32.563673 sshd[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:32.572717 systemd-logind[1994]: New session 13 of user core. Feb 13 19:50:32.583683 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:50:32.828809 sshd[4917]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:32.835085 systemd[1]: sshd@12-172.31.26.215:22-139.178.89.65:53664.service: Deactivated successfully. Feb 13 19:50:32.838320 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:50:32.841498 systemd-logind[1994]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:50:32.844469 systemd-logind[1994]: Removed session 13. 
Feb 13 19:50:37.867928 systemd[1]: Started sshd@13-172.31.26.215:22-139.178.89.65:59758.service - OpenSSH per-connection server daemon (139.178.89.65:59758). Feb 13 19:50:38.054249 sshd[4932]: Accepted publickey for core from 139.178.89.65 port 59758 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:38.058195 sshd[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:38.067268 systemd-logind[1994]: New session 14 of user core. Feb 13 19:50:38.076691 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:50:38.322710 sshd[4932]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:38.327795 systemd[1]: sshd@13-172.31.26.215:22-139.178.89.65:59758.service: Deactivated successfully. Feb 13 19:50:38.331093 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:50:38.337731 systemd-logind[1994]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:50:38.340537 systemd-logind[1994]: Removed session 14. Feb 13 19:50:43.370925 systemd[1]: Started sshd@14-172.31.26.215:22-139.178.89.65:59772.service - OpenSSH per-connection server daemon (139.178.89.65:59772). Feb 13 19:50:43.542140 sshd[4948]: Accepted publickey for core from 139.178.89.65 port 59772 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:43.544805 sshd[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:43.553721 systemd-logind[1994]: New session 15 of user core. Feb 13 19:50:43.560685 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:50:43.804198 sshd[4948]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:43.810375 systemd[1]: sshd@14-172.31.26.215:22-139.178.89.65:59772.service: Deactivated successfully. Feb 13 19:50:43.815553 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:50:43.821104 systemd-logind[1994]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:50:43.822761 systemd-logind[1994]: Removed session 15. Feb 13 19:50:48.843904 systemd[1]: Started sshd@15-172.31.26.215:22-139.178.89.65:60268.service - OpenSSH per-connection server daemon (139.178.89.65:60268). Feb 13 19:50:49.017197 sshd[4961]: Accepted publickey for core from 139.178.89.65 port 60268 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:49.020261 sshd[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:49.029754 systemd-logind[1994]: New session 16 of user core. Feb 13 19:50:49.037687 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:50:49.279257 sshd[4961]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:49.285207 systemd[1]: sshd@15-172.31.26.215:22-139.178.89.65:60268.service: Deactivated successfully. Feb 13 19:50:49.289018 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:50:49.290504 systemd-logind[1994]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:50:49.292532 systemd-logind[1994]: Removed session 16. Feb 13 19:50:54.327881 systemd[1]: Started sshd@16-172.31.26.215:22-139.178.89.65:60276.service - OpenSSH per-connection server daemon (139.178.89.65:60276). 
Feb 13 19:50:54.493653 sshd[4974]: Accepted publickey for core from 139.178.89.65 port 60276 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:54.496350 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:54.503980 systemd-logind[1994]: New session 17 of user core. Feb 13 19:50:54.514758 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:50:54.764229 sshd[4974]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:54.770009 systemd-logind[1994]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:50:54.770541 systemd[1]: sshd@16-172.31.26.215:22-139.178.89.65:60276.service: Deactivated successfully. Feb 13 19:50:54.775748 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:50:54.780297 systemd-logind[1994]: Removed session 17. Feb 13 19:50:54.804992 systemd[1]: Started sshd@17-172.31.26.215:22-139.178.89.65:50160.service - OpenSSH per-connection server daemon (139.178.89.65:50160). Feb 13 19:50:54.977197 sshd[4987]: Accepted publickey for core from 139.178.89.65 port 50160 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:54.981880 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:54.990978 systemd-logind[1994]: New session 18 of user core. Feb 13 19:50:54.998683 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:50:55.289745 sshd[4987]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:55.295970 systemd[1]: sshd@17-172.31.26.215:22-139.178.89.65:50160.service: Deactivated successfully. Feb 13 19:50:55.301053 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:50:55.303836 systemd-logind[1994]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:50:55.306155 systemd-logind[1994]: Removed session 18. Feb 13 19:50:55.328918 systemd[1]: Started sshd@18-172.31.26.215:22-139.178.89.65:50170.service - OpenSSH per-connection server daemon (139.178.89.65:50170). Feb 13 19:50:55.499370 sshd[4998]: Accepted publickey for core from 139.178.89.65 port 50170 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:55.502243 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:55.510047 systemd-logind[1994]: New session 19 of user core. Feb 13 19:50:55.519735 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:50:56.665002 sshd[4998]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:56.673606 systemd[1]: sshd@18-172.31.26.215:22-139.178.89.65:50170.service: Deactivated successfully. Feb 13 19:50:56.680743 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:50:56.689260 systemd-logind[1994]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:50:56.713935 systemd[1]: Started sshd@19-172.31.26.215:22-139.178.89.65:50178.service - OpenSSH per-connection server daemon (139.178.89.65:50178). Feb 13 19:50:56.722769 systemd-logind[1994]: Removed session 19. Feb 13 19:50:56.922128 sshd[5013]: Accepted publickey for core from 139.178.89.65 port 50178 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:56.925732 sshd[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:56.936180 systemd-logind[1994]: New session 20 of user core. Feb 13 19:50:56.941677 systemd[1]: Started session-20.scope - Session 20 of User core. 
Feb 13 19:50:57.452837 sshd[5013]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:57.458924 systemd[1]: sshd@19-172.31.26.215:22-139.178.89.65:50178.service: Deactivated successfully. Feb 13 19:50:57.462197 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:50:57.467217 systemd-logind[1994]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:50:57.469098 systemd-logind[1994]: Removed session 20. Feb 13 19:50:57.492896 systemd[1]: Started sshd@20-172.31.26.215:22-139.178.89.65:50182.service - OpenSSH per-connection server daemon (139.178.89.65:50182). Feb 13 19:50:57.664731 sshd[5027]: Accepted publickey for core from 139.178.89.65 port 50182 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:57.667366 sshd[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:57.675051 systemd-logind[1994]: New session 21 of user core. Feb 13 19:50:57.687643 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:50:57.923901 sshd[5027]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:57.930445 systemd[1]: sshd@20-172.31.26.215:22-139.178.89.65:50182.service: Deactivated successfully. Feb 13 19:50:57.935149 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:50:57.936678 systemd-logind[1994]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:50:57.939045 systemd-logind[1994]: Removed session 21. Feb 13 19:51:02.963990 systemd[1]: Started sshd@21-172.31.26.215:22-139.178.89.65:50196.service - OpenSSH per-connection server daemon (139.178.89.65:50196). Feb 13 19:51:03.136857 sshd[5041]: Accepted publickey for core from 139.178.89.65 port 50196 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:03.139815 sshd[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:03.149124 systemd-logind[1994]: New session 22 of user core. Feb 13 19:51:03.157666 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:51:03.396160 sshd[5041]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:03.401970 systemd-logind[1994]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:51:03.403121 systemd[1]: sshd@21-172.31.26.215:22-139.178.89.65:50196.service: Deactivated successfully. Feb 13 19:51:03.406298 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:51:03.410894 systemd-logind[1994]: Removed session 22. Feb 13 19:51:08.439895 systemd[1]: Started sshd@22-172.31.26.215:22-139.178.89.65:32978.service - OpenSSH per-connection server daemon (139.178.89.65:32978). Feb 13 19:51:08.611891 sshd[5056]: Accepted publickey for core from 139.178.89.65 port 32978 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:08.615702 sshd[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:08.631738 systemd-logind[1994]: New session 23 of user core. Feb 13 19:51:08.639775 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:51:08.876746 sshd[5056]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:08.881595 systemd-logind[1994]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:51:08.882932 systemd[1]: sshd@22-172.31.26.215:22-139.178.89.65:32978.service: Deactivated successfully. Feb 13 19:51:08.886986 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:51:08.891065 systemd-logind[1994]: Removed session 23. 
Feb 13 19:51:13.916947 systemd[1]: Started sshd@23-172.31.26.215:22-139.178.89.65:32992.service - OpenSSH per-connection server daemon (139.178.89.65:32992). Feb 13 19:51:14.098978 sshd[5071]: Accepted publickey for core from 139.178.89.65 port 32992 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:14.102013 sshd[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:14.110573 systemd-logind[1994]: New session 24 of user core. Feb 13 19:51:14.119655 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:51:14.355765 sshd[5071]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:14.362168 systemd[1]: sshd@23-172.31.26.215:22-139.178.89.65:32992.service: Deactivated successfully. Feb 13 19:51:14.367443 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:51:14.369357 systemd-logind[1994]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:51:14.372072 systemd-logind[1994]: Removed session 24. Feb 13 19:51:19.394169 systemd[1]: Started sshd@24-172.31.26.215:22-139.178.89.65:43310.service - OpenSSH per-connection server daemon (139.178.89.65:43310). Feb 13 19:51:19.570733 sshd[5084]: Accepted publickey for core from 139.178.89.65 port 43310 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:19.573341 sshd[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:19.583257 systemd-logind[1994]: New session 25 of user core. Feb 13 19:51:19.590677 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:51:19.828106 sshd[5084]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:19.834348 systemd[1]: sshd@24-172.31.26.215:22-139.178.89.65:43310.service: Deactivated successfully. Feb 13 19:51:19.839002 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:51:19.840761 systemd-logind[1994]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:51:19.843169 systemd-logind[1994]: Removed session 25. Feb 13 19:51:19.871933 systemd[1]: Started sshd@25-172.31.26.215:22-139.178.89.65:43326.service - OpenSSH per-connection server daemon (139.178.89.65:43326). Feb 13 19:51:20.054233 sshd[5097]: Accepted publickey for core from 139.178.89.65 port 43326 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:20.056912 sshd[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:20.066482 systemd-logind[1994]: New session 26 of user core. Feb 13 19:51:20.073652 systemd[1]: Started session-26.scope - Session 26 of User core. 
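systemd tracks each of these connections as a transient per-connection unit whose name embeds both socket endpoints, e.g. sshd@25-172.31.26.215:22-139.178.89.65:43326.service: an incrementing counter, the local address:port, and the peer address:port. A short sketch, assuming that naming holds and that the addresses are IPv4 (an IPv6 peer would need different splitting), which takes such a unit name back apart:

    def parse_sshd_unit(unit: str):
        """'sshd@25-172.31.26.215:22-139.178.89.65:43326.service' ->
        (25, '172.31.26.215', 22, '139.178.89.65', 43326)"""
        body = unit.removeprefix("sshd@").removesuffix(".service")
        counter, local, peer = body.split("-", 2)
        lhost, lport = local.rsplit(":", 1)
        phost, pport = peer.rsplit(":", 1)
        return int(counter), lhost, int(lport), phost, int(pport)

    # Unit name copied from the record above:
    print(parse_sshd_unit("sshd@25-172.31.26.215:22-139.178.89.65:43326.service"))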
Feb 13 19:51:22.324426 kubelet[3219]: I0213 19:51:22.321802 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dw52m" podStartSLOduration=100.321780845 podStartE2EDuration="1m40.321780845s" podCreationTimestamp="2025-02-13 19:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:14.124260611 +0000 UTC m=+37.690573413" watchObservedRunningTime="2025-02-13 19:51:22.321780845 +0000 UTC m=+105.888093635" Feb 13 19:51:22.370599 containerd[2015]: time="2025-02-13T19:51:22.370525002Z" level=info msg="StopContainer for \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\" with timeout 30 (s)" Feb 13 19:51:22.374884 containerd[2015]: time="2025-02-13T19:51:22.374348622Z" level=info msg="Stop container \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\" with signal terminated" Feb 13 19:51:22.402588 systemd[1]: cri-containerd-a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d.scope: Deactivated successfully. Feb 13 19:51:22.409010 containerd[2015]: time="2025-02-13T19:51:22.408938046Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:51:22.428889 containerd[2015]: time="2025-02-13T19:51:22.428696778Z" level=info msg="StopContainer for \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\" with timeout 2 (s)" Feb 13 19:51:22.429704 containerd[2015]: time="2025-02-13T19:51:22.429564186Z" level=info msg="Stop container \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\" with signal terminated" Feb 13 19:51:22.450190 systemd-networkd[1922]: lxc_health: Link DOWN Feb 13 19:51:22.450205 systemd-networkd[1922]: lxc_health: Lost carrier Feb 13 19:51:22.468289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d-rootfs.mount: Deactivated successfully. Feb 13 19:51:22.489534 containerd[2015]: time="2025-02-13T19:51:22.489446874Z" level=info msg="shim disconnected" id=a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d namespace=k8s.io Feb 13 19:51:22.489534 containerd[2015]: time="2025-02-13T19:51:22.489524622Z" level=warning msg="cleaning up after shim disconnected" id=a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d namespace=k8s.io Feb 13 19:51:22.490003 containerd[2015]: time="2025-02-13T19:51:22.489552642Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:22.492776 systemd[1]: cri-containerd-ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15.scope: Deactivated successfully. Feb 13 19:51:22.493254 systemd[1]: cri-containerd-ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15.scope: Consumed 14.468s CPU time. 
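The StopContainer records above spell out the grace-period contract: the runtime delivers the configured stop signal ("with signal terminated") and waits up to the stated timeout, 30 s for one container and 2 s for the cilium-agent container, before it would escalate; the scope then deactivates and the shim's rootfs mount is cleaned up. A rough local-process analogy of that term-then-kill order, POSIX-only and not the CRI implementation itself:

    import signal
    import subprocess

    def stop_with_grace(proc: subprocess.Popen, timeout: float) -> int:
        """SIGTERM first, SIGKILL only after `timeout` seconds, mirroring the
        'with signal terminated ... timeout N' sequence in the records above."""
        proc.send_signal(signal.SIGTERM)
        try:
            return proc.wait(timeout=timeout)
        except subprocess.TimeoutExpired:
            proc.kill()              # grace period expired; escalate to SIGKILL
            return proc.wait()

    p = subprocess.Popen(["sleep", "60"])
    print(stop_with_grace(p, timeout=2))   # -15: exited on SIGTERM within the grace period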
Feb 13 19:51:22.527719 containerd[2015]: time="2025-02-13T19:51:22.527533926Z" level=info msg="StopContainer for \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\" returns successfully" Feb 13 19:51:22.528871 containerd[2015]: time="2025-02-13T19:51:22.528655926Z" level=info msg="StopPodSandbox for \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\"" Feb 13 19:51:22.528871 containerd[2015]: time="2025-02-13T19:51:22.528729234Z" level=info msg="Container to stop \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:22.535131 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0-shm.mount: Deactivated successfully. Feb 13 19:51:22.546589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15-rootfs.mount: Deactivated successfully. Feb 13 19:51:22.551597 systemd[1]: cri-containerd-272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0.scope: Deactivated successfully. Feb 13 19:51:22.557452 containerd[2015]: time="2025-02-13T19:51:22.557194495Z" level=info msg="shim disconnected" id=ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15 namespace=k8s.io Feb 13 19:51:22.557452 containerd[2015]: time="2025-02-13T19:51:22.557264755Z" level=warning msg="cleaning up after shim disconnected" id=ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15 namespace=k8s.io Feb 13 19:51:22.557452 containerd[2015]: time="2025-02-13T19:51:22.557284891Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:22.585148 containerd[2015]: time="2025-02-13T19:51:22.584947327Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:51:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:51:22.591806 containerd[2015]: time="2025-02-13T19:51:22.591642535Z" level=info msg="StopContainer for \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\" returns successfully" Feb 13 19:51:22.592767 containerd[2015]: time="2025-02-13T19:51:22.592344331Z" level=info msg="StopPodSandbox for \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\"" Feb 13 19:51:22.592767 containerd[2015]: time="2025-02-13T19:51:22.592738711Z" level=info msg="Container to stop \"9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:22.593012 containerd[2015]: time="2025-02-13T19:51:22.592772515Z" level=info msg="Container to stop \"32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:22.593012 containerd[2015]: time="2025-02-13T19:51:22.592796383Z" level=info msg="Container to stop \"2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:22.593012 containerd[2015]: time="2025-02-13T19:51:22.592819219Z" level=info msg="Container to stop \"187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:22.593012 containerd[2015]: time="2025-02-13T19:51:22.592845451Z" level=info msg="Container to 
stop \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:51:22.598563 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22-shm.mount: Deactivated successfully. Feb 13 19:51:22.616671 systemd[1]: cri-containerd-57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22.scope: Deactivated successfully. Feb 13 19:51:22.621050 containerd[2015]: time="2025-02-13T19:51:22.620715979Z" level=info msg="shim disconnected" id=272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0 namespace=k8s.io Feb 13 19:51:22.621050 containerd[2015]: time="2025-02-13T19:51:22.620862451Z" level=warning msg="cleaning up after shim disconnected" id=272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0 namespace=k8s.io Feb 13 19:51:22.621050 containerd[2015]: time="2025-02-13T19:51:22.620886067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:22.665461 containerd[2015]: time="2025-02-13T19:51:22.664712011Z" level=info msg="TearDown network for sandbox \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\" successfully" Feb 13 19:51:22.665461 containerd[2015]: time="2025-02-13T19:51:22.665189803Z" level=info msg="StopPodSandbox for \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\" returns successfully" Feb 13 19:51:22.679700 containerd[2015]: time="2025-02-13T19:51:22.679223803Z" level=info msg="shim disconnected" id=57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22 namespace=k8s.io Feb 13 19:51:22.679700 containerd[2015]: time="2025-02-13T19:51:22.679300039Z" level=warning msg="cleaning up after shim disconnected" id=57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22 namespace=k8s.io Feb 13 19:51:22.679700 containerd[2015]: time="2025-02-13T19:51:22.679319971Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:22.708419 containerd[2015]: time="2025-02-13T19:51:22.707830831Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:51:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:51:22.710175 containerd[2015]: time="2025-02-13T19:51:22.710130859Z" level=info msg="TearDown network for sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" successfully" Feb 13 19:51:22.710364 containerd[2015]: time="2025-02-13T19:51:22.710334235Z" level=info msg="StopPodSandbox for \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" returns successfully" Feb 13 19:51:22.750045 kubelet[3219]: I0213 19:51:22.749977 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwjm8\" (UniqueName: \"kubernetes.io/projected/7f135082-5ca3-4925-bcb9-78764085bbf1-kube-api-access-kwjm8\") pod \"7f135082-5ca3-4925-bcb9-78764085bbf1\" (UID: \"7f135082-5ca3-4925-bcb9-78764085bbf1\") " Feb 13 19:51:22.750243 kubelet[3219]: I0213 19:51:22.750058 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f135082-5ca3-4925-bcb9-78764085bbf1-cilium-config-path\") pod \"7f135082-5ca3-4925-bcb9-78764085bbf1\" (UID: \"7f135082-5ca3-4925-bcb9-78764085bbf1\") " Feb 13 19:51:22.757295 kubelet[3219]: I0213 19:51:22.757194 3219 operation_generator.go:780] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f135082-5ca3-4925-bcb9-78764085bbf1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7f135082-5ca3-4925-bcb9-78764085bbf1" (UID: "7f135082-5ca3-4925-bcb9-78764085bbf1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:51:22.758141 kubelet[3219]: I0213 19:51:22.758003 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f135082-5ca3-4925-bcb9-78764085bbf1-kube-api-access-kwjm8" (OuterVolumeSpecName: "kube-api-access-kwjm8") pod "7f135082-5ca3-4925-bcb9-78764085bbf1" (UID: "7f135082-5ca3-4925-bcb9-78764085bbf1"). InnerVolumeSpecName "kube-api-access-kwjm8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:51:22.855175 kubelet[3219]: I0213 19:51:22.850928 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-cilium-run\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 19:51:22.855175 kubelet[3219]: I0213 19:51:22.851019 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-host-proc-sys-net\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 19:51:22.855175 kubelet[3219]: I0213 19:51:22.851071 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27nmp\" (UniqueName: \"kubernetes.io/projected/80795f0c-15af-47cd-acd8-0c80cd0663c0-kube-api-access-27nmp\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 19:51:22.855175 kubelet[3219]: I0213 19:51:22.851093 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:22.855175 kubelet[3219]: I0213 19:51:22.851108 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-cni-path\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 19:51:22.855175 kubelet[3219]: I0213 19:51:22.851144 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-cilium-cgroup\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 19:51:22.855693 kubelet[3219]: I0213 19:51:22.851181 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-hostproc\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 19:51:22.855693 kubelet[3219]: I0213 19:51:22.851215 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-host-proc-sys-kernel\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 19:51:22.855693 kubelet[3219]: I0213 19:51:22.851248 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-xtables-lock\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 19:51:22.855693 kubelet[3219]: I0213 19:51:22.851291 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80795f0c-15af-47cd-acd8-0c80cd0663c0-clustermesh-secrets\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 19:51:22.855693 kubelet[3219]: I0213 19:51:22.851332 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80795f0c-15af-47cd-acd8-0c80cd0663c0-cilium-config-path\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 19:51:22.855693 kubelet[3219]: I0213 19:51:22.851372 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80795f0c-15af-47cd-acd8-0c80cd0663c0-hubble-tls\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 19:51:22.856005 kubelet[3219]: I0213 19:51:22.851451 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-bpf-maps\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 19:51:22.856005 kubelet[3219]: I0213 19:51:22.851489 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-lib-modules\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 
19:51:22.856005 kubelet[3219]: I0213 19:51:22.851526 3219 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-etc-cni-netd\") pod \"80795f0c-15af-47cd-acd8-0c80cd0663c0\" (UID: \"80795f0c-15af-47cd-acd8-0c80cd0663c0\") " Feb 13 19:51:22.856005 kubelet[3219]: I0213 19:51:22.851599 3219 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kwjm8\" (UniqueName: \"kubernetes.io/projected/7f135082-5ca3-4925-bcb9-78764085bbf1-kube-api-access-kwjm8\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.856005 kubelet[3219]: I0213 19:51:22.851624 3219 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-cilium-run\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.856005 kubelet[3219]: I0213 19:51:22.851648 3219 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f135082-5ca3-4925-bcb9-78764085bbf1-cilium-config-path\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.856347 kubelet[3219]: I0213 19:51:22.851697 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:22.856347 kubelet[3219]: I0213 19:51:22.851743 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-cni-path" (OuterVolumeSpecName: "cni-path") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:22.856347 kubelet[3219]: I0213 19:51:22.851778 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:22.856347 kubelet[3219]: I0213 19:51:22.851813 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-hostproc" (OuterVolumeSpecName: "hostproc") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:22.856347 kubelet[3219]: I0213 19:51:22.851847 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:22.856691 kubelet[3219]: I0213 19:51:22.851912 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:22.857354 kubelet[3219]: I0213 19:51:22.857217 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:22.857519 kubelet[3219]: I0213 19:51:22.857479 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:22.857583 kubelet[3219]: I0213 19:51:22.857523 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:51:22.857731 kubelet[3219]: I0213 19:51:22.857695 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80795f0c-15af-47cd-acd8-0c80cd0663c0-kube-api-access-27nmp" (OuterVolumeSpecName: "kube-api-access-27nmp") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "kube-api-access-27nmp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:51:22.863458 kubelet[3219]: I0213 19:51:22.862567 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80795f0c-15af-47cd-acd8-0c80cd0663c0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:51:22.864305 kubelet[3219]: I0213 19:51:22.864258 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80795f0c-15af-47cd-acd8-0c80cd0663c0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 19:51:22.867271 kubelet[3219]: I0213 19:51:22.867207 3219 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80795f0c-15af-47cd-acd8-0c80cd0663c0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "80795f0c-15af-47cd-acd8-0c80cd0663c0" (UID: "80795f0c-15af-47cd-acd8-0c80cd0663c0"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:51:22.952849 kubelet[3219]: I0213 19:51:22.952781 3219 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-bpf-maps\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.952849 kubelet[3219]: I0213 19:51:22.952839 3219 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-lib-modules\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.953057 kubelet[3219]: I0213 19:51:22.952863 3219 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-etc-cni-netd\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.953057 kubelet[3219]: I0213 19:51:22.952886 3219 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-host-proc-sys-net\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.953057 kubelet[3219]: I0213 19:51:22.952909 3219 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-27nmp\" (UniqueName: \"kubernetes.io/projected/80795f0c-15af-47cd-acd8-0c80cd0663c0-kube-api-access-27nmp\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.953057 kubelet[3219]: I0213 19:51:22.952931 3219 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-host-proc-sys-kernel\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.953057 kubelet[3219]: I0213 19:51:22.952959 3219 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-cni-path\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.953057 kubelet[3219]: I0213 19:51:22.952982 3219 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-cilium-cgroup\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.953057 kubelet[3219]: I0213 19:51:22.953034 3219 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-hostproc\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.953057 kubelet[3219]: I0213 19:51:22.953056 3219 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80795f0c-15af-47cd-acd8-0c80cd0663c0-xtables-lock\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.953484 kubelet[3219]: I0213 19:51:22.953077 3219 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80795f0c-15af-47cd-acd8-0c80cd0663c0-clustermesh-secrets\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.953484 kubelet[3219]: I0213 19:51:22.953099 3219 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80795f0c-15af-47cd-acd8-0c80cd0663c0-cilium-config-path\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:22.953484 kubelet[3219]: I0213 19:51:22.953119 3219 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/80795f0c-15af-47cd-acd8-0c80cd0663c0-hubble-tls\") on node \"ip-172-31-26-215\" DevicePath \"\"" Feb 13 19:51:23.226514 kubelet[3219]: I0213 19:51:23.226198 3219 scope.go:117] "RemoveContainer" containerID="ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15" Feb 13 19:51:23.230354 containerd[2015]: time="2025-02-13T19:51:23.229855422Z" level=info msg="RemoveContainer for \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\"" Feb 13 19:51:23.243308 containerd[2015]: time="2025-02-13T19:51:23.243251250Z" level=info msg="RemoveContainer for \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\" returns successfully" Feb 13 19:51:23.244984 systemd[1]: Removed slice kubepods-besteffort-pod7f135082_5ca3_4925_bcb9_78764085bbf1.slice - libcontainer container kubepods-besteffort-pod7f135082_5ca3_4925_bcb9_78764085bbf1.slice. Feb 13 19:51:23.248475 kubelet[3219]: I0213 19:51:23.248045 3219 scope.go:117] "RemoveContainer" containerID="32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1" Feb 13 19:51:23.252539 containerd[2015]: time="2025-02-13T19:51:23.252018690Z" level=info msg="RemoveContainer for \"32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1\"" Feb 13 19:51:23.253419 systemd[1]: Removed slice kubepods-burstable-pod80795f0c_15af_47cd_acd8_0c80cd0663c0.slice - libcontainer container kubepods-burstable-pod80795f0c_15af_47cd_acd8_0c80cd0663c0.slice. Feb 13 19:51:23.253638 systemd[1]: kubepods-burstable-pod80795f0c_15af_47cd_acd8_0c80cd0663c0.slice: Consumed 14.613s CPU time. Feb 13 19:51:23.261805 containerd[2015]: time="2025-02-13T19:51:23.261612054Z" level=info msg="RemoveContainer for \"32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1\" returns successfully" Feb 13 19:51:23.261970 kubelet[3219]: I0213 19:51:23.261947 3219 scope.go:117] "RemoveContainer" containerID="187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9" Feb 13 19:51:23.266878 containerd[2015]: time="2025-02-13T19:51:23.266818854Z" level=info msg="RemoveContainer for \"187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9\"" Feb 13 19:51:23.276027 containerd[2015]: time="2025-02-13T19:51:23.275928198Z" level=info msg="RemoveContainer for \"187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9\" returns successfully" Feb 13 19:51:23.277537 kubelet[3219]: I0213 19:51:23.276361 3219 scope.go:117] "RemoveContainer" containerID="9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a" Feb 13 19:51:23.280898 containerd[2015]: time="2025-02-13T19:51:23.280821234Z" level=info msg="RemoveContainer for \"9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a\"" Feb 13 19:51:23.292424 containerd[2015]: time="2025-02-13T19:51:23.291850362Z" level=info msg="RemoveContainer for \"9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a\" returns successfully" Feb 13 19:51:23.300495 kubelet[3219]: I0213 19:51:23.299975 3219 scope.go:117] "RemoveContainer" containerID="2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe" Feb 13 19:51:23.305918 containerd[2015]: time="2025-02-13T19:51:23.305862306Z" level=info msg="RemoveContainer for \"2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe\"" Feb 13 19:51:23.312682 containerd[2015]: time="2025-02-13T19:51:23.312605514Z" level=info msg="RemoveContainer for \"2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe\" returns successfully" Feb 13 19:51:23.313047 kubelet[3219]: I0213 
19:51:23.313018 3219 scope.go:117] "RemoveContainer" containerID="ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15" Feb 13 19:51:23.313883 containerd[2015]: time="2025-02-13T19:51:23.313796874Z" level=error msg="ContainerStatus for \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\": not found" Feb 13 19:51:23.314076 kubelet[3219]: E0213 19:51:23.314028 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\": not found" containerID="ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15" Feb 13 19:51:23.314179 kubelet[3219]: I0213 19:51:23.314075 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15"} err="failed to get container status \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\": rpc error: code = NotFound desc = an error occurred when try to find container \"ffee8bf25f547d6989f54edbe91e10f29c66add9c5542b7b20ae43b116b9ea15\": not found" Feb 13 19:51:23.314274 kubelet[3219]: I0213 19:51:23.314190 3219 scope.go:117] "RemoveContainer" containerID="32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1" Feb 13 19:51:23.314742 containerd[2015]: time="2025-02-13T19:51:23.314530254Z" level=error msg="ContainerStatus for \"32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1\": not found" Feb 13 19:51:23.314916 kubelet[3219]: E0213 19:51:23.314817 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1\": not found" containerID="32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1" Feb 13 19:51:23.314916 kubelet[3219]: I0213 19:51:23.314871 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1"} err="failed to get container status \"32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"32f1f00666a4642ba99203a7a06c7446265550cb91b56375e4954be464b967a1\": not found" Feb 13 19:51:23.314916 kubelet[3219]: I0213 19:51:23.314910 3219 scope.go:117] "RemoveContainer" containerID="187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9" Feb 13 19:51:23.315708 containerd[2015]: time="2025-02-13T19:51:23.315581970Z" level=error msg="ContainerStatus for \"187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9\": not found" Feb 13 19:51:23.315947 kubelet[3219]: E0213 19:51:23.315824 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9\": not found" containerID="187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9" Feb 13 19:51:23.315947 kubelet[3219]: I0213 19:51:23.315866 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9"} err="failed to get container status \"187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"187e2fd9ca905d97be12e67595106cc044e52742b28114f41212af0717ef68f9\": not found" Feb 13 19:51:23.315947 kubelet[3219]: I0213 19:51:23.315904 3219 scope.go:117] "RemoveContainer" containerID="9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a" Feb 13 19:51:23.316804 containerd[2015]: time="2025-02-13T19:51:23.316634274Z" level=error msg="ContainerStatus for \"9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a\": not found" Feb 13 19:51:23.316980 kubelet[3219]: E0213 19:51:23.316863 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a\": not found" containerID="9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a" Feb 13 19:51:23.316980 kubelet[3219]: I0213 19:51:23.316906 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a"} err="failed to get container status \"9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9f01973560e2405bd262282841ecc5356322c3080592699b6ca2b753b504c42a\": not found" Feb 13 19:51:23.316980 kubelet[3219]: I0213 19:51:23.316939 3219 scope.go:117] "RemoveContainer" containerID="2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe" Feb 13 19:51:23.317415 containerd[2015]: time="2025-02-13T19:51:23.317247234Z" level=error msg="ContainerStatus for \"2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe\": not found" Feb 13 19:51:23.317694 kubelet[3219]: E0213 19:51:23.317653 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe\": not found" containerID="2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe" Feb 13 19:51:23.317777 kubelet[3219]: I0213 19:51:23.317702 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe"} err="failed to get container status \"2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e49273d3e313889f87ee90d2b11c23e7e9d6d9f6e3e59620ddf9bd4598603fe\": not found" Feb 13 19:51:23.317777 kubelet[3219]: I0213 19:51:23.317739 3219 scope.go:117] 
"RemoveContainer" containerID="a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d" Feb 13 19:51:23.320124 containerd[2015]: time="2025-02-13T19:51:23.319663662Z" level=info msg="RemoveContainer for \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\"" Feb 13 19:51:23.325934 containerd[2015]: time="2025-02-13T19:51:23.325874934Z" level=info msg="RemoveContainer for \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\" returns successfully" Feb 13 19:51:23.326501 kubelet[3219]: I0213 19:51:23.326453 3219 scope.go:117] "RemoveContainer" containerID="a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d" Feb 13 19:51:23.327139 containerd[2015]: time="2025-02-13T19:51:23.326845722Z" level=error msg="ContainerStatus for \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\": not found" Feb 13 19:51:23.327477 kubelet[3219]: E0213 19:51:23.327345 3219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\": not found" containerID="a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d" Feb 13 19:51:23.327477 kubelet[3219]: I0213 19:51:23.327412 3219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d"} err="failed to get container status \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5ed7b7742f2fadb9de082c51cdaa047d9f2c875c1404ee6a0474aa47d43164d\": not found" Feb 13 19:51:23.363648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22-rootfs.mount: Deactivated successfully. Feb 13 19:51:23.363823 systemd[1]: var-lib-kubelet-pods-80795f0c\x2d15af\x2d47cd\x2dacd8\x2d0c80cd0663c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d27nmp.mount: Deactivated successfully. Feb 13 19:51:23.363962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0-rootfs.mount: Deactivated successfully. Feb 13 19:51:23.364094 systemd[1]: var-lib-kubelet-pods-7f135082\x2d5ca3\x2d4925\x2dbcb9\x2d78764085bbf1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkwjm8.mount: Deactivated successfully. Feb 13 19:51:23.364229 systemd[1]: var-lib-kubelet-pods-80795f0c\x2d15af\x2d47cd\x2dacd8\x2d0c80cd0663c0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:51:23.364368 systemd[1]: var-lib-kubelet-pods-80795f0c\x2d15af\x2d47cd\x2dacd8\x2d0c80cd0663c0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:51:24.281467 sshd[5097]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:24.286577 systemd[1]: sshd@25-172.31.26.215:22-139.178.89.65:43326.service: Deactivated successfully. Feb 13 19:51:24.291001 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:51:24.291871 systemd[1]: session-26.scope: Consumed 1.515s CPU time. Feb 13 19:51:24.294562 systemd-logind[1994]: Session 26 logged out. Waiting for processes to exit. 
Feb 13 19:51:24.297151 systemd-logind[1994]: Removed session 26. Feb 13 19:51:24.318995 systemd[1]: Started sshd@26-172.31.26.215:22-139.178.89.65:43340.service - OpenSSH per-connection server daemon (139.178.89.65:43340). Feb 13 19:51:24.499663 sshd[5259]: Accepted publickey for core from 139.178.89.65 port 43340 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:24.502851 sshd[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:24.511436 systemd-logind[1994]: New session 27 of user core. Feb 13 19:51:24.519691 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:51:24.736731 kubelet[3219]: I0213 19:51:24.736599 3219 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f135082-5ca3-4925-bcb9-78764085bbf1" path="/var/lib/kubelet/pods/7f135082-5ca3-4925-bcb9-78764085bbf1/volumes" Feb 13 19:51:24.739420 kubelet[3219]: I0213 19:51:24.738557 3219 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80795f0c-15af-47cd-acd8-0c80cd0663c0" path="/var/lib/kubelet/pods/80795f0c-15af-47cd-acd8-0c80cd0663c0/volumes" Feb 13 19:51:24.814545 ntpd[1988]: Deleting interface #12 lxc_health, fe80::11:4fff:fe79:e3a6%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs Feb 13 19:51:24.815091 ntpd[1988]: 13 Feb 19:51:24 ntpd[1988]: Deleting interface #12 lxc_health, fe80::11:4fff:fe79:e3a6%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs Feb 13 19:51:25.732701 sshd[5259]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:25.741549 systemd[1]: sshd@26-172.31.26.215:22-139.178.89.65:43340.service: Deactivated successfully. Feb 13 19:51:25.749571 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:51:25.752626 systemd[1]: session-27.scope: Consumed 1.004s CPU time. Feb 13 19:51:25.760556 systemd-logind[1994]: Session 27 logged out. Waiting for processes to exit. Feb 13 19:51:25.782333 kubelet[3219]: I0213 19:51:25.781901 3219 memory_manager.go:355] "RemoveStaleState removing state" podUID="7f135082-5ca3-4925-bcb9-78764085bbf1" containerName="cilium-operator" Feb 13 19:51:25.782333 kubelet[3219]: I0213 19:51:25.781941 3219 memory_manager.go:355] "RemoveStaleState removing state" podUID="80795f0c-15af-47cd-acd8-0c80cd0663c0" containerName="cilium-agent" Feb 13 19:51:25.791018 systemd[1]: Started sshd@27-172.31.26.215:22-139.178.89.65:37144.service - OpenSSH per-connection server daemon (139.178.89.65:37144). Feb 13 19:51:25.795579 systemd-logind[1994]: Removed session 27. Feb 13 19:51:25.816760 systemd[1]: Created slice kubepods-burstable-pod0c09c2ce_27f1_48ba_8c1d_090c4b5b800e.slice - libcontainer container kubepods-burstable-pod0c09c2ce_27f1_48ba_8c1d_090c4b5b800e.slice. 
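The slice names in these records are derived mechanically from the pod's QoS class and UID, with dashes in the UID mapped to underscores because "-" is systemd's slice-hierarchy separator. A one-liner that reproduces both the removed besteffort slice and the newly created burstable slice seen above:

    def pod_slice(qos: str, uid: str) -> str:
        """Rebuild the kubepods slice name as it appears in this journal."""
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    print(pod_slice("burstable", "0c09c2ce-27f1-48ba-8c1d-090c4b5b800e"))
    # kubepods-burstable-pod0c09c2ce_27f1_48ba_8c1d_090c4b5b800e.slice
    print(pod_slice("besteffort", "7f135082-5ca3-4925-bcb9-78764085bbf1"))
    # kubepods-besteffort-pod7f135082_5ca3_4925_bcb9_78764085bbf1.slice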
Feb 13 19:51:25.879015 kubelet[3219]: I0213 19:51:25.878959 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-host-proc-sys-kernel\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.879410 kubelet[3219]: I0213 19:51:25.879331 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-bpf-maps\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.879713 kubelet[3219]: I0213 19:51:25.879552 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-xtables-lock\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.879713 kubelet[3219]: I0213 19:51:25.879636 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-cilium-config-path\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.880022 kubelet[3219]: I0213 19:51:25.879682 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-cilium-run\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.880226 kubelet[3219]: I0213 19:51:25.880110 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-host-proc-sys-net\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.880405 kubelet[3219]: I0213 19:51:25.880316 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-cni-path\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.880405 kubelet[3219]: I0213 19:51:25.880365 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-etc-cni-netd\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.880729 kubelet[3219]: I0213 19:51:25.880582 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-lib-modules\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.880729 kubelet[3219]: I0213 19:51:25.880671 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzn4b\" (UniqueName: 
\"kubernetes.io/projected/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-kube-api-access-nzn4b\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.881029 kubelet[3219]: I0213 19:51:25.880883 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-clustermesh-secrets\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.881029 kubelet[3219]: I0213 19:51:25.880961 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-cilium-ipsec-secrets\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.881247 kubelet[3219]: I0213 19:51:25.881000 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-hubble-tls\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.881247 kubelet[3219]: I0213 19:51:25.881203 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-cilium-cgroup\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:25.881512 kubelet[3219]: I0213 19:51:25.881422 3219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c09c2ce-27f1-48ba-8c1d-090c4b5b800e-hostproc\") pod \"cilium-8dsj7\" (UID: \"0c09c2ce-27f1-48ba-8c1d-090c4b5b800e\") " pod="kube-system/cilium-8dsj7" Feb 13 19:51:26.003481 sshd[5271]: Accepted publickey for core from 139.178.89.65 port 37144 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:26.011022 sshd[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:26.060507 systemd-logind[1994]: New session 28 of user core. Feb 13 19:51:26.070664 systemd[1]: Started session-28.scope - Session 28 of User core. Feb 13 19:51:26.127375 containerd[2015]: time="2025-02-13T19:51:26.126766712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8dsj7,Uid:0c09c2ce-27f1-48ba-8c1d-090c4b5b800e,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:26.179507 containerd[2015]: time="2025-02-13T19:51:26.179219457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:26.179507 containerd[2015]: time="2025-02-13T19:51:26.179302653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:26.179507 containerd[2015]: time="2025-02-13T19:51:26.179327685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:26.179964 containerd[2015]: time="2025-02-13T19:51:26.179647065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:26.192947 sshd[5271]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:26.203852 systemd[1]: sshd@27-172.31.26.215:22-139.178.89.65:37144.service: Deactivated successfully. Feb 13 19:51:26.205500 systemd-logind[1994]: Session 28 logged out. Waiting for processes to exit. Feb 13 19:51:26.210971 systemd[1]: session-28.scope: Deactivated successfully. Feb 13 19:51:26.231836 systemd-logind[1994]: Removed session 28. Feb 13 19:51:26.239737 systemd[1]: Started cri-containerd-37a183f05aed6b2c0215160464dec57782a4c10130c7bec3aeb0f0daa474e50f.scope - libcontainer container 37a183f05aed6b2c0215160464dec57782a4c10130c7bec3aeb0f0daa474e50f. Feb 13 19:51:26.244819 systemd[1]: Started sshd@28-172.31.26.215:22-139.178.89.65:37160.service - OpenSSH per-connection server daemon (139.178.89.65:37160). Feb 13 19:51:26.297928 containerd[2015]: time="2025-02-13T19:51:26.297848493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8dsj7,Uid:0c09c2ce-27f1-48ba-8c1d-090c4b5b800e,Namespace:kube-system,Attempt:0,} returns sandbox id \"37a183f05aed6b2c0215160464dec57782a4c10130c7bec3aeb0f0daa474e50f\"" Feb 13 19:51:26.304450 containerd[2015]: time="2025-02-13T19:51:26.304357125Z" level=info msg="CreateContainer within sandbox \"37a183f05aed6b2c0215160464dec57782a4c10130c7bec3aeb0f0daa474e50f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:51:26.330845 containerd[2015]: time="2025-02-13T19:51:26.330659313Z" level=info msg="CreateContainer within sandbox \"37a183f05aed6b2c0215160464dec57782a4c10130c7bec3aeb0f0daa474e50f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"91668f9b3d17a65ab7b4daa1a68f6cdfdbf2dd6215c89c89cbc7b67cea776b71\"" Feb 13 19:51:26.332822 containerd[2015]: time="2025-02-13T19:51:26.332751345Z" level=info msg="StartContainer for \"91668f9b3d17a65ab7b4daa1a68f6cdfdbf2dd6215c89c89cbc7b67cea776b71\"" Feb 13 19:51:26.383709 systemd[1]: Started cri-containerd-91668f9b3d17a65ab7b4daa1a68f6cdfdbf2dd6215c89c89cbc7b67cea776b71.scope - libcontainer container 91668f9b3d17a65ab7b4daa1a68f6cdfdbf2dd6215c89c89cbc7b67cea776b71. Feb 13 19:51:26.435810 containerd[2015]: time="2025-02-13T19:51:26.435711586Z" level=info msg="StartContainer for \"91668f9b3d17a65ab7b4daa1a68f6cdfdbf2dd6215c89c89cbc7b67cea776b71\" returns successfully" Feb 13 19:51:26.449529 sshd[5310]: Accepted publickey for core from 139.178.89.65 port 37160 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:26.451831 sshd[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:26.458169 systemd[1]: cri-containerd-91668f9b3d17a65ab7b4daa1a68f6cdfdbf2dd6215c89c89cbc7b67cea776b71.scope: Deactivated successfully. Feb 13 19:51:26.468080 systemd-logind[1994]: New session 29 of user core. Feb 13 19:51:26.474075 systemd[1]: Started session-29.scope - Session 29 of User core. 
Feb 13 19:51:26.513634 containerd[2015]: time="2025-02-13T19:51:26.513526750Z" level=info msg="shim disconnected" id=91668f9b3d17a65ab7b4daa1a68f6cdfdbf2dd6215c89c89cbc7b67cea776b71 namespace=k8s.io Feb 13 19:51:26.513634 containerd[2015]: time="2025-02-13T19:51:26.513625498Z" level=warning msg="cleaning up after shim disconnected" id=91668f9b3d17a65ab7b4daa1a68f6cdfdbf2dd6215c89c89cbc7b67cea776b71 namespace=k8s.io Feb 13 19:51:26.513932 containerd[2015]: time="2025-02-13T19:51:26.513647890Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:27.004523 kubelet[3219]: E0213 19:51:27.004459 3219 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:51:27.261106 containerd[2015]: time="2025-02-13T19:51:27.256973746Z" level=info msg="CreateContainer within sandbox \"37a183f05aed6b2c0215160464dec57782a4c10130c7bec3aeb0f0daa474e50f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:51:27.286370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3872131254.mount: Deactivated successfully. Feb 13 19:51:27.289743 containerd[2015]: time="2025-02-13T19:51:27.288364402Z" level=info msg="CreateContainer within sandbox \"37a183f05aed6b2c0215160464dec57782a4c10130c7bec3aeb0f0daa474e50f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"734f0fa77ad150f281c31a910ddb592e2a6816b75f431a5a6b201068f6f455d4\"" Feb 13 19:51:27.291352 containerd[2015]: time="2025-02-13T19:51:27.291275014Z" level=info msg="StartContainer for \"734f0fa77ad150f281c31a910ddb592e2a6816b75f431a5a6b201068f6f455d4\"" Feb 13 19:51:27.351709 systemd[1]: Started cri-containerd-734f0fa77ad150f281c31a910ddb592e2a6816b75f431a5a6b201068f6f455d4.scope - libcontainer container 734f0fa77ad150f281c31a910ddb592e2a6816b75f431a5a6b201068f6f455d4. Feb 13 19:51:27.405667 containerd[2015]: time="2025-02-13T19:51:27.402882455Z" level=info msg="StartContainer for \"734f0fa77ad150f281c31a910ddb592e2a6816b75f431a5a6b201068f6f455d4\" returns successfully" Feb 13 19:51:27.414160 systemd[1]: cri-containerd-734f0fa77ad150f281c31a910ddb592e2a6816b75f431a5a6b201068f6f455d4.scope: Deactivated successfully. Feb 13 19:51:27.457668 containerd[2015]: time="2025-02-13T19:51:27.457585307Z" level=info msg="shim disconnected" id=734f0fa77ad150f281c31a910ddb592e2a6816b75f431a5a6b201068f6f455d4 namespace=k8s.io Feb 13 19:51:27.457668 containerd[2015]: time="2025-02-13T19:51:27.457660223Z" level=warning msg="cleaning up after shim disconnected" id=734f0fa77ad150f281c31a910ddb592e2a6816b75f431a5a6b201068f6f455d4 namespace=k8s.io Feb 13 19:51:27.458143 containerd[2015]: time="2025-02-13T19:51:27.457680479Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:28.004952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-734f0fa77ad150f281c31a910ddb592e2a6816b75f431a5a6b201068f6f455d4-rootfs.mount: Deactivated successfully. 
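The "Container runtime network not ready ... cni plugin not initialized" errors here are expected at this point in the log: the outgoing agent removed /etc/cni/net.d/05-cilium.conf (the REMOVE fs event recorded at 19:51:22), so the runtime has no CNI configuration until the replacement pod writes one, and the kubelet soon flips the node's Ready condition to False (visible below at 19:51:29). A tiny check over that condition object, using the JSON fields exactly as they appear in the journal:

    import json

    cond = json.loads(
        '{"type":"Ready","status":"False",'
        '"lastHeartbeatTime":"2025-02-13T19:51:29Z",'
        '"lastTransitionTime":"2025-02-13T19:51:29Z",'
        '"reason":"KubeletNotReady",'
        '"message":"container runtime network not ready: NetworkReady=false '
        'reason:NetworkPluginNotReady message:Network plugin returns error: '
        'cni plugin not initialized"}'
    )
    node_ready = cond["type"] == "Ready" and cond["status"] == "True"
    print(node_ready, "-", cond["reason"])   # False - KubeletNotReady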
Feb 13 19:51:28.264612 containerd[2015]: time="2025-02-13T19:51:28.264045551Z" level=info msg="CreateContainer within sandbox \"37a183f05aed6b2c0215160464dec57782a4c10130c7bec3aeb0f0daa474e50f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:51:28.296276 containerd[2015]: time="2025-02-13T19:51:28.296191499Z" level=info msg="CreateContainer within sandbox \"37a183f05aed6b2c0215160464dec57782a4c10130c7bec3aeb0f0daa474e50f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eae5dbf83626ec2861db950861abdc777e3d4cfc40f305b5be6b7a37d4c2982e\""
Feb 13 19:51:28.300683 containerd[2015]: time="2025-02-13T19:51:28.297333791Z" level=info msg="StartContainer for \"eae5dbf83626ec2861db950861abdc777e3d4cfc40f305b5be6b7a37d4c2982e\""
Feb 13 19:51:28.366723 systemd[1]: Started cri-containerd-eae5dbf83626ec2861db950861abdc777e3d4cfc40f305b5be6b7a37d4c2982e.scope - libcontainer container eae5dbf83626ec2861db950861abdc777e3d4cfc40f305b5be6b7a37d4c2982e.
Feb 13 19:51:28.421490 containerd[2015]: time="2025-02-13T19:51:28.421235064Z" level=info msg="StartContainer for \"eae5dbf83626ec2861db950861abdc777e3d4cfc40f305b5be6b7a37d4c2982e\" returns successfully"
Feb 13 19:51:28.422902 systemd[1]: cri-containerd-eae5dbf83626ec2861db950861abdc777e3d4cfc40f305b5be6b7a37d4c2982e.scope: Deactivated successfully.
Feb 13 19:51:28.469268 containerd[2015]: time="2025-02-13T19:51:28.469176792Z" level=info msg="shim disconnected" id=eae5dbf83626ec2861db950861abdc777e3d4cfc40f305b5be6b7a37d4c2982e namespace=k8s.io
Feb 13 19:51:28.469268 containerd[2015]: time="2025-02-13T19:51:28.469253076Z" level=warning msg="cleaning up after shim disconnected" id=eae5dbf83626ec2861db950861abdc777e3d4cfc40f305b5be6b7a37d4c2982e namespace=k8s.io
Feb 13 19:51:28.469268 containerd[2015]: time="2025-02-13T19:51:28.469273284Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:28.726315 kubelet[3219]: E0213 19:51:28.726005 3219 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dw52m" podUID="71862c2e-28e2-4ab4-8030-c0125e764f00"
Feb 13 19:51:29.004761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eae5dbf83626ec2861db950861abdc777e3d4cfc40f305b5be6b7a37d4c2982e-rootfs.mount: Deactivated successfully.
Feb 13 19:51:29.269235 containerd[2015]: time="2025-02-13T19:51:29.268535928Z" level=info msg="CreateContainer within sandbox \"37a183f05aed6b2c0215160464dec57782a4c10130c7bec3aeb0f0daa474e50f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:51:29.300338 containerd[2015]: time="2025-02-13T19:51:29.300263136Z" level=info msg="CreateContainer within sandbox \"37a183f05aed6b2c0215160464dec57782a4c10130c7bec3aeb0f0daa474e50f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"75752ba88e81cfb99e035497002fe45eb0906121665309ae3dfeda165efaec8f\""
Feb 13 19:51:29.303566 containerd[2015]: time="2025-02-13T19:51:29.301264056Z" level=info msg="StartContainer for \"75752ba88e81cfb99e035497002fe45eb0906121665309ae3dfeda165efaec8f\""
Feb 13 19:51:29.356771 systemd[1]: Started cri-containerd-75752ba88e81cfb99e035497002fe45eb0906121665309ae3dfeda165efaec8f.scope - libcontainer container 75752ba88e81cfb99e035497002fe45eb0906121665309ae3dfeda165efaec8f.
Feb 13 19:51:29.398059 systemd[1]: cri-containerd-75752ba88e81cfb99e035497002fe45eb0906121665309ae3dfeda165efaec8f.scope: Deactivated successfully.
Feb 13 19:51:29.403916 containerd[2015]: time="2025-02-13T19:51:29.403861333Z" level=info msg="StartContainer for \"75752ba88e81cfb99e035497002fe45eb0906121665309ae3dfeda165efaec8f\" returns successfully"
Feb 13 19:51:29.446585 containerd[2015]: time="2025-02-13T19:51:29.446356189Z" level=info msg="shim disconnected" id=75752ba88e81cfb99e035497002fe45eb0906121665309ae3dfeda165efaec8f namespace=k8s.io
Feb 13 19:51:29.447212 containerd[2015]: time="2025-02-13T19:51:29.447141577Z" level=warning msg="cleaning up after shim disconnected" id=75752ba88e81cfb99e035497002fe45eb0906121665309ae3dfeda165efaec8f namespace=k8s.io
Feb 13 19:51:29.447212 containerd[2015]: time="2025-02-13T19:51:29.447201757Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:29.834537 kubelet[3219]: I0213 19:51:29.834126 3219 setters.go:602] "Node became not ready" node="ip-172-31-26-215" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:51:29Z","lastTransitionTime":"2025-02-13T19:51:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:51:30.004939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75752ba88e81cfb99e035497002fe45eb0906121665309ae3dfeda165efaec8f-rootfs.mount: Deactivated successfully.
Feb 13 19:51:30.279210 containerd[2015]: time="2025-02-13T19:51:30.278219749Z" level=info msg="CreateContainer within sandbox \"37a183f05aed6b2c0215160464dec57782a4c10130c7bec3aeb0f0daa474e50f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:51:30.313276 containerd[2015]: time="2025-02-13T19:51:30.312818797Z" level=info msg="CreateContainer within sandbox \"37a183f05aed6b2c0215160464dec57782a4c10130c7bec3aeb0f0daa474e50f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2e01310e7dcc8527b04ff6f308870f437de7abbc465538fd6efd9c06d12839e1\""
Feb 13 19:51:30.314958 containerd[2015]: time="2025-02-13T19:51:30.314885113Z" level=info msg="StartContainer for \"2e01310e7dcc8527b04ff6f308870f437de7abbc465538fd6efd9c06d12839e1\""
Feb 13 19:51:30.377728 systemd[1]: Started cri-containerd-2e01310e7dcc8527b04ff6f308870f437de7abbc465538fd6efd9c06d12839e1.scope - libcontainer container 2e01310e7dcc8527b04ff6f308870f437de7abbc465538fd6efd9c06d12839e1.
Feb 13 19:51:30.466463 containerd[2015]: time="2025-02-13T19:51:30.466365866Z" level=info msg="StartContainer for \"2e01310e7dcc8527b04ff6f308870f437de7abbc465538fd6efd9c06d12839e1\" returns successfully"
Feb 13 19:51:30.728684 kubelet[3219]: E0213 19:51:30.728109 3219 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-dw52m" podUID="71862c2e-28e2-4ab4-8030-c0125e764f00"
Feb 13 19:51:31.030154 systemd[1]: run-containerd-runc-k8s.io-2e01310e7dcc8527b04ff6f308870f437de7abbc465538fd6efd9c06d12839e1-runc.FAwazr.mount: Deactivated successfully.
Feb 13 19:51:31.259816 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:51:31.318219 kubelet[3219]: I0213 19:51:31.318011 3219 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8dsj7" podStartSLOduration=6.31798973 podStartE2EDuration="6.31798973s" podCreationTimestamp="2025-02-13 19:51:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:31.317838782 +0000 UTC m=+114.884151608" watchObservedRunningTime="2025-02-13 19:51:31.31798973 +0000 UTC m=+114.884302568"
Feb 13 19:51:35.503276 systemd-networkd[1922]: lxc_health: Link UP
Feb 13 19:51:35.511815 (udev-worker)[6114]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:51:35.520514 systemd-networkd[1922]: lxc_health: Gained carrier
Feb 13 19:51:36.795291 containerd[2015]: time="2025-02-13T19:51:36.795201861Z" level=info msg="StopPodSandbox for \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\""
Feb 13 19:51:36.796318 containerd[2015]: time="2025-02-13T19:51:36.795362781Z" level=info msg="TearDown network for sandbox \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\" successfully"
Feb 13 19:51:36.796318 containerd[2015]: time="2025-02-13T19:51:36.795406965Z" level=info msg="StopPodSandbox for \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\" returns successfully"
Feb 13 19:51:36.799475 containerd[2015]: time="2025-02-13T19:51:36.797954733Z" level=info msg="RemovePodSandbox for \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\""
Feb 13 19:51:36.799475 containerd[2015]: time="2025-02-13T19:51:36.798040029Z" level=info msg="Forcibly stopping sandbox \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\""
Feb 13 19:51:36.799475 containerd[2015]: time="2025-02-13T19:51:36.798206409Z" level=info msg="TearDown network for sandbox \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\" successfully"
Feb 13 19:51:36.806312 containerd[2015]: time="2025-02-13T19:51:36.806212521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:51:36.806640 containerd[2015]: time="2025-02-13T19:51:36.806598489Z" level=info msg="RemovePodSandbox \"272499e6ad2c5c580e52bcfb8d82f8fcfb35765e47cecd912e8ba53510bf7dc0\" returns successfully"
Feb 13 19:51:36.808034 containerd[2015]: time="2025-02-13T19:51:36.807982749Z" level=info msg="StopPodSandbox for \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\""
Feb 13 19:51:36.808331 containerd[2015]: time="2025-02-13T19:51:36.808295433Z" level=info msg="TearDown network for sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" successfully"
Feb 13 19:51:36.808609 containerd[2015]: time="2025-02-13T19:51:36.808453929Z" level=info msg="StopPodSandbox for \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" returns successfully"
Feb 13 19:51:36.809785 containerd[2015]: time="2025-02-13T19:51:36.809533053Z" level=info msg="RemovePodSandbox for \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\""
Feb 13 19:51:36.810240 containerd[2015]: time="2025-02-13T19:51:36.809895501Z" level=info msg="Forcibly stopping sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\""
Feb 13 19:51:36.810587 containerd[2015]: time="2025-02-13T19:51:36.810378417Z" level=info msg="TearDown network for sandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" successfully"
Feb 13 19:51:36.819599 containerd[2015]: time="2025-02-13T19:51:36.819332577Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:51:36.819599 containerd[2015]: time="2025-02-13T19:51:36.819460869Z" level=info msg="RemovePodSandbox \"57ef8087e404a70759b9e5103e2270655b4c98b9b8d0efafe981bc25e7ae2d22\" returns successfully"
Feb 13 19:51:37.456760 systemd-networkd[1922]: lxc_health: Gained IPv6LL
Feb 13 19:51:39.814873 ntpd[1988]: Listen normally on 15 lxc_health [fe80::fc38:59ff:feaf:fbcb%14]:123
Feb 13 19:51:39.817786 ntpd[1988]: 13 Feb 19:51:39 ntpd[1988]: Listen normally on 15 lxc_health [fe80::fc38:59ff:feaf:fbcb%14]:123
Feb 13 19:51:42.214207 systemd[1]: run-containerd-runc-k8s.io-2e01310e7dcc8527b04ff6f308870f437de7abbc465538fd6efd9c06d12839e1-runc.62zJg1.mount: Deactivated successfully.
Feb 13 19:51:42.334028 sshd[5310]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:42.342687 systemd[1]: sshd@28-172.31.26.215:22-139.178.89.65:37160.service: Deactivated successfully.
Feb 13 19:51:42.350545 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:51:42.354680 systemd-logind[1994]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:51:42.358027 systemd-logind[1994]: Removed session 29.
Feb 13 19:51:56.830147 systemd[1]: cri-containerd-1b4f3b8e2945dfc818d9815036120924812bd393bee522d6a13c4df97c3f73cb.scope: Deactivated successfully.
Feb 13 19:51:56.831656 systemd[1]: cri-containerd-1b4f3b8e2945dfc818d9815036120924812bd393bee522d6a13c4df97c3f73cb.scope: Consumed 5.145s CPU time, 17.4M memory peak, 0B memory swap peak.
Feb 13 19:51:56.876161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b4f3b8e2945dfc818d9815036120924812bd393bee522d6a13c4df97c3f73cb-rootfs.mount: Deactivated successfully.
Feb 13 19:51:56.887508 containerd[2015]: time="2025-02-13T19:51:56.887331869Z" level=info msg="shim disconnected" id=1b4f3b8e2945dfc818d9815036120924812bd393bee522d6a13c4df97c3f73cb namespace=k8s.io
Feb 13 19:51:56.887508 containerd[2015]: time="2025-02-13T19:51:56.887456573Z" level=warning msg="cleaning up after shim disconnected" id=1b4f3b8e2945dfc818d9815036120924812bd393bee522d6a13c4df97c3f73cb namespace=k8s.io
Feb 13 19:51:56.887508 containerd[2015]: time="2025-02-13T19:51:56.887479169Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:51:57.366213 kubelet[3219]: I0213 19:51:57.366135 3219 scope.go:117] "RemoveContainer" containerID="1b4f3b8e2945dfc818d9815036120924812bd393bee522d6a13c4df97c3f73cb"
Feb 13 19:51:57.368970 containerd[2015]: time="2025-02-13T19:51:57.368879799Z" level=info msg="CreateContainer within sandbox \"f357d349c4174561b1cb1c467b1a3bb49dfa82bc492c1020fdaf13c23d7bf517\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 19:51:57.393199 containerd[2015]: time="2025-02-13T19:51:57.393139864Z" level=info msg="CreateContainer within sandbox \"f357d349c4174561b1cb1c467b1a3bb49dfa82bc492c1020fdaf13c23d7bf517\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"786bc9037a47ee3e94c7c6aff2b2a6b4b0d50955f8471b1bb1296e6ce249f98f\""
Feb 13 19:51:57.394116 containerd[2015]: time="2025-02-13T19:51:57.394040776Z" level=info msg="StartContainer for \"786bc9037a47ee3e94c7c6aff2b2a6b4b0d50955f8471b1bb1296e6ce249f98f\""
Feb 13 19:51:57.447430 systemd[1]: run-containerd-runc-k8s.io-786bc9037a47ee3e94c7c6aff2b2a6b4b0d50955f8471b1bb1296e6ce249f98f-runc.Eh4mVQ.mount: Deactivated successfully.
Feb 13 19:51:57.459716 systemd[1]: Started cri-containerd-786bc9037a47ee3e94c7c6aff2b2a6b4b0d50955f8471b1bb1296e6ce249f98f.scope - libcontainer container 786bc9037a47ee3e94c7c6aff2b2a6b4b0d50955f8471b1bb1296e6ce249f98f.
Feb 13 19:51:57.528479 containerd[2015]: time="2025-02-13T19:51:57.528263236Z" level=info msg="StartContainer for \"786bc9037a47ee3e94c7c6aff2b2a6b4b0d50955f8471b1bb1296e6ce249f98f\" returns successfully"
Feb 13 19:51:58.650008 kubelet[3219]: E0213 19:51:58.649283 3219 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-215?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:52:01.190627 systemd[1]: cri-containerd-f5746dda18459fb2e5e8774283fbf79326aa13173d4dc0fd13073b2d17ea68e3.scope: Deactivated successfully.
Feb 13 19:52:01.191100 systemd[1]: cri-containerd-f5746dda18459fb2e5e8774283fbf79326aa13173d4dc0fd13073b2d17ea68e3.scope: Consumed 5.285s CPU time, 15.7M memory peak, 0B memory swap peak.
Feb 13 19:52:01.237914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5746dda18459fb2e5e8774283fbf79326aa13173d4dc0fd13073b2d17ea68e3-rootfs.mount: Deactivated successfully.
Feb 13 19:52:01.248300 containerd[2015]: time="2025-02-13T19:52:01.248224363Z" level=info msg="shim disconnected" id=f5746dda18459fb2e5e8774283fbf79326aa13173d4dc0fd13073b2d17ea68e3 namespace=k8s.io
Feb 13 19:52:01.248960 containerd[2015]: time="2025-02-13T19:52:01.248829883Z" level=warning msg="cleaning up after shim disconnected" id=f5746dda18459fb2e5e8774283fbf79326aa13173d4dc0fd13073b2d17ea68e3 namespace=k8s.io
Feb 13 19:52:01.248960 containerd[2015]: time="2025-02-13T19:52:01.248861491Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:52:01.385117 kubelet[3219]: I0213 19:52:01.384378 3219 scope.go:117] "RemoveContainer" containerID="f5746dda18459fb2e5e8774283fbf79326aa13173d4dc0fd13073b2d17ea68e3"
Feb 13 19:52:01.388793 containerd[2015]: time="2025-02-13T19:52:01.388223611Z" level=info msg="CreateContainer within sandbox \"d2d0ff414c2266f0f78d87e4ba10711ab3d696411c27a41b052260565487edc6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 19:52:01.429459 containerd[2015]: time="2025-02-13T19:52:01.429063728Z" level=info msg="CreateContainer within sandbox \"d2d0ff414c2266f0f78d87e4ba10711ab3d696411c27a41b052260565487edc6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2490f4230952e1c4d8654388fc0abb58179a5b3a9480742f5d065c67de1fcb79\""
Feb 13 19:52:01.432116 containerd[2015]: time="2025-02-13T19:52:01.430114508Z" level=info msg="StartContainer for \"2490f4230952e1c4d8654388fc0abb58179a5b3a9480742f5d065c67de1fcb79\""
Feb 13 19:52:01.504721 systemd[1]: Started cri-containerd-2490f4230952e1c4d8654388fc0abb58179a5b3a9480742f5d065c67de1fcb79.scope - libcontainer container 2490f4230952e1c4d8654388fc0abb58179a5b3a9480742f5d065c67de1fcb79.
Feb 13 19:52:01.583812 containerd[2015]: time="2025-02-13T19:52:01.583754372Z" level=info msg="StartContainer for \"2490f4230952e1c4d8654388fc0abb58179a5b3a9480742f5d065c67de1fcb79\" returns successfully"
Feb 13 19:52:02.237642 systemd[1]: run-containerd-runc-k8s.io-2490f4230952e1c4d8654388fc0abb58179a5b3a9480742f5d065c67de1fcb79-runc.blfIoe.mount: Deactivated successfully.
Feb 13 19:52:08.650994 kubelet[3219]: E0213 19:52:08.650509 3219 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.215:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-215?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"