Jan 13 20:08:19.168593 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 13 20:08:19.168637 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:56:28 -00 2025
Jan 13 20:08:19.168661 kernel: KASLR disabled due to lack of seed
Jan 13 20:08:19.168677 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:08:19.168693 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Jan 13 20:08:19.168708 kernel: secureboot: Secure boot disabled
Jan 13 20:08:19.168725 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:08:19.168740 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 13 20:08:19.168756 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 13 20:08:19.168771 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 20:08:19.168791 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 13 20:08:19.168806 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 20:08:19.168822 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 13 20:08:19.168837 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 13 20:08:19.168855 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 13 20:08:19.168876 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 20:08:19.168892 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 13 20:08:19.168909 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 13 20:08:19.168925 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 13 20:08:19.168941 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 13 20:08:19.168957 kernel: printk: bootconsole [uart0] enabled
Jan 13 20:08:19.168973 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:08:19.168989 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 20:08:19.169005 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 13 20:08:19.169021 kernel: Zone ranges:
Jan 13 20:08:19.169037 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 20:08:19.169057 kernel:   DMA32    empty
Jan 13 20:08:19.169073 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 13 20:08:19.169089 kernel: Movable zone start for each node
Jan 13 20:08:19.169105 kernel: Early memory node ranges
Jan 13 20:08:19.169144 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 13 20:08:19.169164 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 13 20:08:19.169181 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Jan 13 20:08:19.169198 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 13 20:08:19.169214 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 13 20:08:19.169231 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 13 20:08:19.169249 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 13 20:08:19.169265 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 13 20:08:19.169287 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 20:08:19.169305 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 13 20:08:19.169328 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:08:19.169346 kernel: psci: PSCIv1.0 detected in firmware.
Jan 13 20:08:19.169363 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:08:19.169384 kernel: psci: Trusted OS migration not required
Jan 13 20:08:19.169402 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:08:19.169420 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:08:19.169437 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:08:19.169455 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 20:08:19.169472 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:08:19.169489 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:08:19.169506 kernel: CPU features: detected: Spectre-v2
Jan 13 20:08:19.169523 kernel: CPU features: detected: Spectre-v3a
Jan 13 20:08:19.169540 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:08:19.169557 kernel: CPU features: detected: ARM erratum 1742098
Jan 13 20:08:19.169574 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 13 20:08:19.169595 kernel: alternatives: applying boot alternatives
Jan 13 20:08:19.169617 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 13 20:08:19.169635 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:08:19.169653 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:08:19.169670 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:08:19.169687 kernel: Fallback order for Node 0: 0
Jan 13 20:08:19.169704 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Jan 13 20:08:19.169721 kernel: Policy zone: Normal
Jan 13 20:08:19.169738 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:08:19.169757 kernel: software IO TLB: area num 2.
Jan 13 20:08:19.169779 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 13 20:08:19.169796 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved)
Jan 13 20:08:19.169814 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:08:19.169832 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:08:19.169850 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:08:19.169867 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:08:19.169885 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:08:19.169903 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:08:19.169920 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:08:19.169938 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:08:19.169955 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:08:19.169977 kernel: GICv3: 96 SPIs implemented
Jan 13 20:08:19.169994 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:08:19.170011 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:08:19.170028 kernel: GICv3: GICv3 features: 16 PPIs
Jan 13 20:08:19.170045 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 13 20:08:19.170063 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 13 20:08:19.170080 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:08:19.170097 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:08:19.172173 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 13 20:08:19.172219 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 13 20:08:19.172237 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 13 20:08:19.172256 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:08:19.172284 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 13 20:08:19.172301 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 13 20:08:19.172319 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 13 20:08:19.172336 kernel: Console: colour dummy device 80x25
Jan 13 20:08:19.172354 kernel: printk: console [tty1] enabled
Jan 13 20:08:19.172371 kernel: ACPI: Core revision 20230628
Jan 13 20:08:19.172389 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 13 20:08:19.172407 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:08:19.172424 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:08:19.172442 kernel: landlock: Up and running.
Jan 13 20:08:19.172463 kernel: SELinux:  Initializing.
Jan 13 20:08:19.172481 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:08:19.172498 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:08:19.172516 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:08:19.172533 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:08:19.172551 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:08:19.172570 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 13 20:08:19.172587 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 13 20:08:19.172609 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 13 20:08:19.172627 kernel: Remapping and enabling EFI services.
Jan 13 20:08:19.172645 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:08:19.172662 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:08:19.172682 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 13 20:08:19.172700 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 13 20:08:19.172717 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 13 20:08:19.172735 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:08:19.172752 kernel: SMP: Total of 2 processors activated.
Jan 13 20:08:19.172769 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:08:19.172792 kernel: CPU features: detected: 32-bit EL1 Support
Jan 13 20:08:19.172809 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:08:19.172838 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:08:19.172861 kernel: alternatives: applying system-wide alternatives
Jan 13 20:08:19.172879 kernel: devtmpfs: initialized
Jan 13 20:08:19.172898 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:08:19.172916 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:08:19.172935 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:08:19.172953 kernel: SMBIOS 3.0.0 present.
Jan 13 20:08:19.172976 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 13 20:08:19.172995 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:08:19.173019 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:08:19.173064 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:08:19.175141 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:08:19.175233 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:08:19.175257 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1
Jan 13 20:08:19.175287 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:08:19.175306 kernel: cpuidle: using governor menu
Jan 13 20:08:19.175325 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:08:19.175344 kernel: ASID allocator initialised with 65536 entries
Jan 13 20:08:19.175362 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:08:19.175380 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:08:19.175398 kernel: Modules: 17360 pages in range for non-PLT usage
Jan 13 20:08:19.175417 kernel: Modules: 508880 pages in range for PLT usage
Jan 13 20:08:19.175435 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:08:19.175458 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:08:19.175477 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:08:19.175497 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:08:19.175515 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:08:19.175533 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:08:19.175551 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:08:19.175569 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:08:19.175587 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:08:19.175606 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:08:19.175628 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:08:19.175646 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:08:19.175664 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:08:19.175682 kernel: ACPI: Interpreter enabled
Jan 13 20:08:19.175700 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:08:19.175718 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:08:19.175736 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 13 20:08:19.176043 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:08:19.176367 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:08:19.176576 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:08:19.176778 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 13 20:08:19.176975 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 13 20:08:19.177000 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 13 20:08:19.177019 kernel: acpiphp: Slot [1] registered
Jan 13 20:08:19.177038 kernel: acpiphp: Slot [2] registered
Jan 13 20:08:19.177056 kernel: acpiphp: Slot [3] registered
Jan 13 20:08:19.177081 kernel: acpiphp: Slot [4] registered
Jan 13 20:08:19.177099 kernel: acpiphp: Slot [5] registered
Jan 13 20:08:19.177160 kernel: acpiphp: Slot [6] registered
Jan 13 20:08:19.177183 kernel: acpiphp: Slot [7] registered
Jan 13 20:08:19.177201 kernel: acpiphp: Slot [8] registered
Jan 13 20:08:19.177219 kernel: acpiphp: Slot [9] registered
Jan 13 20:08:19.177237 kernel: acpiphp: Slot [10] registered
Jan 13 20:08:19.177255 kernel: acpiphp: Slot [11] registered
Jan 13 20:08:19.177272 kernel: acpiphp: Slot [12] registered
Jan 13 20:08:19.177290 kernel: acpiphp: Slot [13] registered
Jan 13 20:08:19.177314 kernel: acpiphp: Slot [14] registered
Jan 13 20:08:19.177332 kernel: acpiphp: Slot [15] registered
Jan 13 20:08:19.177350 kernel: acpiphp: Slot [16] registered
Jan 13 20:08:19.177368 kernel: acpiphp: Slot [17] registered
Jan 13 20:08:19.177386 kernel: acpiphp: Slot [18] registered
Jan 13 20:08:19.177404 kernel: acpiphp: Slot [19] registered
Jan 13 20:08:19.177422 kernel: acpiphp: Slot [20] registered
Jan 13 20:08:19.177440 kernel: acpiphp: Slot [21] registered
Jan 13 20:08:19.177458 kernel: acpiphp: Slot [22] registered
Jan 13 20:08:19.177481 kernel: acpiphp: Slot [23] registered
Jan 13 20:08:19.177499 kernel: acpiphp: Slot [24] registered
Jan 13 20:08:19.177517 kernel: acpiphp: Slot [25] registered
Jan 13 20:08:19.177535 kernel: acpiphp: Slot [26] registered
Jan 13 20:08:19.177553 kernel: acpiphp: Slot [27] registered
Jan 13 20:08:19.177571 kernel: acpiphp: Slot [28] registered
Jan 13 20:08:19.177589 kernel: acpiphp: Slot [29] registered
Jan 13 20:08:19.177607 kernel: acpiphp: Slot [30] registered
Jan 13 20:08:19.177624 kernel: acpiphp: Slot [31] registered
Jan 13 20:08:19.177642 kernel: PCI host bridge to bus 0000:00
Jan 13 20:08:19.177857 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 13 20:08:19.178035 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Jan 13 20:08:19.180294 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 13 20:08:19.180488 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 13 20:08:19.180716 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 13 20:08:19.180942 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 13 20:08:19.182210 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 13 20:08:19.182468 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 20:08:19.182673 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 13 20:08:19.182879 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 20:08:19.183101 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 20:08:19.184335 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 13 20:08:19.184538 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 13 20:08:19.184743 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 13 20:08:19.184953 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 20:08:19.185556 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 13 20:08:19.185773 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 13 20:08:19.185982 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 13 20:08:19.189944 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 13 20:08:19.190296 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 13 20:08:19.190525 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 13 20:08:19.190703 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Jan 13 20:08:19.190881 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 13 20:08:19.190907 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:08:19.190926 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:08:19.190945 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:08:19.190963 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:08:19.190982 kernel: iommu: Default domain type: Translated
Jan 13 20:08:19.191007 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:08:19.191025 kernel: efivars: Registered efivars operations
Jan 13 20:08:19.191043 kernel: vgaarb: loaded
Jan 13 20:08:19.191062 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:08:19.191080 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:08:19.191098 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:08:19.191134 kernel: pnp: PnP ACPI init
Jan 13 20:08:19.191342 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 13 20:08:19.191374 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:08:19.191393 kernel: NET: Registered PF_INET protocol family
Jan 13 20:08:19.191412 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:08:19.191430 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:08:19.191449 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:08:19.191467 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:08:19.191486 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:08:19.191504 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:08:19.191523 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:08:19.191546 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:08:19.191565 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:08:19.191583 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:08:19.191600 kernel: kvm [1]: HYP mode not available
Jan 13 20:08:19.191619 kernel: Initialise system trusted keyrings
Jan 13 20:08:19.191638 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:08:19.191656 kernel: Key type asymmetric registered
Jan 13 20:08:19.191674 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:08:19.191692 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:08:19.191714 kernel: io scheduler mq-deadline registered
Jan 13 20:08:19.191733 kernel: io scheduler kyber registered
Jan 13 20:08:19.191751 kernel: io scheduler bfq registered
Jan 13 20:08:19.191968 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 13 20:08:19.191995 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:08:19.192014 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:08:19.192032 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 13 20:08:19.192050 kernel: ACPI: button: Sleep Button [SLPB]
Jan 13 20:08:19.192074 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:08:19.192095 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 20:08:19.192371 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 13 20:08:19.192400 kernel: printk: console [ttyS0] disabled
Jan 13 20:08:19.192419 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 13 20:08:19.192439 kernel: printk: console [ttyS0] enabled
Jan 13 20:08:19.192457 kernel: printk: bootconsole [uart0] disabled
Jan 13 20:08:19.192475 kernel: thunder_xcv, ver 1.0
Jan 13 20:08:19.192493 kernel: thunder_bgx, ver 1.0
Jan 13 20:08:19.192511 kernel: nicpf, ver 1.0
Jan 13 20:08:19.192539 kernel: nicvf, ver 1.0
Jan 13 20:08:19.192744 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:08:19.192934 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:08:18 UTC (1736798898)
Jan 13 20:08:19.192961 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:08:19.192980 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 13 20:08:19.192998 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:08:19.193016 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:08:19.193041 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:08:19.193059 kernel: Segment Routing with IPv6
Jan 13 20:08:19.193078 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:08:19.193096 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:08:19.193179 kernel: Key type dns_resolver registered
Jan 13 20:08:19.193203 kernel: registered taskstats version 1
Jan 13 20:08:19.193222 kernel: Loading compiled-in X.509 certificates
Jan 13 20:08:19.193240 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 46cb4d1b22f3a5974766fe7d7b651e2f296d4fe0'
Jan 13 20:08:19.193259 kernel: Key type .fscrypt registered
Jan 13 20:08:19.193277 kernel: Key type fscrypt-provisioning registered
Jan 13 20:08:19.193302 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:08:19.193320 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:08:19.193339 kernel: ima: No architecture policies found
Jan 13 20:08:19.193357 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:08:19.193375 kernel: clk: Disabling unused clocks
Jan 13 20:08:19.193393 kernel: Freeing unused kernel memory: 39936K
Jan 13 20:08:19.193411 kernel: Run /init as init process
Jan 13 20:08:19.193429 kernel:   with arguments:
Jan 13 20:08:19.193447 kernel:     /init
Jan 13 20:08:19.193469 kernel:   with environment:
Jan 13 20:08:19.193487 kernel:     HOME=/
Jan 13 20:08:19.193505 kernel:     TERM=linux
Jan 13 20:08:19.193523 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:08:19.193545 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:08:19.193568 systemd[1]: Detected virtualization amazon.
Jan 13 20:08:19.193589 systemd[1]: Detected architecture arm64.
Jan 13 20:08:19.193612 systemd[1]: Running in initrd.
Jan 13 20:08:19.193632 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:08:19.193651 systemd[1]: Hostname set to .
Jan 13 20:08:19.193672 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:08:19.193692 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:08:19.193711 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:08:19.193731 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:08:19.193752 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:08:19.193777 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:08:19.193798 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:08:19.193818 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:08:19.193841 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:08:19.193861 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:08:19.193881 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:08:19.193901 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:08:19.193925 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:08:19.193945 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:08:19.193965 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:08:19.193985 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:08:19.194005 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:08:19.194024 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:08:19.194044 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:08:19.194064 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:08:19.194084 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:08:19.194108 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:08:19.194168 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:08:19.194192 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:08:19.194213 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:08:19.194234 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:08:19.194254 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:08:19.194274 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:08:19.196394 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:08:19.196431 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:08:19.196452 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:19.196473 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:08:19.196493 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:08:19.196557 systemd-journald[252]: Collecting audit messages is disabled.
Jan 13 20:08:19.196606 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:08:19.196628 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:08:19.196648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:19.196668 systemd-journald[252]: Journal started
Jan 13 20:08:19.196718 systemd-journald[252]: Runtime Journal (/run/log/journal/ec27b5c62085956fae9deba28178338d) is 8.0M, max 75.3M, 67.3M free.
Jan 13 20:08:19.173589 systemd-modules-load[253]: Inserted module 'overlay'
Jan 13 20:08:19.210260 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:08:19.210350 kernel: Bridge firewalling registered
Jan 13 20:08:19.210094 systemd-modules-load[253]: Inserted module 'br_netfilter'
Jan 13 20:08:19.218185 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:08:19.218250 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:08:19.222433 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:08:19.232505 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:08:19.239385 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:08:19.244530 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:08:19.253421 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:08:19.286531 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:08:19.295401 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:08:19.316181 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:08:19.326491 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:08:19.330636 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:19.344309 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:08:19.373102 dracut-cmdline[289]: dracut-dracut-053
Jan 13 20:08:19.383008 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 13 20:08:19.421855 systemd-resolved[288]: Positive Trust Anchors:
Jan 13 20:08:19.421891 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:08:19.421953 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:08:19.518139 kernel: SCSI subsystem initialized
Jan 13 20:08:19.524146 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:08:19.537157 kernel: iscsi: registered transport (tcp)
Jan 13 20:08:19.559158 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:08:19.559242 kernel: QLogic iSCSI HBA Driver
Jan 13 20:08:19.659145 kernel: random: crng init done
Jan 13 20:08:19.659358 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jan 13 20:08:19.662788 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:08:19.665435 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:08:19.689351 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:08:19.698438 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:08:19.740155 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:08:19.740229 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:08:19.740256 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:08:19.805162 kernel: raid6: neonx8 gen() 6520 MB/s
Jan 13 20:08:19.822147 kernel: raid6: neonx4 gen() 6493 MB/s
Jan 13 20:08:19.839148 kernel: raid6: neonx2 gen() 5430 MB/s
Jan 13 20:08:19.856148 kernel: raid6: neonx1 gen() 3931 MB/s
Jan 13 20:08:19.873148 kernel: raid6: int64x8 gen() 3572 MB/s
Jan 13 20:08:19.890148 kernel: raid6: int64x4 gen() 3700 MB/s
Jan 13 20:08:19.907147 kernel: raid6: int64x2 gen() 3594 MB/s
Jan 13 20:08:19.925025 kernel: raid6: int64x1 gen() 2762 MB/s
Jan 13 20:08:19.925063 kernel: raid6: using algorithm neonx8 gen() 6520 MB/s
Jan 13 20:08:19.942880 kernel: raid6: .... xor() 4816 MB/s, rmw enabled
Jan 13 20:08:19.942925 kernel: raid6: using neon recovery algorithm
Jan 13 20:08:19.950975 kernel: xor: measuring software checksum speed
Jan 13 20:08:19.951032 kernel: 8regs : 12922 MB/sec
Jan 13 20:08:19.952150 kernel: 32regs : 12069 MB/sec
Jan 13 20:08:19.954161 kernel: arm64_neon : 8931 MB/sec
Jan 13 20:08:19.954195 kernel: xor: using function: 8regs (12922 MB/sec)
Jan 13 20:08:20.037167 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:08:20.055597 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:08:20.066437 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:08:20.103799 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jan 13 20:08:20.112281 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:08:20.129377 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:08:20.175008 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Jan 13 20:08:20.232162 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:08:20.241401 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:08:20.368643 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:08:20.380557 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:08:20.420033 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:08:20.425711 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:08:20.432987 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:08:20.437505 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:08:20.449301 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:08:20.486698 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:08:20.555294 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:08:20.555368 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 13 20:08:20.576160 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 20:08:20.576419 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 20:08:20.576644 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:33:08:9e:45:2d
Jan 13 20:08:20.582472 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:08:20.582995 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:20.589660 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:08:20.591811 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:08:20.592057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:20.594287 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:20.617044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:20.623281 (udev-worker)[547]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:08:20.645151 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 13 20:08:20.645219 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 20:08:20.658148 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 20:08:20.662667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:20.669017 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:08:20.669052 kernel: GPT:9289727 != 16777215
Jan 13 20:08:20.669077 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:08:20.669101 kernel: GPT:9289727 != 16777215
Jan 13 20:08:20.669156 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:08:20.669183 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:08:20.680413 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:08:20.714724 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:20.946503 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 20:08:20.967170 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (533)
Jan 13 20:08:21.021659 kernel: BTRFS: device fsid 2be7cc1c-29d4-4496-b29b-8561323213d2 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (523)
Jan 13 20:08:21.044306 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:08:21.065552 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 20:08:21.104381 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 20:08:21.104944 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 20:08:21.127466 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:08:21.141659 disk-uuid[665]: Primary Header is updated.
Jan 13 20:08:21.141659 disk-uuid[665]: Secondary Entries is updated.
Jan 13 20:08:21.141659 disk-uuid[665]: Secondary Header is updated.
Jan 13 20:08:21.153156 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:08:22.166195 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:08:22.168719 disk-uuid[666]: The operation has completed successfully.
Jan 13 20:08:22.347208 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:08:22.349188 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:08:22.393441 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:08:22.409782 sh[926]: Success
Jan 13 20:08:22.458168 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:08:22.625513 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:08:22.641370 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:08:22.647179 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:08:22.685307 kernel: BTRFS info (device dm-0): first mount of filesystem 2be7cc1c-29d4-4496-b29b-8561323213d2
Jan 13 20:08:22.685370 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:22.685396 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:08:22.686980 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:08:22.688249 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:08:22.881154 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:08:22.956534 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:08:22.958878 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:08:22.976466 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:08:22.984425 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:08:23.007026 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:08:23.007096 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:23.008900 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:08:23.015161 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:08:23.030602 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:08:23.034369 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:08:23.074799 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:08:23.087487 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:08:23.154665 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:08:23.164436 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:08:23.221449 systemd-networkd[1118]: lo: Link UP
Jan 13 20:08:23.221472 systemd-networkd[1118]: lo: Gained carrier
Jan 13 20:08:23.225501 systemd-networkd[1118]: Enumeration completed
Jan 13 20:08:23.226318 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:08:23.226325 systemd-networkd[1118]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:08:23.227824 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:08:23.233903 systemd-networkd[1118]: eth0: Link UP
Jan 13 20:08:23.233912 systemd-networkd[1118]: eth0: Gained carrier
Jan 13 20:08:23.233929 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:08:23.237870 systemd[1]: Reached target network.target - Network.
Jan 13 20:08:23.256222 systemd-networkd[1118]: eth0: DHCPv4 address 172.31.31.26/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:08:23.906730 ignition[1061]: Ignition 2.20.0
Jan 13 20:08:23.906751 ignition[1061]: Stage: fetch-offline
Jan 13 20:08:23.907692 ignition[1061]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:23.907718 ignition[1061]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:23.908179 ignition[1061]: Ignition finished successfully
Jan 13 20:08:23.917299 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:08:23.937571 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:08:23.961949 ignition[1128]: Ignition 2.20.0
Jan 13 20:08:23.961980 ignition[1128]: Stage: fetch
Jan 13 20:08:23.962948 ignition[1128]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:23.962982 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:23.963280 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:23.993873 ignition[1128]: PUT result: OK
Jan 13 20:08:23.996541 ignition[1128]: parsed url from cmdline: ""
Jan 13 20:08:23.996563 ignition[1128]: no config URL provided
Jan 13 20:08:23.996581 ignition[1128]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:08:23.996633 ignition[1128]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:08:23.996665 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:24.000247 ignition[1128]: PUT result: OK
Jan 13 20:08:24.000319 ignition[1128]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 20:08:24.004464 ignition[1128]: GET result: OK
Jan 13 20:08:24.005587 ignition[1128]: parsing config with SHA512: cf18bcd1b266ed74d25d9b9407298f139e14e239adf93e18a3a4f3b8a718a043b03206c95ccb9e8dacac698f44b729286d88349f358537759b3871d26cdbeaaf
Jan 13 20:08:24.014859 unknown[1128]: fetched base config from "system"
Jan 13 20:08:24.015653 unknown[1128]: fetched base config from "system"
Jan 13 20:08:24.016346 ignition[1128]: fetch: fetch complete
Jan 13 20:08:24.015682 unknown[1128]: fetched user config from "aws"
Jan 13 20:08:24.016357 ignition[1128]: fetch: fetch passed
Jan 13 20:08:24.016443 ignition[1128]: Ignition finished successfully
Jan 13 20:08:24.026435 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:08:24.034439 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:08:24.072075 ignition[1135]: Ignition 2.20.0
Jan 13 20:08:24.072591 ignition[1135]: Stage: kargs
Jan 13 20:08:24.073211 ignition[1135]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:24.073235 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:24.073445 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:24.078028 ignition[1135]: PUT result: OK
Jan 13 20:08:24.086196 ignition[1135]: kargs: kargs passed
Jan 13 20:08:24.086538 ignition[1135]: Ignition finished successfully
Jan 13 20:08:24.091753 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:08:24.106501 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:08:24.129107 ignition[1141]: Ignition 2.20.0
Jan 13 20:08:24.129177 ignition[1141]: Stage: disks
Jan 13 20:08:24.129987 ignition[1141]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:24.130012 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:24.130555 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:24.138183 ignition[1141]: PUT result: OK
Jan 13 20:08:24.142543 ignition[1141]: disks: disks passed
Jan 13 20:08:24.142700 ignition[1141]: Ignition finished successfully
Jan 13 20:08:24.146887 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:08:24.148076 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:08:24.148834 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:08:24.149147 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:08:24.149721 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:08:24.150026 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:08:24.172559 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:08:24.232745 systemd-fsck[1149]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:08:24.243714 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:08:24.260415 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:08:24.334620 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f9a95e53-2d63-4443-b523-cb2108fb48f6 r/w with ordered data mode. Quota mode: none.
Jan 13 20:08:24.335614 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:08:24.339255 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:08:24.362267 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:08:24.368299 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:08:24.373614 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:08:24.373732 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:08:24.373785 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:08:24.397152 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1168)
Jan 13 20:08:24.400520 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:08:24.404524 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:08:24.404560 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:24.404586 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:08:24.412179 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:08:24.414526 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:08:24.419432 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:08:25.025852 initrd-setup-root[1192]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:08:25.047305 systemd-networkd[1118]: eth0: Gained IPv6LL
Jan 13 20:08:25.056956 initrd-setup-root[1199]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:08:25.065491 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:08:25.073924 initrd-setup-root[1213]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:08:25.573206 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:08:25.582353 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:08:25.589337 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:08:25.605670 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:08:25.609846 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:08:25.647949 ignition[1281]: INFO : Ignition 2.20.0
Jan 13 20:08:25.651257 ignition[1281]: INFO : Stage: mount
Jan 13 20:08:25.651257 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:25.651257 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:25.651257 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:25.663026 ignition[1281]: INFO : PUT result: OK
Jan 13 20:08:25.657674 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:08:25.668851 ignition[1281]: INFO : mount: mount passed
Jan 13 20:08:25.668851 ignition[1281]: INFO : Ignition finished successfully
Jan 13 20:08:25.673309 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:08:25.681327 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:08:25.714498 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:08:25.739158 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1292)
Jan 13 20:08:25.743214 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:08:25.743255 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:25.743293 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:08:25.749154 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:08:25.752404 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:08:25.785017 ignition[1309]: INFO : Ignition 2.20.0
Jan 13 20:08:25.785017 ignition[1309]: INFO : Stage: files
Jan 13 20:08:25.788684 ignition[1309]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:25.788684 ignition[1309]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:25.788684 ignition[1309]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:25.806721 ignition[1309]: INFO : PUT result: OK
Jan 13 20:08:25.810775 ignition[1309]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:08:25.835341 ignition[1309]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:08:25.840411 ignition[1309]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:08:25.905192 ignition[1309]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:08:25.907805 ignition[1309]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:08:25.910302 ignition[1309]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:08:25.908592 unknown[1309]: wrote ssh authorized keys file for user: core
Jan 13 20:08:25.941652 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:08:25.945450 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:08:26.042845 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:08:26.219011 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:08:26.219011 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:08:26.225938 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 13 20:08:26.538476 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:08:26.654575 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:08:26.658672 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:08:26.658672 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:08:26.658672 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:08:26.658672 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:08:26.658672 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:08:26.658672 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:08:26.677723 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:08:26.677723 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:08:26.677723 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:08:26.677723 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:08:26.677723 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:08:26.677723 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:08:26.677723 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:08:26.677723 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 13 20:08:26.931641 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 20:08:27.251948 ignition[1309]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:08:27.251948 ignition[1309]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 20:08:27.259539 ignition[1309]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:08:27.259539 ignition[1309]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:08:27.259539 ignition[1309]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 20:08:27.259539 ignition[1309]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:08:27.259539 ignition[1309]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:08:27.259539 ignition[1309]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:08:27.259539 ignition[1309]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:08:27.259539 ignition[1309]: INFO : files: files passed
Jan 13 20:08:27.259539 ignition[1309]: INFO : Ignition finished successfully
Jan 13 20:08:27.273149 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:08:27.303494 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:08:27.308807 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:08:27.316079 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:08:27.316324 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:08:27.344319 initrd-setup-root-after-ignition[1338]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:08:27.344319 initrd-setup-root-after-ignition[1338]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:08:27.353040 initrd-setup-root-after-ignition[1342]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:08:27.354991 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:08:27.359616 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:08:27.373472 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:08:27.419272 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:08:27.419657 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:08:27.427632 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:08:27.431245 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:08:27.446623 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:08:27.455425 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:08:27.493203 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:08:27.504475 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:08:27.534554 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:08:27.535070 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:08:27.539714 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:08:27.543181 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:08:27.545494 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:08:27.549044 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:08:27.549173 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:08:27.551575 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:08:27.553485 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:08:27.555181 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:08:27.557185 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:08:27.559394 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:08:27.563368 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:08:27.565248 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:08:27.569015 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:08:27.570932 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:08:27.572825 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:08:27.574385 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:08:27.574476 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:08:27.577009 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:08:27.585449 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:08:27.587702 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:08:27.589315 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:08:27.591973 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:08:27.592063 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:08:27.594379 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:08:27.594465 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:08:27.627809 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:08:27.627915 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:08:27.646325 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:08:27.652397 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:08:27.654343 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:08:27.654456 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:08:27.657348 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:08:27.657458 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:08:27.689155 ignition[1363]: INFO : Ignition 2.20.0
Jan 13 20:08:27.689155 ignition[1363]: INFO : Stage: umount
Jan 13 20:08:27.689155 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:27.689155 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:27.689155 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:27.702847 ignition[1363]: INFO : PUT result: OK
Jan 13 20:08:27.702847 ignition[1363]: INFO : umount: umount passed
Jan 13 20:08:27.702847 ignition[1363]: INFO : Ignition finished successfully
Jan 13 20:08:27.703149 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:08:27.705309 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:08:27.713883 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:08:27.713989 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:08:27.716830 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:08:27.716915 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:08:27.723487 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:08:27.723591 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:08:27.727037 systemd[1]: Stopped target network.target - Network. Jan 13 20:08:27.728272 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:08:27.728376 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:08:27.748081 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:08:27.750527 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:08:27.762430 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:08:27.765626 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:08:27.771877 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:08:27.773766 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:08:27.773848 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:08:27.778165 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:08:27.778249 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:08:27.787688 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:08:27.787782 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:08:27.790005 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:08:27.790082 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:08:27.800428 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:08:27.802559 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:08:27.807989 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 13 20:08:27.808979 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:08:27.809195 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:08:27.812985 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:08:27.813533 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:08:27.819220 systemd-networkd[1118]: eth0: DHCPv6 lease lost Jan 13 20:08:27.830152 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:08:27.830681 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:08:27.838641 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:08:27.838902 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:08:27.843048 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:08:27.843820 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:08:27.863420 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:08:27.867650 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:08:27.867767 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:08:27.871234 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:08:27.871336 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:08:27.873593 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:08:27.873672 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:08:27.876514 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:08:27.876589 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:08:27.879027 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 13 20:08:27.916038 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:08:27.917868 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:08:27.926242 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:08:27.926690 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:08:27.934551 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:08:27.934656 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:08:27.938499 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:08:27.939214 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:08:27.942327 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:08:27.942413 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:08:27.944555 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:08:27.944633 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:08:27.958006 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:08:27.958095 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:08:27.976044 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:08:27.981374 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:08:27.981497 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:08:27.983911 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:08:27.983998 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:08:27.986463 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 13 20:08:27.986576 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:08:27.991482 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:08:27.991587 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:08:28.000403 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:08:28.002589 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:08:28.006070 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:08:28.027508 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:08:28.048280 systemd[1]: Switching root. Jan 13 20:08:28.111721 systemd-journald[252]: Journal stopped Jan 13 20:08:31.780856 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Jan 13 20:08:31.780985 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:08:31.781033 kernel: SELinux: policy capability open_perms=1 Jan 13 20:08:31.781065 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:08:31.781094 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:08:31.784183 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:08:31.784249 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:08:31.784282 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:08:31.784311 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:08:31.784339 kernel: audit: type=1403 audit(1736798910.045:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:08:31.784380 systemd[1]: Successfully loaded SELinux policy in 72.889ms. Jan 13 20:08:31.784432 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.682ms. 
Jan 13 20:08:31.784466 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:08:31.784499 systemd[1]: Detected virtualization amazon. Jan 13 20:08:31.784537 systemd[1]: Detected architecture arm64. Jan 13 20:08:31.784569 systemd[1]: Detected first boot. Jan 13 20:08:31.784601 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:08:31.784633 zram_generator::config[1405]: No configuration found. Jan 13 20:08:31.784667 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:08:31.784699 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:08:31.784730 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:08:31.784761 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:08:31.784796 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:08:31.784829 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:08:31.784858 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:08:31.784890 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:08:31.784922 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:08:31.784958 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:08:31.784991 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:08:31.785022 systemd[1]: Created slice user.slice - User and Session Slice. 
Jan 13 20:08:31.785054 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:08:31.785088 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:08:31.788179 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:08:31.788241 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:08:31.788274 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:08:31.788306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:08:31.788336 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:08:31.788369 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:08:31.788397 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:08:31.788429 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:08:31.788467 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:08:31.788499 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:08:31.788528 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:08:31.788557 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:08:31.788587 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:08:31.788619 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:08:31.788648 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:08:31.788690 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:08:31.788723 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 13 20:08:31.788759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:08:31.788790 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:08:31.788821 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:08:31.788849 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:08:31.788878 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:08:31.788906 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:08:31.788937 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:08:31.788971 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:08:31.789008 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:08:31.789041 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:08:31.789073 systemd[1]: Reached target machines.target - Containers. Jan 13 20:08:31.789102 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:08:31.793214 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:08:31.793267 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:08:31.793298 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:08:31.793328 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:08:31.793366 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:08:31.793398 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 13 20:08:31.793429 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:08:31.793458 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:08:31.793491 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:08:31.793524 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:08:31.793554 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:08:31.793583 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:08:31.793622 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:08:31.793652 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:08:31.793681 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:08:31.793711 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:08:31.793742 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:08:31.793770 kernel: fuse: init (API version 7.39) Jan 13 20:08:31.793798 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:08:31.793828 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:08:31.793856 systemd[1]: Stopped verity-setup.service. Jan 13 20:08:31.793884 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:08:31.793918 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:08:31.793948 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:08:31.793975 kernel: loop: module loaded Jan 13 20:08:31.794005 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:08:31.794034 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 13 20:08:31.794067 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:08:31.794096 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:08:31.805678 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:08:31.805740 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:08:31.805776 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:08:31.805817 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:08:31.805847 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:08:31.805876 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:08:31.805913 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:08:31.805943 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:08:31.805973 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:08:31.806002 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:08:31.806044 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:08:31.806076 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:08:31.806110 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:08:31.806165 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:08:31.806245 systemd-journald[1487]: Collecting audit messages is disabled. Jan 13 20:08:31.806317 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:08:31.806355 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:08:31.806385 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jan 13 20:08:31.806416 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:08:31.806446 systemd-journald[1487]: Journal started Jan 13 20:08:31.806504 systemd-journald[1487]: Runtime Journal (/run/log/journal/ec27b5c62085956fae9deba28178338d) is 8.0M, max 75.3M, 67.3M free. Jan 13 20:08:31.147510 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:08:31.813798 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:08:31.221387 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 20:08:31.222198 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:08:31.823581 kernel: ACPI: bus type drm_connector registered Jan 13 20:08:31.828445 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:08:31.838182 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:08:31.843158 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:08:31.859709 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:08:31.859786 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:08:31.886457 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:08:31.886548 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:08:31.913236 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:08:31.927859 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 13 20:08:31.950158 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:08:31.950244 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:08:31.952773 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:08:31.956965 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:08:31.958042 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:08:31.961245 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:08:31.964893 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:08:31.968586 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:08:31.973383 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:08:31.977882 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:08:32.021252 kernel: loop0: detected capacity change from 0 to 116784 Jan 13 20:08:32.030813 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:08:32.044464 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:08:32.055521 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:08:32.062482 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:08:32.065432 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:08:32.077300 systemd-tmpfiles[1517]: ACLs are not supported, ignoring. Jan 13 20:08:32.077334 systemd-tmpfiles[1517]: ACLs are not supported, ignoring. Jan 13 20:08:32.104341 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 13 20:08:32.126215 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:08:32.126741 systemd-journald[1487]: Time spent on flushing to /var/log/journal/ec27b5c62085956fae9deba28178338d is 63.644ms for 922 entries. Jan 13 20:08:32.126741 systemd-journald[1487]: System Journal (/var/log/journal/ec27b5c62085956fae9deba28178338d) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:08:32.206249 systemd-journald[1487]: Received client request to flush runtime journal. Jan 13 20:08:32.206338 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:08:32.206401 kernel: loop1: detected capacity change from 0 to 113552 Jan 13 20:08:32.132861 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:08:32.134408 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:08:32.150518 udevadm[1546]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:08:32.211409 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:08:32.237202 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:08:32.247448 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:08:32.314239 systemd-tmpfiles[1557]: ACLs are not supported, ignoring. Jan 13 20:08:32.314757 systemd-tmpfiles[1557]: ACLs are not supported, ignoring. Jan 13 20:08:32.326969 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 13 20:08:32.352156 kernel: loop2: detected capacity change from 0 to 53784 Jan 13 20:08:32.444165 kernel: loop3: detected capacity change from 0 to 194096 Jan 13 20:08:32.501178 kernel: loop4: detected capacity change from 0 to 116784 Jan 13 20:08:32.524251 kernel: loop5: detected capacity change from 0 to 113552 Jan 13 20:08:32.542298 kernel: loop6: detected capacity change from 0 to 53784 Jan 13 20:08:32.563347 kernel: loop7: detected capacity change from 0 to 194096 Jan 13 20:08:32.581901 (sd-merge)[1564]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 20:08:32.583635 (sd-merge)[1564]: Merged extensions into '/usr'. Jan 13 20:08:32.595664 systemd[1]: Reloading requested from client PID 1516 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:08:32.595691 systemd[1]: Reloading... Jan 13 20:08:32.745969 zram_generator::config[1588]: No configuration found. Jan 13 20:08:33.076794 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:08:33.186446 systemd[1]: Reloading finished in 589 ms. Jan 13 20:08:33.228175 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:08:33.243449 systemd[1]: Starting ensure-sysext.service... Jan 13 20:08:33.253804 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:08:33.285933 systemd[1]: Reloading requested from client PID 1641 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:08:33.285969 systemd[1]: Reloading... Jan 13 20:08:33.332345 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:08:33.332886 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jan 13 20:08:33.337391 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:08:33.337940 systemd-tmpfiles[1642]: ACLs are not supported, ignoring. Jan 13 20:08:33.338098 systemd-tmpfiles[1642]: ACLs are not supported, ignoring. Jan 13 20:08:33.345177 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:08:33.345203 systemd-tmpfiles[1642]: Skipping /boot Jan 13 20:08:33.370925 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:08:33.370958 systemd-tmpfiles[1642]: Skipping /boot Jan 13 20:08:33.451308 zram_generator::config[1670]: No configuration found. Jan 13 20:08:33.667905 ldconfig[1512]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:08:33.692270 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:08:33.801922 systemd[1]: Reloading finished in 515 ms. Jan 13 20:08:33.833318 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:08:33.837457 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:08:33.849997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:08:33.872632 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:08:33.883282 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:08:33.889632 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:08:33.896791 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 13 20:08:33.903899 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:08:33.917705 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:08:33.926704 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:08:33.933477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:08:33.941421 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:08:33.950650 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:08:33.953659 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:08:33.959343 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:08:33.968676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:08:33.969652 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:08:33.980654 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:08:34.000558 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:08:34.027706 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:08:34.028092 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:08:34.028157 systemd-udevd[1729]: Using default interface naming scheme 'v255'. Jan 13 20:08:34.032315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:08:34.036256 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 13 20:08:34.055405 systemd[1]: Finished ensure-sysext.service. Jan 13 20:08:34.069649 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:08:34.104866 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:08:34.105201 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:08:34.107717 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:08:34.110939 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:08:34.111259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:08:34.152756 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:08:34.155393 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:08:34.158223 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:08:34.162560 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:08:34.165614 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:08:34.165891 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:08:34.183454 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:08:34.193776 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:08:34.198238 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:08:34.223312 augenrules[1776]: No rules Jan 13 20:08:34.224816 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 13 20:08:34.225279 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:08:34.242198 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:08:34.245855 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:08:34.417102 systemd-networkd[1759]: lo: Link UP Jan 13 20:08:34.419176 systemd-networkd[1759]: lo: Gained carrier Jan 13 20:08:34.420586 systemd-networkd[1759]: Enumeration completed Jan 13 20:08:34.420877 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:08:34.443095 systemd-resolved[1728]: Positive Trust Anchors: Jan 13 20:08:34.443542 systemd-resolved[1728]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:08:34.443623 systemd-resolved[1728]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:08:34.454226 systemd-resolved[1728]: Defaulting to hostname 'linux'. Jan 13 20:08:34.457473 (udev-worker)[1757]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:08:34.464819 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:08:34.467455 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:08:34.469905 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:08:34.470030 systemd[1]: Reached target network.target - Network. 
Jan 13 20:08:34.471935 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:08:34.549905 systemd-networkd[1759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:08:34.549932 systemd-networkd[1759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:08:34.555191 systemd-networkd[1759]: eth0: Link UP Jan 13 20:08:34.555564 systemd-networkd[1759]: eth0: Gained carrier Jan 13 20:08:34.555611 systemd-networkd[1759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:08:34.566230 systemd-networkd[1759]: eth0: DHCPv4 address 172.31.31.26/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 20:08:34.671183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:08:34.672345 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1781) Jan 13 20:08:34.865730 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 20:08:34.871096 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:08:34.874043 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:08:34.888419 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:08:34.892434 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:08:34.936180 lvm[1901]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:08:34.967464 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 13 20:08:34.974760 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:08:34.977646 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:08:34.979752 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:08:34.981916 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:08:34.984309 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:08:34.986899 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:08:34.989141 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:08:34.991478 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:08:34.993790 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:08:34.993841 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:08:34.995582 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:08:34.998513 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:08:35.003208 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:08:35.012203 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:08:35.016610 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:08:35.019798 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:08:35.022400 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:08:35.024369 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:08:35.026229 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jan 13 20:08:35.026301 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:08:35.033429 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:08:35.041635 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:08:35.048522 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:08:35.055352 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:08:35.060786 lvm[1908]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:08:35.061451 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:08:35.063505 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:08:35.073498 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:08:35.082489 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 20:08:35.092701 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:08:35.098371 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 20:08:35.104517 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:08:35.111610 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:08:35.145878 jq[1912]: false Jan 13 20:08:35.125430 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:08:35.128215 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:08:35.129016 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 13 20:08:35.133560 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:08:35.141531 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:08:35.147710 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:08:35.150244 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:08:35.213645 (ntainerd)[1931]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:08:35.220375 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:08:35.220798 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:08:35.269213 jq[1923]: true Jan 13 20:08:35.274641 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:08:35.281945 update_engine[1922]: I20250113 20:08:35.273983 1922 main.cc:92] Flatcar Update Engine starting Jan 13 20:08:35.275016 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 13 20:08:35.295342 tar[1937]: linux-arm64/helm Jan 13 20:08:35.305199 extend-filesystems[1913]: Found loop4 Jan 13 20:08:35.305199 extend-filesystems[1913]: Found loop5 Jan 13 20:08:35.305199 extend-filesystems[1913]: Found loop6 Jan 13 20:08:35.305199 extend-filesystems[1913]: Found loop7 Jan 13 20:08:35.305199 extend-filesystems[1913]: Found nvme0n1 Jan 13 20:08:35.305199 extend-filesystems[1913]: Found nvme0n1p2 Jan 13 20:08:35.305199 extend-filesystems[1913]: Found nvme0n1p3 Jan 13 20:08:35.305199 extend-filesystems[1913]: Found usr Jan 13 20:08:35.305199 extend-filesystems[1913]: Found nvme0n1p4 Jan 13 20:08:35.347370 extend-filesystems[1913]: Found nvme0n1p6 Jan 13 20:08:35.347370 extend-filesystems[1913]: Found nvme0n1p7 Jan 13 20:08:35.347370 extend-filesystems[1913]: Found nvme0n1p9 Jan 13 20:08:35.347370 extend-filesystems[1913]: Checking size of /dev/nvme0n1p9 Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:48 UTC 2025 (1): Starting Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: ---------------------------------------------------- Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: corporation. 
Support and training for ntp-4 are Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: available at https://www.nwtime.org/support Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: ---------------------------------------------------- Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: proto: precision = 0.096 usec (-23) Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: basedate set to 2025-01-01 Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: gps base set to 2025-01-05 (week 2348) Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: Listen normally on 3 eth0 172.31.31.26:123 Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: Listen normally on 4 lo [::1]:123 Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: bind(21) AF_INET6 fe80::433:8ff:fe9e:452d%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: unable to create socket on eth0 (5) for fe80::433:8ff:fe9e:452d%2#123 Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: failed to init interface for address fe80::433:8ff:fe9e:452d%2 Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: Listening on routing socket on fd #21 for interface updates Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:08:35.354677 ntpd[1915]: 13 Jan 20:08:35 ntpd[1915]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:08:35.367708 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jan 13 20:08:35.404221 extend-filesystems[1913]: Resized partition /dev/nvme0n1p9 Jan 13 20:08:35.419993 extend-filesystems[1962]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:08:35.422874 dbus-daemon[1911]: [system] SELinux support is enabled Jan 13 20:08:35.423388 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:08:35.437895 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:08:35.455558 jq[1948]: true Jan 13 20:08:35.458524 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 20:08:35.437960 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:08:35.440417 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:08:35.440453 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:08:35.459374 dbus-daemon[1911]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1759 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:08:35.471477 update_engine[1922]: I20250113 20:08:35.471397 1922 update_check_scheduler.cc:74] Next update check in 7m5s Jan 13 20:08:35.471847 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:08:35.489420 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 20:08:35.497460 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:08:35.500225 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jan 13 20:08:35.569157 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.582 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.583 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.583 INFO Fetch successful Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.585 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.585 INFO Fetch successful Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.585 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.588 INFO Fetch successful Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.588 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.590 INFO Fetch successful Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.590 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.593 INFO Fetch failed with 404: resource not found Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.593 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.595 INFO Fetch successful Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.595 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.596 INFO Fetch successful Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.596 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.597 INFO Fetch successful Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.597 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.598 INFO Fetch successful Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.598 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 20:08:35.618003 coreos-metadata[1910]: Jan 13 20:08:35.599 INFO Fetch successful Jan 13 20:08:35.624654 extend-filesystems[1962]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 20:08:35.624654 extend-filesystems[1962]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:08:35.624654 extend-filesystems[1962]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 20:08:35.637315 extend-filesystems[1913]: Resized filesystem in /dev/nvme0n1p9 Jan 13 20:08:35.637315 extend-filesystems[1913]: Found nvme0n1p1 Jan 13 20:08:35.631779 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:08:35.632150 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:08:35.648094 systemd-logind[1920]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:08:35.659799 systemd-logind[1920]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 20:08:35.660214 systemd-logind[1920]: New seat seat0. Jan 13 20:08:35.662762 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:08:35.718514 bash[1992]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:08:35.763197 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:08:35.774039 systemd[1]: Starting sshkeys.service... 
Jan 13 20:08:35.787394 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:08:35.791096 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:08:35.836206 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:08:35.844427 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:08:35.853678 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1755) Jan 13 20:08:35.874999 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:08:35.877267 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:08:35.881619 dbus-daemon[1911]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1965 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:08:35.913851 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 20:08:35.964208 polkitd[2015]: Started polkitd version 121 Jan 13 20:08:35.993934 polkitd[2015]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:08:35.994061 polkitd[2015]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:08:35.999448 polkitd[2015]: Finished loading, compiling and executing 2 rules Jan 13 20:08:36.002821 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:08:36.003101 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 20:08:36.008327 polkitd[2015]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:08:36.078145 systemd-hostnamed[1965]: Hostname set to (transient) Jan 13 20:08:36.084202 systemd-resolved[1728]: System hostname changed to 'ip-172-31-31-26'. 
Jan 13 20:08:36.152440 locksmithd[1966]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:08:36.204234 containerd[1931]: time="2025-01-13T20:08:36.199700687Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:08:36.218653 coreos-metadata[2009]: Jan 13 20:08:36.218 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:08:36.219802 coreos-metadata[2009]: Jan 13 20:08:36.219 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 20:08:36.228157 coreos-metadata[2009]: Jan 13 20:08:36.225 INFO Fetch successful Jan 13 20:08:36.228157 coreos-metadata[2009]: Jan 13 20:08:36.225 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:08:36.228157 coreos-metadata[2009]: Jan 13 20:08:36.227 INFO Fetch successful Jan 13 20:08:36.230310 unknown[2009]: wrote ssh authorized keys file for user: core Jan 13 20:08:36.329932 ntpd[1915]: bind(24) AF_INET6 fe80::433:8ff:fe9e:452d%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:08:36.330000 ntpd[1915]: unable to create socket on eth0 (6) for fe80::433:8ff:fe9e:452d%2#123 Jan 13 20:08:36.330028 ntpd[1915]: failed to init interface for address fe80::433:8ff:fe9e:452d%2 Jan 13 20:08:36.336154 containerd[1931]: time="2025-01-13T20:08:36.334294740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:08:36.339237 containerd[1931]: time="2025-01-13T20:08:36.339168624Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:36.339386 containerd[1931]: time="2025-01-13T20:08:36.339355584Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:08:36.339495 containerd[1931]: time="2025-01-13T20:08:36.339468780Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:08:36.340070 containerd[1931]: time="2025-01-13T20:08:36.340032732Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:08:36.342234 containerd[1931]: time="2025-01-13T20:08:36.341185668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:36.342234 containerd[1931]: time="2025-01-13T20:08:36.341356248Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:36.342234 containerd[1931]: time="2025-01-13T20:08:36.341385168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:36.342234 containerd[1931]: time="2025-01-13T20:08:36.341668020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:36.342234 containerd[1931]: time="2025-01-13T20:08:36.341698200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:36.342234 containerd[1931]: time="2025-01-13T20:08:36.341730852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:36.342234 containerd[1931]: time="2025-01-13T20:08:36.341754684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:36.342234 containerd[1931]: time="2025-01-13T20:08:36.341910732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:36.345023 containerd[1931]: time="2025-01-13T20:08:36.344430108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:36.345023 containerd[1931]: time="2025-01-13T20:08:36.344659824Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:36.345023 containerd[1931]: time="2025-01-13T20:08:36.344688936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:08:36.345023 containerd[1931]: time="2025-01-13T20:08:36.344859672Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 13 20:08:36.345023 containerd[1931]: time="2025-01-13T20:08:36.344955840Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:08:36.370127 update-ssh-keys[2095]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:08:36.372269 containerd[1931]: time="2025-01-13T20:08:36.371179260Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:08:36.372269 containerd[1931]: time="2025-01-13T20:08:36.371288304Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:08:36.372269 containerd[1931]: time="2025-01-13T20:08:36.371393892Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:08:36.372269 containerd[1931]: time="2025-01-13T20:08:36.371472108Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:08:36.372269 containerd[1931]: time="2025-01-13T20:08:36.371508120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:08:36.372269 containerd[1931]: time="2025-01-13T20:08:36.371813832Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:08:36.376086 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.378740856Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.379005684Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.379041480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.379074816Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.379108956Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.383069028Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.383109516Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.383178576Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.383213376Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.383246172Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.383278068Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.383306580Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.383347836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jan 13 20:08:36.387494 containerd[1931]: time="2025-01-13T20:08:36.383381760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.380252 systemd-networkd[1759]: eth0: Gained IPv6LL Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.383410956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.383441796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.383485416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.383520720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.383548872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.383631492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.383668488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.383703156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.383732028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.383760480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.383789640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.383847240Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.386750616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.386814552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.389599 containerd[1931]: time="2025-01-13T20:08:36.386843772Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:08:36.388176 systemd[1]: Finished sshkeys.service. Jan 13 20:08:36.395239 containerd[1931]: time="2025-01-13T20:08:36.390741540Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:08:36.395239 containerd[1931]: time="2025-01-13T20:08:36.391213632Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:08:36.395239 containerd[1931]: time="2025-01-13T20:08:36.391254408Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:08:36.395239 containerd[1931]: time="2025-01-13T20:08:36.391284780Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:08:36.395239 containerd[1931]: time="2025-01-13T20:08:36.391308624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.395239 containerd[1931]: time="2025-01-13T20:08:36.391343328Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:08:36.395239 containerd[1931]: time="2025-01-13T20:08:36.391368216Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:08:36.395239 containerd[1931]: time="2025-01-13T20:08:36.391394172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:08:36.395687 containerd[1931]: time="2025-01-13T20:08:36.391920516Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} 
CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:08:36.395687 containerd[1931]: time="2025-01-13T20:08:36.392012760Z" level=info msg="Connect containerd service" Jan 13 20:08:36.395687 containerd[1931]: time="2025-01-13T20:08:36.392083656Z" level=info msg="using legacy CRI server" Jan 13 20:08:36.397749 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 13 20:08:36.409655 containerd[1931]: time="2025-01-13T20:08:36.392102184Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:08:36.409655 containerd[1931]: time="2025-01-13T20:08:36.406464840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:08:36.409655 containerd[1931]: time="2025-01-13T20:08:36.407470944Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:08:36.401029 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:08:36.414412 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 20:08:36.418416 containerd[1931]: time="2025-01-13T20:08:36.416505096Z" level=info msg="Start subscribing containerd event" Jan 13 20:08:36.418416 containerd[1931]: time="2025-01-13T20:08:36.416589876Z" level=info msg="Start recovering state" Jan 13 20:08:36.418416 containerd[1931]: time="2025-01-13T20:08:36.416712444Z" level=info msg="Start event monitor" Jan 13 20:08:36.418416 containerd[1931]: time="2025-01-13T20:08:36.416737680Z" level=info msg="Start snapshots syncer" Jan 13 20:08:36.418416 containerd[1931]: time="2025-01-13T20:08:36.416759580Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:08:36.418416 containerd[1931]: time="2025-01-13T20:08:36.416777112Z" level=info msg="Start streaming server" Jan 13 20:08:36.418416 containerd[1931]: time="2025-01-13T20:08:36.417449892Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:08:36.418416 containerd[1931]: time="2025-01-13T20:08:36.417613152Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:08:36.425707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 20:08:36.434786 containerd[1931]: time="2025-01-13T20:08:36.430296973Z" level=info msg="containerd successfully booted in 0.234168s" Jan 13 20:08:36.439615 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:08:36.442989 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:08:36.573621 amazon-ssm-agent[2105]: Initializing new seelog logger Jan 13 20:08:36.573621 amazon-ssm-agent[2105]: New Seelog Logger Creation Complete Jan 13 20:08:36.574184 amazon-ssm-agent[2105]: 2025/01/13 20:08:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:36.574184 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:36.576526 amazon-ssm-agent[2105]: 2025/01/13 20:08:36 processing appconfig overrides Jan 13 20:08:36.576526 amazon-ssm-agent[2105]: 2025/01/13 20:08:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:36.576526 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:36.576526 amazon-ssm-agent[2105]: 2025/01/13 20:08:36 processing appconfig overrides Jan 13 20:08:36.576526 amazon-ssm-agent[2105]: 2025/01/13 20:08:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:36.576526 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:36.576526 amazon-ssm-agent[2105]: 2025/01/13 20:08:36 processing appconfig overrides Jan 13 20:08:36.576526 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO Proxy environment variables: Jan 13 20:08:36.584441 amazon-ssm-agent[2105]: 2025/01/13 20:08:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:36.584441 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 13 20:08:36.584441 amazon-ssm-agent[2105]: 2025/01/13 20:08:36 processing appconfig overrides Jan 13 20:08:36.622822 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:08:36.677216 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO https_proxy: Jan 13 20:08:36.779476 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO http_proxy: Jan 13 20:08:36.882473 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO no_proxy: Jan 13 20:08:36.981363 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO Checking if agent identity type OnPrem can be assumed Jan 13 20:08:37.079572 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO Checking if agent identity type EC2 can be assumed Jan 13 20:08:37.178853 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO Agent will take identity from EC2 Jan 13 20:08:37.278330 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:08:37.378799 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:08:37.478155 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:08:37.526182 tar[1937]: linux-arm64/LICENSE Jan 13 20:08:37.527593 tar[1937]: linux-arm64/README.md Jan 13 20:08:37.569209 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:08:37.579412 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 20:08:37.679988 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 20:08:37.754364 sshd_keygen[1949]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:08:37.779258 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 20:08:37.832400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:37.844226 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jan 13 20:08:37.855610 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:08:37.859508 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:08:37.881438 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 20:08:37.897833 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:08:37.900143 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:08:37.913731 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:08:37.963394 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:08:37.981704 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:08:37.985221 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO [Registrar] Starting registrar module Jan 13 20:08:37.992757 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:08:37.996787 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:08:37.999355 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:08:38.001611 systemd[1]: Startup finished in 1.076s (kernel) + 11.241s (initrd) + 8.027s (userspace) = 20.345s. Jan 13 20:08:38.041935 agetty[2158]: failed to open credentials directory Jan 13 20:08:38.043944 agetty[2159]: failed to open credentials directory Jan 13 20:08:38.074344 amazon-ssm-agent[2105]: 2025-01-13 20:08:36 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 20:08:38.074344 amazon-ssm-agent[2105]: 2025-01-13 20:08:38 INFO [EC2Identity] EC2 registration was successful. 
Jan 13 20:08:38.074344 amazon-ssm-agent[2105]: 2025-01-13 20:08:38 INFO [CredentialRefresher] credentialRefresher has started Jan 13 20:08:38.075046 amazon-ssm-agent[2105]: 2025-01-13 20:08:38 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 20:08:38.075046 amazon-ssm-agent[2105]: 2025-01-13 20:08:38 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 20:08:38.085731 amazon-ssm-agent[2105]: 2025-01-13 20:08:38 INFO [CredentialRefresher] Next credential rotation will be in 31.98332417663333 minutes Jan 13 20:08:38.623309 kubelet[2150]: E0113 20:08:38.623252 2150 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:08:38.627923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:08:38.628301 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:08:38.628764 systemd[1]: kubelet.service: Consumed 1.282s CPU time. 
Jan 13 20:08:39.099984 amazon-ssm-agent[2105]: 2025-01-13 20:08:39 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 20:08:39.201279 amazon-ssm-agent[2105]: 2025-01-13 20:08:39 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2173) started Jan 13 20:08:39.301715 amazon-ssm-agent[2105]: 2025-01-13 20:08:39 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 20:08:39.330010 ntpd[1915]: Listen normally on 7 eth0 [fe80::433:8ff:fe9e:452d%2]:123 Jan 13 20:08:39.330989 ntpd[1915]: 13 Jan 20:08:39 ntpd[1915]: Listen normally on 7 eth0 [fe80::433:8ff:fe9e:452d%2]:123 Jan 13 20:08:41.968852 systemd-resolved[1728]: Clock change detected. Flushing caches. Jan 13 20:08:44.612878 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:08:44.623387 systemd[1]: Started sshd@0-172.31.31.26:22-147.75.109.163:49698.service - OpenSSH per-connection server daemon (147.75.109.163:49698). Jan 13 20:08:44.822446 sshd[2184]: Accepted publickey for core from 147.75.109.163 port 49698 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:44.826013 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:44.844768 systemd-logind[1920]: New session 1 of user core. Jan 13 20:08:44.846579 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:08:44.852483 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:08:44.887614 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:08:44.896592 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 13 20:08:44.909343 (systemd)[2188]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:08:45.126433 systemd[2188]: Queued start job for default target default.target. Jan 13 20:08:45.136899 systemd[2188]: Created slice app.slice - User Application Slice. Jan 13 20:08:45.136992 systemd[2188]: Reached target paths.target - Paths. Jan 13 20:08:45.137028 systemd[2188]: Reached target timers.target - Timers. Jan 13 20:08:45.139519 systemd[2188]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:08:45.165106 systemd[2188]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:08:45.165344 systemd[2188]: Reached target sockets.target - Sockets. Jan 13 20:08:45.165377 systemd[2188]: Reached target basic.target - Basic System. Jan 13 20:08:45.165460 systemd[2188]: Reached target default.target - Main User Target. Jan 13 20:08:45.165522 systemd[2188]: Startup finished in 244ms. Jan 13 20:08:45.165667 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:08:45.179200 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:08:45.332476 systemd[1]: Started sshd@1-172.31.31.26:22-147.75.109.163:49702.service - OpenSSH per-connection server daemon (147.75.109.163:49702). Jan 13 20:08:45.520510 sshd[2199]: Accepted publickey for core from 147.75.109.163 port 49702 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:45.522922 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:45.530289 systemd-logind[1920]: New session 2 of user core. Jan 13 20:08:45.540188 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 13 20:08:45.666959 sshd[2201]: Connection closed by 147.75.109.163 port 49702 Jan 13 20:08:45.667842 sshd-session[2199]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:45.673801 systemd[1]: sshd@1-172.31.31.26:22-147.75.109.163:49702.service: Deactivated successfully. Jan 13 20:08:45.677653 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:08:45.679203 systemd-logind[1920]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:08:45.681158 systemd-logind[1920]: Removed session 2. Jan 13 20:08:45.709480 systemd[1]: Started sshd@2-172.31.31.26:22-147.75.109.163:49714.service - OpenSSH per-connection server daemon (147.75.109.163:49714). Jan 13 20:08:45.892088 sshd[2206]: Accepted publickey for core from 147.75.109.163 port 49714 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:45.894476 sshd-session[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:45.903049 systemd-logind[1920]: New session 3 of user core. Jan 13 20:08:45.910217 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:08:46.030991 sshd[2208]: Connection closed by 147.75.109.163 port 49714 Jan 13 20:08:46.031936 sshd-session[2206]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:46.038548 systemd[1]: sshd@2-172.31.31.26:22-147.75.109.163:49714.service: Deactivated successfully. Jan 13 20:08:46.042661 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:08:46.044247 systemd-logind[1920]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:08:46.045782 systemd-logind[1920]: Removed session 3. Jan 13 20:08:46.064198 systemd[1]: Started sshd@3-172.31.31.26:22-147.75.109.163:49720.service - OpenSSH per-connection server daemon (147.75.109.163:49720). 
Jan 13 20:08:46.260506 sshd[2213]: Accepted publickey for core from 147.75.109.163 port 49720 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:46.262995 sshd-session[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:46.270289 systemd-logind[1920]: New session 4 of user core. Jan 13 20:08:46.279204 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:08:46.409257 sshd[2215]: Connection closed by 147.75.109.163 port 49720 Jan 13 20:08:46.409138 sshd-session[2213]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:46.413711 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:08:46.415917 systemd[1]: sshd@3-172.31.31.26:22-147.75.109.163:49720.service: Deactivated successfully. Jan 13 20:08:46.420909 systemd-logind[1920]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:08:46.423033 systemd-logind[1920]: Removed session 4. Jan 13 20:08:46.452440 systemd[1]: Started sshd@4-172.31.31.26:22-147.75.109.163:49722.service - OpenSSH per-connection server daemon (147.75.109.163:49722). Jan 13 20:08:46.636447 sshd[2220]: Accepted publickey for core from 147.75.109.163 port 49722 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:46.639165 sshd-session[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:46.647320 systemd-logind[1920]: New session 5 of user core. Jan 13 20:08:46.658188 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 13 20:08:46.802192 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:08:46.802793 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:08:46.819088 sudo[2223]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:46.843095 sshd[2222]: Connection closed by 147.75.109.163 port 49722 Jan 13 20:08:46.844137 sshd-session[2220]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:46.850866 systemd[1]: sshd@4-172.31.31.26:22-147.75.109.163:49722.service: Deactivated successfully. Jan 13 20:08:46.854683 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:08:46.855933 systemd-logind[1920]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:08:46.857663 systemd-logind[1920]: Removed session 5. Jan 13 20:08:46.874185 systemd[1]: Started sshd@5-172.31.31.26:22-147.75.109.163:49734.service - OpenSSH per-connection server daemon (147.75.109.163:49734). Jan 13 20:08:47.065910 sshd[2228]: Accepted publickey for core from 147.75.109.163 port 49734 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:47.068405 sshd-session[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:47.076842 systemd-logind[1920]: New session 6 of user core. Jan 13 20:08:47.085222 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 13 20:08:47.204871 sudo[2232]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:08:47.205535 sudo[2232]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:08:47.211249 sudo[2232]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:47.221315 sudo[2231]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:08:47.221915 sudo[2231]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:08:47.245996 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:08:47.298060 augenrules[2254]: No rules Jan 13 20:08:47.300134 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:08:47.301066 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:08:47.302704 sudo[2231]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:47.326159 sshd[2230]: Connection closed by 147.75.109.163 port 49734 Jan 13 20:08:47.327720 sshd-session[2228]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:47.332432 systemd[1]: sshd@5-172.31.31.26:22-147.75.109.163:49734.service: Deactivated successfully. Jan 13 20:08:47.335639 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:08:47.338500 systemd-logind[1920]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:08:47.340076 systemd-logind[1920]: Removed session 6. Jan 13 20:08:47.359933 systemd[1]: Started sshd@6-172.31.31.26:22-147.75.109.163:60870.service - OpenSSH per-connection server daemon (147.75.109.163:60870). 
Jan 13 20:08:47.548783 sshd[2262]: Accepted publickey for core from 147.75.109.163 port 60870 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:47.551218 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:47.560294 systemd-logind[1920]: New session 7 of user core. Jan 13 20:08:47.567477 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:08:47.670830 sudo[2265]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:08:47.671472 sudo[2265]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:08:48.227440 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:08:48.230055 (dockerd)[2283]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:08:48.506018 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:08:48.514451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:48.600493 dockerd[2283]: time="2025-01-13T20:08:48.600393128Z" level=info msg="Starting up" Jan 13 20:08:48.882135 dockerd[2283]: time="2025-01-13T20:08:48.881791474Z" level=info msg="Loading containers: start." Jan 13 20:08:49.054409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:08:49.060273 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:08:49.160083 kubelet[2356]: E0113 20:08:49.159509 2356 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:08:49.167815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:08:49.168257 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:08:49.201011 kernel: Initializing XFRM netlink socket Jan 13 20:08:49.231356 (udev-worker)[2309]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:08:49.319816 systemd-networkd[1759]: docker0: Link UP Jan 13 20:08:49.357152 dockerd[2283]: time="2025-01-13T20:08:49.357103316Z" level=info msg="Loading containers: done." Jan 13 20:08:49.382706 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3865769423-merged.mount: Deactivated successfully. 
Jan 13 20:08:49.394002 dockerd[2283]: time="2025-01-13T20:08:49.393683708Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:08:49.394002 dockerd[2283]: time="2025-01-13T20:08:49.393841292Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 13 20:08:49.394271 dockerd[2283]: time="2025-01-13T20:08:49.394085336Z" level=info msg="Daemon has completed initialization" Jan 13 20:08:49.454632 dockerd[2283]: time="2025-01-13T20:08:49.454455993Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:08:49.455480 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:08:50.590677 containerd[1931]: time="2025-01-13T20:08:50.590604706Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 20:08:51.253660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3933567719.mount: Deactivated successfully. 
Jan 13 20:08:52.763929 containerd[1931]: time="2025-01-13T20:08:52.763872697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:52.766651 containerd[1931]: time="2025-01-13T20:08:52.766589173Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864010" Jan 13 20:08:52.768144 containerd[1931]: time="2025-01-13T20:08:52.768077077Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:52.773784 containerd[1931]: time="2025-01-13T20:08:52.773704633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:52.777082 containerd[1931]: time="2025-01-13T20:08:52.776073001Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 2.185400219s" Jan 13 20:08:52.777082 containerd[1931]: time="2025-01-13T20:08:52.776135113Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Jan 13 20:08:52.816028 containerd[1931]: time="2025-01-13T20:08:52.815971645Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 20:08:54.461359 containerd[1931]: time="2025-01-13T20:08:54.461281838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:54.463458 containerd[1931]: time="2025-01-13T20:08:54.463385618Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900694" Jan 13 20:08:54.465533 containerd[1931]: time="2025-01-13T20:08:54.465448826Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:54.471507 containerd[1931]: time="2025-01-13T20:08:54.471456938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:54.474491 containerd[1931]: time="2025-01-13T20:08:54.474171074Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 1.657959861s" Jan 13 20:08:54.474491 containerd[1931]: time="2025-01-13T20:08:54.474220130Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Jan 13 20:08:54.514562 containerd[1931]: time="2025-01-13T20:08:54.514492154Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 20:08:55.784101 containerd[1931]: time="2025-01-13T20:08:55.784029556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:55.786935 containerd[1931]: time="2025-01-13T20:08:55.786858124Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164332" Jan 13 20:08:55.788984 containerd[1931]: time="2025-01-13T20:08:55.788901868Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:55.794557 containerd[1931]: time="2025-01-13T20:08:55.794460400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:55.798116 containerd[1931]: time="2025-01-13T20:08:55.797407732Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.282855026s" Jan 13 20:08:55.798116 containerd[1931]: time="2025-01-13T20:08:55.797465656Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Jan 13 20:08:55.836319 containerd[1931]: time="2025-01-13T20:08:55.836262544Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 20:08:57.077808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2480959934.mount: Deactivated successfully. 
Jan 13 20:08:57.610741 containerd[1931]: time="2025-01-13T20:08:57.609931709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:57.612439 containerd[1931]: time="2025-01-13T20:08:57.612367121Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662011" Jan 13 20:08:57.614618 containerd[1931]: time="2025-01-13T20:08:57.614560553Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:57.618970 containerd[1931]: time="2025-01-13T20:08:57.618871445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:57.620455 containerd[1931]: time="2025-01-13T20:08:57.620270357Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.783947165s" Jan 13 20:08:57.620455 containerd[1931]: time="2025-01-13T20:08:57.620321201Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Jan 13 20:08:57.660697 containerd[1931]: time="2025-01-13T20:08:57.660648965Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:08:58.218908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1686439007.mount: Deactivated successfully. 
Jan 13 20:08:59.256431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 20:08:59.266999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:08:59.523075 containerd[1931]: time="2025-01-13T20:08:59.521311675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:08:59.530203 containerd[1931]: time="2025-01-13T20:08:59.530101819Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Jan 13 20:08:59.535646 containerd[1931]: time="2025-01-13T20:08:59.535560091Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:08:59.544616 containerd[1931]: time="2025-01-13T20:08:59.544465171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:08:59.547552 containerd[1931]: time="2025-01-13T20:08:59.547495183Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.886555938s"
Jan 13 20:08:59.547754 containerd[1931]: time="2025-01-13T20:08:59.547723471Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 13 20:08:59.595862 containerd[1931]: time="2025-01-13T20:08:59.595806451Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 13 20:08:59.618263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:08:59.621397 (kubelet)[2637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:08:59.700367 kubelet[2637]: E0113 20:08:59.700276 2637 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:08:59.704990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:08:59.705366 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:09:00.090795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380116749.mount: Deactivated successfully.
Jan 13 20:09:00.100222 containerd[1931]: time="2025-01-13T20:09:00.100144674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:09:00.101790 containerd[1931]: time="2025-01-13T20:09:00.101726394Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Jan 13 20:09:00.103144 containerd[1931]: time="2025-01-13T20:09:00.103057734Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:09:00.107968 containerd[1931]: time="2025-01-13T20:09:00.107867706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:09:00.111480 containerd[1931]: time="2025-01-13T20:09:00.111414618Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 515.550099ms"
Jan 13 20:09:00.111480 containerd[1931]: time="2025-01-13T20:09:00.111473982Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jan 13 20:09:00.150190 containerd[1931]: time="2025-01-13T20:09:00.150134118Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 13 20:09:00.896484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422761922.mount: Deactivated successfully.
Jan 13 20:09:03.446970 containerd[1931]: time="2025-01-13T20:09:03.444934174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:09:03.460641 containerd[1931]: time="2025-01-13T20:09:03.460556938Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
Jan 13 20:09:03.479661 containerd[1931]: time="2025-01-13T20:09:03.479560846Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:09:03.507273 containerd[1931]: time="2025-01-13T20:09:03.507173758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:09:03.509936 containerd[1931]: time="2025-01-13T20:09:03.509265022Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.35907658s"
Jan 13 20:09:03.509936 containerd[1931]: time="2025-01-13T20:09:03.509322262Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jan 13 20:09:05.753140 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 13 20:09:09.755770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 13 20:09:09.765445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:09:10.208312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:09:10.218447 (kubelet)[2766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:09:10.305966 kubelet[2766]: E0113 20:09:10.304527 2766 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:09:10.308544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:09:10.308850 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:09:11.685442 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:09:11.704466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:09:11.728705 systemd[1]: Reloading requested from client PID 2780 ('systemctl') (unit session-7.scope)...
Jan 13 20:09:11.728736 systemd[1]: Reloading...
Jan 13 20:09:11.905040 zram_generator::config[2820]: No configuration found.
Jan 13 20:09:12.154590 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:09:12.324581 systemd[1]: Reloading finished in 595 ms.
Jan 13 20:09:12.413130 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 20:09:12.413318 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 20:09:12.414048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:09:12.420613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:09:12.796549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:09:12.812505 (kubelet)[2883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:09:12.893507 kubelet[2883]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:09:12.893507 kubelet[2883]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:09:12.893507 kubelet[2883]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:09:12.895161 kubelet[2883]: I0113 20:09:12.895080 2883 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:09:14.043400 kubelet[2883]: I0113 20:09:14.043332 2883 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 20:09:14.043400 kubelet[2883]: I0113 20:09:14.043386 2883 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:09:14.044062 kubelet[2883]: I0113 20:09:14.043787 2883 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 20:09:14.073415 kubelet[2883]: E0113 20:09:14.073358 2883 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.31.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.31.26:6443: connect: connection refused
Jan 13 20:09:14.073979 kubelet[2883]: I0113 20:09:14.073777 2883 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:09:14.090393 kubelet[2883]: I0113 20:09:14.090346 2883 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:09:14.093032 kubelet[2883]: I0113 20:09:14.092922 2883 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:09:14.094042 kubelet[2883]: I0113 20:09:14.093148 2883 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:09:14.094042 kubelet[2883]: I0113 20:09:14.093468 2883 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:09:14.094042 kubelet[2883]: I0113 20:09:14.093491 2883 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:09:14.094042 kubelet[2883]: I0113 20:09:14.093730 2883 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:09:14.097352 kubelet[2883]: I0113 20:09:14.096837 2883 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 20:09:14.097352 kubelet[2883]: I0113 20:09:14.096886 2883 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:09:14.097352 kubelet[2883]: I0113 20:09:14.097043 2883 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:09:14.097352 kubelet[2883]: I0113 20:09:14.097119 2883 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:09:14.099180 kubelet[2883]: W0113 20:09:14.099112 2883 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused
Jan 13 20:09:14.099403 kubelet[2883]: E0113 20:09:14.099382 2883 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused
Jan 13 20:09:14.099670 kubelet[2883]: I0113 20:09:14.099642 2883 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:09:14.100135 kubelet[2883]: I0113 20:09:14.100111 2883 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:09:14.100697 kubelet[2883]: W0113 20:09:14.100285 2883 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:09:14.102579 kubelet[2883]: I0113 20:09:14.101799 2883 server.go:1264] "Started kubelet"
Jan 13 20:09:14.103041 kubelet[2883]: W0113 20:09:14.102972 2883 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-26&limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused
Jan 13 20:09:14.104412 kubelet[2883]: E0113 20:09:14.103174 2883 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-26&limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused
Jan 13 20:09:14.119982 kubelet[2883]: I0113 20:09:14.118294 2883 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:09:14.132007 kubelet[2883]: E0113 20:09:14.131767 2883 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.26:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-26.181a5972618177cb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-26,UID:ip-172-31-31-26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-26,},FirstTimestamp:2025-01-13 20:09:14.101766091 +0000 UTC m=+1.282792387,LastTimestamp:2025-01-13 20:09:14.101766091 +0000 UTC m=+1.282792387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-26,}"
Jan 13 20:09:14.133580 kubelet[2883]: I0113 20:09:14.133517 2883 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:09:14.134672 kubelet[2883]: I0113 20:09:14.134640 2883 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:09:14.141917 kubelet[2883]: I0113 20:09:14.135469 2883 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 20:09:14.142234 kubelet[2883]: I0113 20:09:14.142212 2883 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:09:14.142332 kubelet[2883]: I0113 20:09:14.136519 2883 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:09:14.142747 kubelet[2883]: I0113 20:09:14.142722 2883 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:09:14.145129 kubelet[2883]: W0113 20:09:14.145024 2883 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused
Jan 13 20:09:14.145129 kubelet[2883]: E0113 20:09:14.145119 2883 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused
Jan 13 20:09:14.145370 kubelet[2883]: I0113 20:09:14.145316 2883 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:09:14.145756 kubelet[2883]: I0113 20:09:14.145452 2883 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:09:14.146996 kubelet[2883]: E0113 20:09:14.145971 2883 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:09:14.147473 kubelet[2883]: E0113 20:09:14.147408 2883 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-26?timeout=10s\": dial tcp 172.31.31.26:6443: connect: connection refused" interval="200ms"
Jan 13 20:09:14.148108 kubelet[2883]: I0113 20:09:14.136477 2883 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 20:09:14.150239 kubelet[2883]: I0113 20:09:14.150187 2883 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:09:14.164168 kubelet[2883]: I0113 20:09:14.164104 2883 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:09:14.166255 kubelet[2883]: I0113 20:09:14.166203 2883 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:09:14.166370 kubelet[2883]: I0113 20:09:14.166306 2883 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:09:14.166370 kubelet[2883]: I0113 20:09:14.166345 2883 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 20:09:14.166508 kubelet[2883]: E0113 20:09:14.166414 2883 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:09:14.181567 kubelet[2883]: W0113 20:09:14.181478 2883 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused
Jan 13 20:09:14.181567 kubelet[2883]: E0113 20:09:14.181573 2883 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused
Jan 13 20:09:14.199362 kubelet[2883]: I0113 20:09:14.199321 2883 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:09:14.199362 kubelet[2883]: I0113 20:09:14.199353 2883 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:09:14.199652 kubelet[2883]: I0113 20:09:14.199386 2883 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:09:14.214076 kubelet[2883]: I0113 20:09:14.214032 2883 policy_none.go:49] "None policy: Start"
Jan 13 20:09:14.215212 kubelet[2883]: I0113 20:09:14.215155 2883 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:09:14.215212 kubelet[2883]: I0113 20:09:14.215204 2883 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:09:14.230991 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:09:14.237971 kubelet[2883]: I0113 20:09:14.237905 2883 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-26"
Jan 13 20:09:14.238815 kubelet[2883]: E0113 20:09:14.238754 2883 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.26:6443/api/v1/nodes\": dial tcp 172.31.31.26:6443: connect: connection refused" node="ip-172-31-31-26"
Jan 13 20:09:14.246027 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 20:09:14.252530 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 20:09:14.265295 kubelet[2883]: I0113 20:09:14.264575 2883 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:09:14.265295 kubelet[2883]: I0113 20:09:14.264863 2883 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 20:09:14.265295 kubelet[2883]: I0113 20:09:14.265073 2883 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:09:14.267997 kubelet[2883]: I0113 20:09:14.267282 2883 topology_manager.go:215] "Topology Admit Handler" podUID="edc7fcbdbc95db7d3426b2fd29419cb5" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-31-26"
Jan 13 20:09:14.269530 kubelet[2883]: E0113 20:09:14.269471 2883 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-26\" not found"
Jan 13 20:09:14.271927 kubelet[2883]: I0113 20:09:14.271861 2883 topology_manager.go:215] "Topology Admit Handler" podUID="e5c7db1c9dba02495840b4594a7d5af2" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-31-26"
Jan 13 20:09:14.275911 kubelet[2883]: I0113 20:09:14.275851 2883 topology_manager.go:215] "Topology Admit Handler" podUID="888b31470859a9d43b47aa7051293362" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-31-26"
Jan 13 20:09:14.288168 systemd[1]: Created slice kubepods-burstable-podedc7fcbdbc95db7d3426b2fd29419cb5.slice - libcontainer container kubepods-burstable-podedc7fcbdbc95db7d3426b2fd29419cb5.slice.
Jan 13 20:09:14.315496 systemd[1]: Created slice kubepods-burstable-pode5c7db1c9dba02495840b4594a7d5af2.slice - libcontainer container kubepods-burstable-pode5c7db1c9dba02495840b4594a7d5af2.slice.
Jan 13 20:09:14.331158 systemd[1]: Created slice kubepods-burstable-pod888b31470859a9d43b47aa7051293362.slice - libcontainer container kubepods-burstable-pod888b31470859a9d43b47aa7051293362.slice.
Jan 13 20:09:14.342713 kubelet[2883]: I0113 20:09:14.342668 2883 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edc7fcbdbc95db7d3426b2fd29419cb5-ca-certs\") pod \"kube-apiserver-ip-172-31-31-26\" (UID: \"edc7fcbdbc95db7d3426b2fd29419cb5\") " pod="kube-system/kube-apiserver-ip-172-31-31-26"
Jan 13 20:09:14.342713 kubelet[2883]: I0113 20:09:14.342727 2883 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edc7fcbdbc95db7d3426b2fd29419cb5-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-26\" (UID: \"edc7fcbdbc95db7d3426b2fd29419cb5\") " pod="kube-system/kube-apiserver-ip-172-31-31-26"
Jan 13 20:09:14.342713 kubelet[2883]: I0113 20:09:14.342770 2883 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c7db1c9dba02495840b4594a7d5af2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"e5c7db1c9dba02495840b4594a7d5af2\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26"
Jan 13 20:09:14.343196 kubelet[2883]: I0113 20:09:14.342807 2883 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c7db1c9dba02495840b4594a7d5af2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"e5c7db1c9dba02495840b4594a7d5af2\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26"
Jan 13 20:09:14.343196 kubelet[2883]: I0113 20:09:14.342850 2883 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edc7fcbdbc95db7d3426b2fd29419cb5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-26\" (UID: \"edc7fcbdbc95db7d3426b2fd29419cb5\") " pod="kube-system/kube-apiserver-ip-172-31-31-26"
Jan 13 20:09:14.343196 kubelet[2883]: I0113 20:09:14.342886 2883 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c7db1c9dba02495840b4594a7d5af2-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"e5c7db1c9dba02495840b4594a7d5af2\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26"
Jan 13 20:09:14.343196 kubelet[2883]: I0113 20:09:14.342920 2883 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c7db1c9dba02495840b4594a7d5af2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"e5c7db1c9dba02495840b4594a7d5af2\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26"
Jan 13 20:09:14.343196 kubelet[2883]: I0113 20:09:14.342977 2883 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c7db1c9dba02495840b4594a7d5af2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"e5c7db1c9dba02495840b4594a7d5af2\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26"
Jan 13 20:09:14.343604 kubelet[2883]: I0113 20:09:14.343037 2883 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/888b31470859a9d43b47aa7051293362-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-26\" (UID: \"888b31470859a9d43b47aa7051293362\") " pod="kube-system/kube-scheduler-ip-172-31-31-26"
Jan 13 20:09:14.348899 kubelet[2883]: E0113 20:09:14.348836 2883 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-26?timeout=10s\": dial tcp 172.31.31.26:6443: connect: connection refused" interval="400ms"
Jan 13 20:09:14.441409 kubelet[2883]: I0113 20:09:14.441367 2883 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-26"
Jan 13 20:09:14.442057 kubelet[2883]: E0113 20:09:14.442013 2883 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.26:6443/api/v1/nodes\": dial tcp 172.31.31.26:6443: connect: connection refused" node="ip-172-31-31-26"
Jan 13 20:09:14.611352 containerd[1931]: time="2025-01-13T20:09:14.611180206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-26,Uid:edc7fcbdbc95db7d3426b2fd29419cb5,Namespace:kube-system,Attempt:0,}"
Jan 13 20:09:14.626186 containerd[1931]: time="2025-01-13T20:09:14.626116522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-26,Uid:e5c7db1c9dba02495840b4594a7d5af2,Namespace:kube-system,Attempt:0,}"
Jan 13 20:09:14.637805 containerd[1931]: time="2025-01-13T20:09:14.637439170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-26,Uid:888b31470859a9d43b47aa7051293362,Namespace:kube-system,Attempt:0,}"
Jan 13 20:09:14.750166 kubelet[2883]: E0113 20:09:14.750098 2883 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-26?timeout=10s\": dial tcp 172.31.31.26:6443: connect: connection refused" interval="800ms"
Jan 13 20:09:14.844385 kubelet[2883]: I0113 20:09:14.844325 2883 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-26"
Jan 13 20:09:14.844986 kubelet[2883]: E0113 20:09:14.844898 2883 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.26:6443/api/v1/nodes\": dial tcp 172.31.31.26:6443: connect: connection refused" node="ip-172-31-31-26"
Jan 13 20:09:15.141806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3752016535.mount: Deactivated successfully.
Jan 13 20:09:15.152525 containerd[1931]: time="2025-01-13T20:09:15.152445224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:09:15.162351 containerd[1931]: time="2025-01-13T20:09:15.162269876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 13 20:09:15.163932 containerd[1931]: time="2025-01-13T20:09:15.163872488Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:09:15.166401 containerd[1931]: time="2025-01-13T20:09:15.166344944Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:09:15.168647 containerd[1931]: time="2025-01-13T20:09:15.168553244Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:09:15.171707 containerd[1931]: time="2025-01-13T20:09:15.171504116Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:09:15.171707 containerd[1931]: time="2025-01-13T20:09:15.171640760Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:09:15.178096 containerd[1931]: time="2025-01-13T20:09:15.178016936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:09:15.180118 containerd[1931]: time="2025-01-13T20:09:15.179794856Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 553.567406ms"
Jan 13 20:09:15.183302 containerd[1931]: time="2025-01-13T20:09:15.183236912Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 571.928246ms"
Jan 13 20:09:15.193876 containerd[1931]: time="2025-01-13T20:09:15.193809681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.261959ms"
Jan 13 20:09:15.210841 kubelet[2883]: W0113 20:09:15.210683 2883 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused
Jan 13 20:09:15.210841 kubelet[2883]: E0113 20:09:15.210774 2883 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused
Jan 13 20:09:15.401129 containerd[1931]: time="2025-01-13T20:09:15.399938530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:09:15.401129 containerd[1931]: time="2025-01-13T20:09:15.400685938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:09:15.401129 containerd[1931]: time="2025-01-13T20:09:15.400713730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:15.401129 containerd[1931]: time="2025-01-13T20:09:15.400849618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:15.406687 containerd[1931]: time="2025-01-13T20:09:15.404632282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:09:15.406687 containerd[1931]: time="2025-01-13T20:09:15.405525718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:09:15.406687 containerd[1931]: time="2025-01-13T20:09:15.405557758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:15.408054 containerd[1931]: time="2025-01-13T20:09:15.407885998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:15.418711 containerd[1931]: time="2025-01-13T20:09:15.418350106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:15.419062 containerd[1931]: time="2025-01-13T20:09:15.418993210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:15.420388 containerd[1931]: time="2025-01-13T20:09:15.420306910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:15.420895 containerd[1931]: time="2025-01-13T20:09:15.420830842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:15.443321 kubelet[2883]: W0113 20:09:15.443220 2883 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-26&limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused Jan 13 20:09:15.443586 kubelet[2883]: E0113 20:09:15.443331 2883 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-26&limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused Jan 13 20:09:15.452918 kubelet[2883]: W0113 20:09:15.452795 2883 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused Jan 13 20:09:15.452918 kubelet[2883]: E0113 20:09:15.452886 2883 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.26:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused Jan 13 20:09:15.459443 systemd[1]: 
Started cri-containerd-1caf2efc16eabf313fb7d30b8a08437c5b2c651ebf405d1e7a7f9a744963efb2.scope - libcontainer container 1caf2efc16eabf313fb7d30b8a08437c5b2c651ebf405d1e7a7f9a744963efb2. Jan 13 20:09:15.463931 kubelet[2883]: W0113 20:09:15.463616 2883 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused Jan 13 20:09:15.463931 kubelet[2883]: E0113 20:09:15.463708 2883 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.26:6443: connect: connection refused Jan 13 20:09:15.472311 systemd[1]: Started cri-containerd-5d5636e8ac7771bbfd61be600a93bf36e21a062cc1787563f102d2204324322e.scope - libcontainer container 5d5636e8ac7771bbfd61be600a93bf36e21a062cc1787563f102d2204324322e. Jan 13 20:09:15.484844 systemd[1]: Started cri-containerd-e316a0585b8d461ed35dbd7e7b40f89378eddd838160736c72d60f285fee87ba.scope - libcontainer container e316a0585b8d461ed35dbd7e7b40f89378eddd838160736c72d60f285fee87ba. 
Jan 13 20:09:15.551650 kubelet[2883]: E0113 20:09:15.551573 2883 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-26?timeout=10s\": dial tcp 172.31.31.26:6443: connect: connection refused" interval="1.6s" Jan 13 20:09:15.591711 containerd[1931]: time="2025-01-13T20:09:15.591644735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-26,Uid:edc7fcbdbc95db7d3426b2fd29419cb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e316a0585b8d461ed35dbd7e7b40f89378eddd838160736c72d60f285fee87ba\"" Jan 13 20:09:15.604712 containerd[1931]: time="2025-01-13T20:09:15.603753227Z" level=info msg="CreateContainer within sandbox \"e316a0585b8d461ed35dbd7e7b40f89378eddd838160736c72d60f285fee87ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:09:15.606073 containerd[1931]: time="2025-01-13T20:09:15.605930243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-26,Uid:e5c7db1c9dba02495840b4594a7d5af2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d5636e8ac7771bbfd61be600a93bf36e21a062cc1787563f102d2204324322e\"" Jan 13 20:09:15.614265 containerd[1931]: time="2025-01-13T20:09:15.614188835Z" level=info msg="CreateContainer within sandbox \"5d5636e8ac7771bbfd61be600a93bf36e21a062cc1787563f102d2204324322e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:09:15.616347 containerd[1931]: time="2025-01-13T20:09:15.616178087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-26,Uid:888b31470859a9d43b47aa7051293362,Namespace:kube-system,Attempt:0,} returns sandbox id \"1caf2efc16eabf313fb7d30b8a08437c5b2c651ebf405d1e7a7f9a744963efb2\"" Jan 13 20:09:15.622749 containerd[1931]: time="2025-01-13T20:09:15.622673555Z" level=info msg="CreateContainer within sandbox 
\"1caf2efc16eabf313fb7d30b8a08437c5b2c651ebf405d1e7a7f9a744963efb2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:09:15.643290 containerd[1931]: time="2025-01-13T20:09:15.643213247Z" level=info msg="CreateContainer within sandbox \"e316a0585b8d461ed35dbd7e7b40f89378eddd838160736c72d60f285fee87ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"82fb4ba9ae1586c222acc20c61546607f2d7b8cde0d39b8cb50246dd7eb841ad\"" Jan 13 20:09:15.645526 containerd[1931]: time="2025-01-13T20:09:15.644492183Z" level=info msg="StartContainer for \"82fb4ba9ae1586c222acc20c61546607f2d7b8cde0d39b8cb50246dd7eb841ad\"" Jan 13 20:09:15.650596 kubelet[2883]: I0113 20:09:15.650549 2883 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-26" Jan 13 20:09:15.652380 kubelet[2883]: E0113 20:09:15.652132 2883 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.26:6443/api/v1/nodes\": dial tcp 172.31.31.26:6443: connect: connection refused" node="ip-172-31-31-26" Jan 13 20:09:15.672925 containerd[1931]: time="2025-01-13T20:09:15.672483287Z" level=info msg="CreateContainer within sandbox \"1caf2efc16eabf313fb7d30b8a08437c5b2c651ebf405d1e7a7f9a744963efb2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d433c3c27db7fcca9eed8b1132a2db5820f7178b09bb8a224887324c127fb68e\"" Jan 13 20:09:15.673616 containerd[1931]: time="2025-01-13T20:09:15.673499747Z" level=info msg="StartContainer for \"d433c3c27db7fcca9eed8b1132a2db5820f7178b09bb8a224887324c127fb68e\"" Jan 13 20:09:15.678003 containerd[1931]: time="2025-01-13T20:09:15.676928291Z" level=info msg="CreateContainer within sandbox \"5d5636e8ac7771bbfd61be600a93bf36e21a062cc1787563f102d2204324322e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9008ee33a6c9a135dc48d26b7eae4b7591efdac731f5ce8cced4106c956bc129\"" Jan 13 20:09:15.679192 containerd[1931]: 
time="2025-01-13T20:09:15.679128191Z" level=info msg="StartContainer for \"9008ee33a6c9a135dc48d26b7eae4b7591efdac731f5ce8cced4106c956bc129\"" Jan 13 20:09:15.700465 systemd[1]: Started cri-containerd-82fb4ba9ae1586c222acc20c61546607f2d7b8cde0d39b8cb50246dd7eb841ad.scope - libcontainer container 82fb4ba9ae1586c222acc20c61546607f2d7b8cde0d39b8cb50246dd7eb841ad. Jan 13 20:09:15.758338 systemd[1]: Started cri-containerd-d433c3c27db7fcca9eed8b1132a2db5820f7178b09bb8a224887324c127fb68e.scope - libcontainer container d433c3c27db7fcca9eed8b1132a2db5820f7178b09bb8a224887324c127fb68e. Jan 13 20:09:15.778315 systemd[1]: Started cri-containerd-9008ee33a6c9a135dc48d26b7eae4b7591efdac731f5ce8cced4106c956bc129.scope - libcontainer container 9008ee33a6c9a135dc48d26b7eae4b7591efdac731f5ce8cced4106c956bc129. Jan 13 20:09:15.822035 containerd[1931]: time="2025-01-13T20:09:15.821807268Z" level=info msg="StartContainer for \"82fb4ba9ae1586c222acc20c61546607f2d7b8cde0d39b8cb50246dd7eb841ad\" returns successfully" Jan 13 20:09:15.878813 containerd[1931]: time="2025-01-13T20:09:15.878745780Z" level=info msg="StartContainer for \"9008ee33a6c9a135dc48d26b7eae4b7591efdac731f5ce8cced4106c956bc129\" returns successfully" Jan 13 20:09:15.928302 containerd[1931]: time="2025-01-13T20:09:15.928134168Z" level=info msg="StartContainer for \"d433c3c27db7fcca9eed8b1132a2db5820f7178b09bb8a224887324c127fb68e\" returns successfully" Jan 13 20:09:17.257193 kubelet[2883]: I0113 20:09:17.256118 2883 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-26" Jan 13 20:09:19.426639 kubelet[2883]: E0113 20:09:19.426571 2883 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-26\" not found" node="ip-172-31-31-26" Jan 13 20:09:19.519587 kubelet[2883]: I0113 20:09:19.519305 2883 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-31-26" Jan 13 20:09:20.102761 kubelet[2883]: I0113 20:09:20.102708 2883 
apiserver.go:52] "Watching apiserver" Jan 13 20:09:20.143325 kubelet[2883]: I0113 20:09:20.143149 2883 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:09:20.301121 update_engine[1922]: I20250113 20:09:20.299897 1922 update_attempter.cc:509] Updating boot flags... Jan 13 20:09:20.439086 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3169) Jan 13 20:09:20.890983 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3173) Jan 13 20:09:21.309006 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3173) Jan 13 20:09:22.271031 systemd[1]: Reloading requested from client PID 3423 ('systemctl') (unit session-7.scope)... Jan 13 20:09:22.271517 systemd[1]: Reloading... Jan 13 20:09:22.488085 zram_generator::config[3463]: No configuration found. Jan 13 20:09:22.761408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:09:22.981914 systemd[1]: Reloading finished in 709 ms. Jan 13 20:09:23.070331 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:23.087632 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:09:23.089220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:23.089304 systemd[1]: kubelet.service: Consumed 2.052s CPU time, 112.1M memory peak, 0B memory swap peak. Jan 13 20:09:23.101531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:23.527247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:09:23.541517 (kubelet)[3523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:09:23.642789 kubelet[3523]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:09:23.644211 kubelet[3523]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:09:23.644211 kubelet[3523]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:09:23.644211 kubelet[3523]: I0113 20:09:23.643123 3523 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:09:23.653428 kubelet[3523]: I0113 20:09:23.653354 3523 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:09:23.653428 kubelet[3523]: I0113 20:09:23.653409 3523 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:09:23.653967 kubelet[3523]: I0113 20:09:23.653817 3523 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:09:23.656910 kubelet[3523]: I0113 20:09:23.656855 3523 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:09:23.660454 kubelet[3523]: I0113 20:09:23.660391 3523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:09:23.684490 kubelet[3523]: I0113 20:09:23.684436 3523 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:09:23.685424 kubelet[3523]: I0113 20:09:23.684831 3523 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:09:23.685424 kubelet[3523]: I0113 20:09:23.684877 3523 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:09:23.685424 kubelet[3523]: I0113 20:09:23.685298 3523 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 
20:09:23.685424 kubelet[3523]: I0113 20:09:23.685320 3523 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:09:23.685424 kubelet[3523]: I0113 20:09:23.685403 3523 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:09:23.689325 kubelet[3523]: I0113 20:09:23.685608 3523 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:09:23.689325 kubelet[3523]: I0113 20:09:23.685673 3523 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:09:23.689325 kubelet[3523]: I0113 20:09:23.685729 3523 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:09:23.689325 kubelet[3523]: I0113 20:09:23.685764 3523 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:09:23.689832 kubelet[3523]: I0113 20:09:23.689788 3523 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:09:23.690993 kubelet[3523]: I0113 20:09:23.690141 3523 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:09:23.694518 kubelet[3523]: I0113 20:09:23.694463 3523 server.go:1264] "Started kubelet" Jan 13 20:09:23.698160 sudo[3536]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:09:23.699442 sudo[3536]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:09:23.703232 kubelet[3523]: I0113 20:09:23.702958 3523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:09:23.719976 kubelet[3523]: I0113 20:09:23.718081 3523 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:09:23.723071 kubelet[3523]: I0113 20:09:23.723034 3523 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:09:23.725005 kubelet[3523]: I0113 20:09:23.724718 3523 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 
20:09:23.726317 kubelet[3523]: I0113 20:09:23.726290 3523 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:09:23.733432 kubelet[3523]: I0113 20:09:23.732407 3523 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:09:23.734651 kubelet[3523]: I0113 20:09:23.734359 3523 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:09:23.735081 kubelet[3523]: I0113 20:09:23.735061 3523 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:09:23.764109 kubelet[3523]: I0113 20:09:23.763937 3523 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:09:23.764401 kubelet[3523]: I0113 20:09:23.764367 3523 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:09:23.773061 kubelet[3523]: I0113 20:09:23.772006 3523 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:09:23.774872 kubelet[3523]: I0113 20:09:23.774725 3523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:09:23.779082 kubelet[3523]: I0113 20:09:23.778852 3523 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:09:23.779082 kubelet[3523]: I0113 20:09:23.778930 3523 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:09:23.779082 kubelet[3523]: I0113 20:09:23.778996 3523 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:09:23.779289 kubelet[3523]: E0113 20:09:23.779092 3523 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:09:23.835140 kubelet[3523]: E0113 20:09:23.834702 3523 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:09:23.862253 kubelet[3523]: I0113 20:09:23.861895 3523 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-26" Jan 13 20:09:23.879921 kubelet[3523]: E0113 20:09:23.879871 3523 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:09:23.903227 kubelet[3523]: I0113 20:09:23.903188 3523 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-31-26" Jan 13 20:09:23.903631 kubelet[3523]: I0113 20:09:23.903609 3523 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-31-26" Jan 13 20:09:23.987173 kubelet[3523]: I0113 20:09:23.987141 3523 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:09:23.987847 kubelet[3523]: I0113 20:09:23.987528 3523 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:09:23.987847 kubelet[3523]: I0113 20:09:23.987714 3523 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:09:23.988427 kubelet[3523]: I0113 20:09:23.988403 3523 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:09:23.988835 kubelet[3523]: I0113 20:09:23.988565 3523 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:09:23.988835 kubelet[3523]: I0113 20:09:23.988609 3523 policy_none.go:49] "None policy: Start" Jan 13 20:09:23.991909 kubelet[3523]: I0113 20:09:23.991497 3523 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:09:23.991909 kubelet[3523]: I0113 20:09:23.991735 3523 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:09:23.992898 kubelet[3523]: I0113 20:09:23.992751 3523 state_mem.go:75] "Updated machine memory state" Jan 13 20:09:24.003775 kubelet[3523]: I0113 20:09:24.003619 3523 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:09:24.005287 
kubelet[3523]: I0113 20:09:24.004216 3523 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:09:24.009598 kubelet[3523]: I0113 20:09:24.009369 3523 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:09:24.086105 kubelet[3523]: I0113 20:09:24.081722 3523 topology_manager.go:215] "Topology Admit Handler" podUID="888b31470859a9d43b47aa7051293362" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-31-26" Jan 13 20:09:24.086105 kubelet[3523]: I0113 20:09:24.081872 3523 topology_manager.go:215] "Topology Admit Handler" podUID="edc7fcbdbc95db7d3426b2fd29419cb5" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-31-26" Jan 13 20:09:24.086105 kubelet[3523]: I0113 20:09:24.081971 3523 topology_manager.go:215] "Topology Admit Handler" podUID="e5c7db1c9dba02495840b4594a7d5af2" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-31-26" Jan 13 20:09:24.097439 kubelet[3523]: E0113 20:09:24.096466 3523 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-31-26\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 13 20:09:24.141459 kubelet[3523]: I0113 20:09:24.141399 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/edc7fcbdbc95db7d3426b2fd29419cb5-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-26\" (UID: \"edc7fcbdbc95db7d3426b2fd29419cb5\") " pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 13 20:09:24.141589 kubelet[3523]: I0113 20:09:24.141467 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5c7db1c9dba02495840b4594a7d5af2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"e5c7db1c9dba02495840b4594a7d5af2\") " 
pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 13 20:09:24.141589 kubelet[3523]: I0113 20:09:24.141513 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5c7db1c9dba02495840b4594a7d5af2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"e5c7db1c9dba02495840b4594a7d5af2\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 13 20:09:24.141589 kubelet[3523]: I0113 20:09:24.141550 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/edc7fcbdbc95db7d3426b2fd29419cb5-ca-certs\") pod \"kube-apiserver-ip-172-31-31-26\" (UID: \"edc7fcbdbc95db7d3426b2fd29419cb5\") " pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 13 20:09:24.141589 kubelet[3523]: I0113 20:09:24.141585 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/edc7fcbdbc95db7d3426b2fd29419cb5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-26\" (UID: \"edc7fcbdbc95db7d3426b2fd29419cb5\") " pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 13 20:09:24.141817 kubelet[3523]: I0113 20:09:24.141618 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5c7db1c9dba02495840b4594a7d5af2-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"e5c7db1c9dba02495840b4594a7d5af2\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 13 20:09:24.141817 kubelet[3523]: I0113 20:09:24.141655 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5c7db1c9dba02495840b4594a7d5af2-kubeconfig\") pod 
\"kube-controller-manager-ip-172-31-31-26\" (UID: \"e5c7db1c9dba02495840b4594a7d5af2\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 13 20:09:24.141817 kubelet[3523]: I0113 20:09:24.141707 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5c7db1c9dba02495840b4594a7d5af2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-26\" (UID: \"e5c7db1c9dba02495840b4594a7d5af2\") " pod="kube-system/kube-controller-manager-ip-172-31-31-26" Jan 13 20:09:24.141817 kubelet[3523]: I0113 20:09:24.141759 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/888b31470859a9d43b47aa7051293362-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-26\" (UID: \"888b31470859a9d43b47aa7051293362\") " pod="kube-system/kube-scheduler-ip-172-31-31-26" Jan 13 20:09:24.652799 sudo[3536]: pam_unix(sudo:session): session closed for user root Jan 13 20:09:24.687058 kubelet[3523]: I0113 20:09:24.686914 3523 apiserver.go:52] "Watching apiserver" Jan 13 20:09:24.735294 kubelet[3523]: I0113 20:09:24.735171 3523 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:09:24.927560 kubelet[3523]: E0113 20:09:24.926640 3523 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-26\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-26" Jan 13 20:09:24.944432 kubelet[3523]: I0113 20:09:24.944353 3523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-26" podStartSLOduration=0.944331117 podStartE2EDuration="944.331117ms" podCreationTimestamp="2025-01-13 20:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-01-13 20:09:24.930256221 +0000 UTC m=+1.382529428" watchObservedRunningTime="2025-01-13 20:09:24.944331117 +0000 UTC m=+1.396604324" Jan 13 20:09:24.961713 kubelet[3523]: I0113 20:09:24.960809 3523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-26" podStartSLOduration=2.960788157 podStartE2EDuration="2.960788157s" podCreationTimestamp="2025-01-13 20:09:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:24.960321261 +0000 UTC m=+1.412594468" watchObservedRunningTime="2025-01-13 20:09:24.960788157 +0000 UTC m=+1.413061376" Jan 13 20:09:24.962733 kubelet[3523]: I0113 20:09:24.962367 3523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-26" podStartSLOduration=0.962190513 podStartE2EDuration="962.190513ms" podCreationTimestamp="2025-01-13 20:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:24.946013277 +0000 UTC m=+1.398286484" watchObservedRunningTime="2025-01-13 20:09:24.962190513 +0000 UTC m=+1.414463744" Jan 13 20:09:26.924787 sudo[2265]: pam_unix(sudo:session): session closed for user root Jan 13 20:09:26.947973 sshd[2264]: Connection closed by 147.75.109.163 port 60870 Jan 13 20:09:26.948837 sshd-session[2262]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:26.956646 systemd[1]: sshd@6-172.31.31.26:22-147.75.109.163:60870.service: Deactivated successfully. Jan 13 20:09:26.960727 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:09:26.961328 systemd[1]: session-7.scope: Consumed 11.373s CPU time, 186.5M memory peak, 0B memory swap peak. Jan 13 20:09:26.964307 systemd-logind[1920]: Session 7 logged out. Waiting for processes to exit. 
Jan 13 20:09:26.966762 systemd-logind[1920]: Removed session 7. Jan 13 20:09:36.079281 kubelet[3523]: I0113 20:09:36.079233 3523 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:09:36.080414 containerd[1931]: time="2025-01-13T20:09:36.079745836Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:09:36.081218 kubelet[3523]: I0113 20:09:36.080888 3523 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:09:36.833161 kubelet[3523]: I0113 20:09:36.832145 3523 topology_manager.go:215] "Topology Admit Handler" podUID="8668a69f-8537-486a-910e-ffaa1d79bcb1" podNamespace="kube-system" podName="kube-proxy-8x9kg" Jan 13 20:09:36.840870 kubelet[3523]: W0113 20:09:36.840806 3523 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-31-26" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-26' and this object Jan 13 20:09:36.841062 kubelet[3523]: E0113 20:09:36.840891 3523 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-31-26" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-26' and this object Jan 13 20:09:36.846698 kubelet[3523]: W0113 20:09:36.846601 3523 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-31-26" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-26' and this object Jan 13 20:09:36.846698 kubelet[3523]: E0113 
20:09:36.846661 3523 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-31-26" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-26' and this object Jan 13 20:09:36.847102 kubelet[3523]: I0113 20:09:36.847052 3523 topology_manager.go:215] "Topology Admit Handler" podUID="3d2a6f9f-419e-4136-bdc6-fa8493027611" podNamespace="kube-system" podName="cilium-br8dv" Jan 13 20:09:36.856075 systemd[1]: Created slice kubepods-besteffort-pod8668a69f_8537_486a_910e_ffaa1d79bcb1.slice - libcontainer container kubepods-besteffort-pod8668a69f_8537_486a_910e_ffaa1d79bcb1.slice. Jan 13 20:09:36.880275 systemd[1]: Created slice kubepods-burstable-pod3d2a6f9f_419e_4136_bdc6_fa8493027611.slice - libcontainer container kubepods-burstable-pod3d2a6f9f_419e_4136_bdc6_fa8493027611.slice. Jan 13 20:09:36.924576 kubelet[3523]: I0113 20:09:36.924528 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-cni-path\") pod \"cilium-br8dv\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:36.924860 kubelet[3523]: I0113 20:09:36.924821 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-host-proc-sys-net\") pod \"cilium-br8dv\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:36.925340 kubelet[3523]: I0113 20:09:36.925100 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/8668a69f-8537-486a-910e-ffaa1d79bcb1-lib-modules\") pod \"kube-proxy-8x9kg\" (UID: \"8668a69f-8537-486a-910e-ffaa1d79bcb1\") " pod="kube-system/kube-proxy-8x9kg" Jan 13 20:09:36.925340 kubelet[3523]: I0113 20:09:36.925170 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-host-proc-sys-kernel\") pod \"cilium-br8dv\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:36.925340 kubelet[3523]: I0113 20:09:36.925237 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkcfg\" (UniqueName: \"kubernetes.io/projected/3d2a6f9f-419e-4136-bdc6-fa8493027611-kube-api-access-gkcfg\") pod \"cilium-br8dv\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:36.925340 kubelet[3523]: I0113 20:09:36.925300 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8668a69f-8537-486a-910e-ffaa1d79bcb1-kube-proxy\") pod \"kube-proxy-8x9kg\" (UID: \"8668a69f-8537-486a-910e-ffaa1d79bcb1\") " pod="kube-system/kube-proxy-8x9kg" Jan 13 20:09:36.925599 kubelet[3523]: I0113 20:09:36.925370 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8668a69f-8537-486a-910e-ffaa1d79bcb1-xtables-lock\") pod \"kube-proxy-8x9kg\" (UID: \"8668a69f-8537-486a-910e-ffaa1d79bcb1\") " pod="kube-system/kube-proxy-8x9kg" Jan 13 20:09:36.925599 kubelet[3523]: I0113 20:09:36.925437 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-bpf-maps\") pod \"cilium-br8dv\" 
(UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:36.925599 kubelet[3523]: I0113 20:09:36.925477 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d2a6f9f-419e-4136-bdc6-fa8493027611-cilium-config-path\") pod \"cilium-br8dv\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:36.925599 kubelet[3523]: I0113 20:09:36.925528 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-xtables-lock\") pod \"cilium-br8dv\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:36.925599 kubelet[3523]: I0113 20:09:36.925564 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d2a6f9f-419e-4136-bdc6-fa8493027611-clustermesh-secrets\") pod \"cilium-br8dv\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:36.925836 kubelet[3523]: I0113 20:09:36.925616 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d2a6f9f-419e-4136-bdc6-fa8493027611-hubble-tls\") pod \"cilium-br8dv\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:36.925836 kubelet[3523]: I0113 20:09:36.925660 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-lib-modules\") pod \"cilium-br8dv\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:36.925836 kubelet[3523]: I0113 
20:09:36.925697 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-hostproc\") pod \"cilium-br8dv\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:36.925836 kubelet[3523]: I0113 20:09:36.925758 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-cilium-run\") pod \"cilium-br8dv\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:36.925836 kubelet[3523]: I0113 20:09:36.925792 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s7l5\" (UniqueName: \"kubernetes.io/projected/8668a69f-8537-486a-910e-ffaa1d79bcb1-kube-api-access-8s7l5\") pod \"kube-proxy-8x9kg\" (UID: \"8668a69f-8537-486a-910e-ffaa1d79bcb1\") " pod="kube-system/kube-proxy-8x9kg" Jan 13 20:09:36.926214 kubelet[3523]: I0113 20:09:36.925835 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-cilium-cgroup\") pod \"cilium-br8dv\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:36.926214 kubelet[3523]: I0113 20:09:36.925867 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-etc-cni-netd\") pod \"cilium-br8dv\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " pod="kube-system/cilium-br8dv" Jan 13 20:09:37.175213 kubelet[3523]: I0113 20:09:37.174078 3523 topology_manager.go:215] "Topology Admit Handler" podUID="4618ec8b-3c4c-49f6-b66a-e7eaa197ff09" 
podNamespace="kube-system" podName="cilium-operator-599987898-9hsdz" Jan 13 20:09:37.191169 systemd[1]: Created slice kubepods-besteffort-pod4618ec8b_3c4c_49f6_b66a_e7eaa197ff09.slice - libcontainer container kubepods-besteffort-pod4618ec8b_3c4c_49f6_b66a_e7eaa197ff09.slice. Jan 13 20:09:37.229139 kubelet[3523]: I0113 20:09:37.229077 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4618ec8b-3c4c-49f6-b66a-e7eaa197ff09-cilium-config-path\") pod \"cilium-operator-599987898-9hsdz\" (UID: \"4618ec8b-3c4c-49f6-b66a-e7eaa197ff09\") " pod="kube-system/cilium-operator-599987898-9hsdz" Jan 13 20:09:37.229267 kubelet[3523]: I0113 20:09:37.229164 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx5xd\" (UniqueName: \"kubernetes.io/projected/4618ec8b-3c4c-49f6-b66a-e7eaa197ff09-kube-api-access-bx5xd\") pod \"cilium-operator-599987898-9hsdz\" (UID: \"4618ec8b-3c4c-49f6-b66a-e7eaa197ff09\") " pod="kube-system/cilium-operator-599987898-9hsdz" Jan 13 20:09:38.037847 kubelet[3523]: E0113 20:09:38.037724 3523 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:09:38.037847 kubelet[3523]: E0113 20:09:38.037851 3523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8668a69f-8537-486a-910e-ffaa1d79bcb1-kube-proxy podName:8668a69f-8537-486a-910e-ffaa1d79bcb1 nodeName:}" failed. No retries permitted until 2025-01-13 20:09:38.537818918 +0000 UTC m=+14.990092125 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/8668a69f-8537-486a-910e-ffaa1d79bcb1-kube-proxy") pod "kube-proxy-8x9kg" (UID: "8668a69f-8537-486a-910e-ffaa1d79bcb1") : failed to sync configmap cache: timed out waiting for the condition Jan 13 20:09:38.087122 containerd[1931]: time="2025-01-13T20:09:38.086918850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-br8dv,Uid:3d2a6f9f-419e-4136-bdc6-fa8493027611,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:38.099459 containerd[1931]: time="2025-01-13T20:09:38.099032370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9hsdz,Uid:4618ec8b-3c4c-49f6-b66a-e7eaa197ff09,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:38.146345 containerd[1931]: time="2025-01-13T20:09:38.145710403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:38.146345 containerd[1931]: time="2025-01-13T20:09:38.145806547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:38.146345 containerd[1931]: time="2025-01-13T20:09:38.145843303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:38.146345 containerd[1931]: time="2025-01-13T20:09:38.146023903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:38.170419 containerd[1931]: time="2025-01-13T20:09:38.169498891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:38.170419 containerd[1931]: time="2025-01-13T20:09:38.169597459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:38.170419 containerd[1931]: time="2025-01-13T20:09:38.169641319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:38.170419 containerd[1931]: time="2025-01-13T20:09:38.169835671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:38.207291 systemd[1]: Started cri-containerd-fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885.scope - libcontainer container fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885. Jan 13 20:09:38.228311 systemd[1]: Started cri-containerd-3c5ca15a6af730122f2034c813295a560c322e40ea798483b0f5c4be5a1c8339.scope - libcontainer container 3c5ca15a6af730122f2034c813295a560c322e40ea798483b0f5c4be5a1c8339. Jan 13 20:09:38.285351 containerd[1931]: time="2025-01-13T20:09:38.285297511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-br8dv,Uid:3d2a6f9f-419e-4136-bdc6-fa8493027611,Namespace:kube-system,Attempt:0,} returns sandbox id \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\"" Jan 13 20:09:38.290341 containerd[1931]: time="2025-01-13T20:09:38.290192275Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:09:38.322270 containerd[1931]: time="2025-01-13T20:09:38.322166335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9hsdz,Uid:4618ec8b-3c4c-49f6-b66a-e7eaa197ff09,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c5ca15a6af730122f2034c813295a560c322e40ea798483b0f5c4be5a1c8339\"" Jan 13 20:09:38.670527 containerd[1931]: time="2025-01-13T20:09:38.670207905Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-8x9kg,Uid:8668a69f-8537-486a-910e-ffaa1d79bcb1,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:38.710642 containerd[1931]: time="2025-01-13T20:09:38.710236965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:38.710642 containerd[1931]: time="2025-01-13T20:09:38.710342997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:38.710642 containerd[1931]: time="2025-01-13T20:09:38.710379477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:38.710875 containerd[1931]: time="2025-01-13T20:09:38.710566425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:38.744287 systemd[1]: Started cri-containerd-6a3fa8efccd3e81703c5aecfa437c848c734d0c002ba9182d7c1248baa99db97.scope - libcontainer container 6a3fa8efccd3e81703c5aecfa437c848c734d0c002ba9182d7c1248baa99db97. 
Jan 13 20:09:38.785766 containerd[1931]: time="2025-01-13T20:09:38.785709598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8x9kg,Uid:8668a69f-8537-486a-910e-ffaa1d79bcb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a3fa8efccd3e81703c5aecfa437c848c734d0c002ba9182d7c1248baa99db97\"" Jan 13 20:09:38.793629 containerd[1931]: time="2025-01-13T20:09:38.793398274Z" level=info msg="CreateContainer within sandbox \"6a3fa8efccd3e81703c5aecfa437c848c734d0c002ba9182d7c1248baa99db97\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:09:38.825396 containerd[1931]: time="2025-01-13T20:09:38.825343186Z" level=info msg="CreateContainer within sandbox \"6a3fa8efccd3e81703c5aecfa437c848c734d0c002ba9182d7c1248baa99db97\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5e963132e109882d81bb936bb38f4abcbcb9b5a4b4ebee32179c1291ccc8d3c0\"" Jan 13 20:09:38.827599 containerd[1931]: time="2025-01-13T20:09:38.826519102Z" level=info msg="StartContainer for \"5e963132e109882d81bb936bb38f4abcbcb9b5a4b4ebee32179c1291ccc8d3c0\"" Jan 13 20:09:38.873294 systemd[1]: Started cri-containerd-5e963132e109882d81bb936bb38f4abcbcb9b5a4b4ebee32179c1291ccc8d3c0.scope - libcontainer container 5e963132e109882d81bb936bb38f4abcbcb9b5a4b4ebee32179c1291ccc8d3c0. 
Jan 13 20:09:38.936441 containerd[1931]: time="2025-01-13T20:09:38.936055954Z" level=info msg="StartContainer for \"5e963132e109882d81bb936bb38f4abcbcb9b5a4b4ebee32179c1291ccc8d3c0\" returns successfully" Jan 13 20:09:43.811189 kubelet[3523]: I0113 20:09:43.810341 3523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8x9kg" podStartSLOduration=7.810319479 podStartE2EDuration="7.810319479s" podCreationTimestamp="2025-01-13 20:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:38.968891327 +0000 UTC m=+15.421164546" watchObservedRunningTime="2025-01-13 20:09:43.810319479 +0000 UTC m=+20.262592686" Jan 13 20:09:44.460604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188422127.mount: Deactivated successfully. Jan 13 20:09:52.668489 containerd[1931]: time="2025-01-13T20:09:52.668346779Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:52.670837 containerd[1931]: time="2025-01-13T20:09:52.670620539Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650366" Jan 13 20:09:52.674984 containerd[1931]: time="2025-01-13T20:09:52.673740563Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:52.681919 containerd[1931]: time="2025-01-13T20:09:52.681861635Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 14.391606492s" Jan 13 20:09:52.682986 containerd[1931]: time="2025-01-13T20:09:52.682885499Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:09:52.686388 containerd[1931]: time="2025-01-13T20:09:52.685540871Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:09:52.688282 containerd[1931]: time="2025-01-13T20:09:52.688210271Z" level=info msg="CreateContainer within sandbox \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:09:52.717302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707956317.mount: Deactivated successfully. Jan 13 20:09:52.718856 containerd[1931]: time="2025-01-13T20:09:52.718659191Z" level=info msg="CreateContainer within sandbox \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3\"" Jan 13 20:09:52.721057 containerd[1931]: time="2025-01-13T20:09:52.721008959Z" level=info msg="StartContainer for \"1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3\"" Jan 13 20:09:52.777322 systemd[1]: Started cri-containerd-1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3.scope - libcontainer container 1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3. 
Jan 13 20:09:52.829686 containerd[1931]: time="2025-01-13T20:09:52.829604999Z" level=info msg="StartContainer for \"1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3\" returns successfully" Jan 13 20:09:52.844370 systemd[1]: cri-containerd-1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3.scope: Deactivated successfully. Jan 13 20:09:53.706592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3-rootfs.mount: Deactivated successfully. Jan 13 20:09:54.220741 containerd[1931]: time="2025-01-13T20:09:54.220668226Z" level=info msg="shim disconnected" id=1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3 namespace=k8s.io Jan 13 20:09:54.221399 containerd[1931]: time="2025-01-13T20:09:54.221355886Z" level=warning msg="cleaning up after shim disconnected" id=1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3 namespace=k8s.io Jan 13 20:09:54.221469 containerd[1931]: time="2025-01-13T20:09:54.221404678Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:55.008516 containerd[1931]: time="2025-01-13T20:09:55.008433082Z" level=info msg="CreateContainer within sandbox \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:09:55.036252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119964830.mount: Deactivated successfully. 
Jan 13 20:09:55.037814 containerd[1931]: time="2025-01-13T20:09:55.037634818Z" level=info msg="CreateContainer within sandbox \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974\"" Jan 13 20:09:55.041971 containerd[1931]: time="2025-01-13T20:09:55.038743786Z" level=info msg="StartContainer for \"51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974\"" Jan 13 20:09:55.095285 systemd[1]: Started cri-containerd-51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974.scope - libcontainer container 51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974. Jan 13 20:09:55.148028 containerd[1931]: time="2025-01-13T20:09:55.147923831Z" level=info msg="StartContainer for \"51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974\" returns successfully" Jan 13 20:09:55.172162 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:09:55.172730 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:09:55.172908 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:09:55.179543 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:09:55.179954 systemd[1]: cri-containerd-51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974.scope: Deactivated successfully. Jan 13 20:09:55.226233 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974-rootfs.mount: Deactivated successfully. Jan 13 20:09:55.230797 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 20:09:55.307330 containerd[1931]: time="2025-01-13T20:09:55.307045356Z" level=info msg="shim disconnected" id=51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974 namespace=k8s.io Jan 13 20:09:55.307330 containerd[1931]: time="2025-01-13T20:09:55.307123800Z" level=warning msg="cleaning up after shim disconnected" id=51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974 namespace=k8s.io Jan 13 20:09:55.307330 containerd[1931]: time="2025-01-13T20:09:55.307144548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:56.011319 containerd[1931]: time="2025-01-13T20:09:56.011031815Z" level=info msg="CreateContainer within sandbox \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:09:56.064205 containerd[1931]: time="2025-01-13T20:09:56.062399868Z" level=info msg="CreateContainer within sandbox \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b\"" Jan 13 20:09:56.064993 containerd[1931]: time="2025-01-13T20:09:56.064792428Z" level=info msg="StartContainer for \"5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b\"" Jan 13 20:09:56.167285 systemd[1]: Started cri-containerd-5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b.scope - libcontainer container 5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b. Jan 13 20:09:56.232080 containerd[1931]: time="2025-01-13T20:09:56.231542568Z" level=info msg="StartContainer for \"5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b\" returns successfully" Jan 13 20:09:56.231892 systemd[1]: cri-containerd-5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b.scope: Deactivated successfully. 
Jan 13 20:09:56.271507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b-rootfs.mount: Deactivated successfully. Jan 13 20:09:56.284998 containerd[1931]: time="2025-01-13T20:09:56.284809057Z" level=info msg="shim disconnected" id=5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b namespace=k8s.io Jan 13 20:09:56.284998 containerd[1931]: time="2025-01-13T20:09:56.284905285Z" level=warning msg="cleaning up after shim disconnected" id=5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b namespace=k8s.io Jan 13 20:09:56.284998 containerd[1931]: time="2025-01-13T20:09:56.284925241Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:57.029079 containerd[1931]: time="2025-01-13T20:09:57.028998264Z" level=info msg="CreateContainer within sandbox \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:09:57.062966 containerd[1931]: time="2025-01-13T20:09:57.062829324Z" level=info msg="CreateContainer within sandbox \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02\"" Jan 13 20:09:57.065698 containerd[1931]: time="2025-01-13T20:09:57.063816337Z" level=info msg="StartContainer for \"ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02\"" Jan 13 20:09:57.119259 systemd[1]: Started cri-containerd-ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02.scope - libcontainer container ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02. Jan 13 20:09:57.159626 systemd[1]: cri-containerd-ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02.scope: Deactivated successfully. 
Jan 13 20:09:57.164397 containerd[1931]: time="2025-01-13T20:09:57.163648897Z" level=info msg="StartContainer for \"ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02\" returns successfully" Jan 13 20:09:57.198531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02-rootfs.mount: Deactivated successfully. Jan 13 20:09:57.212238 containerd[1931]: time="2025-01-13T20:09:57.212150797Z" level=info msg="shim disconnected" id=ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02 namespace=k8s.io Jan 13 20:09:57.212690 containerd[1931]: time="2025-01-13T20:09:57.212417077Z" level=warning msg="cleaning up after shim disconnected" id=ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02 namespace=k8s.io Jan 13 20:09:57.212690 containerd[1931]: time="2025-01-13T20:09:57.212441185Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:58.033254 containerd[1931]: time="2025-01-13T20:09:58.032534197Z" level=info msg="CreateContainer within sandbox \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:09:58.070497 containerd[1931]: time="2025-01-13T20:09:58.070423238Z" level=info msg="CreateContainer within sandbox \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\"" Jan 13 20:09:58.073494 containerd[1931]: time="2025-01-13T20:09:58.073445762Z" level=info msg="StartContainer for \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\"" Jan 13 20:09:58.130255 systemd[1]: Started cri-containerd-6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3.scope - libcontainer container 6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3. 
Jan 13 20:09:58.192984 containerd[1931]: time="2025-01-13T20:09:58.192822686Z" level=info msg="StartContainer for \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\" returns successfully" Jan 13 20:09:58.355508 kubelet[3523]: I0113 20:09:58.355347 3523 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:09:58.405635 kubelet[3523]: I0113 20:09:58.405565 3523 topology_manager.go:215] "Topology Admit Handler" podUID="6b18dac6-e808-4d46-a35f-67198057c01d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-72jbh" Jan 13 20:09:58.413197 kubelet[3523]: I0113 20:09:58.411908 3523 topology_manager.go:215] "Topology Admit Handler" podUID="cdcaacee-34ea-4a90-9b05-3ff60f052f77" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5h8xt" Jan 13 20:09:58.431019 systemd[1]: Created slice kubepods-burstable-pod6b18dac6_e808_4d46_a35f_67198057c01d.slice - libcontainer container kubepods-burstable-pod6b18dac6_e808_4d46_a35f_67198057c01d.slice. Jan 13 20:09:58.446186 systemd[1]: Created slice kubepods-burstable-podcdcaacee_34ea_4a90_9b05_3ff60f052f77.slice - libcontainer container kubepods-burstable-podcdcaacee_34ea_4a90_9b05_3ff60f052f77.slice. 
Jan 13 20:09:58.474026 kubelet[3523]: I0113 20:09:58.473979 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b18dac6-e808-4d46-a35f-67198057c01d-config-volume\") pod \"coredns-7db6d8ff4d-72jbh\" (UID: \"6b18dac6-e808-4d46-a35f-67198057c01d\") " pod="kube-system/coredns-7db6d8ff4d-72jbh" Jan 13 20:09:58.474346 kubelet[3523]: I0113 20:09:58.474320 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdcaacee-34ea-4a90-9b05-3ff60f052f77-config-volume\") pod \"coredns-7db6d8ff4d-5h8xt\" (UID: \"cdcaacee-34ea-4a90-9b05-3ff60f052f77\") " pod="kube-system/coredns-7db6d8ff4d-5h8xt" Jan 13 20:09:58.474622 kubelet[3523]: I0113 20:09:58.474585 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cz2j\" (UniqueName: \"kubernetes.io/projected/6b18dac6-e808-4d46-a35f-67198057c01d-kube-api-access-8cz2j\") pod \"coredns-7db6d8ff4d-72jbh\" (UID: \"6b18dac6-e808-4d46-a35f-67198057c01d\") " pod="kube-system/coredns-7db6d8ff4d-72jbh" Jan 13 20:09:58.474850 kubelet[3523]: I0113 20:09:58.474774 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zsk\" (UniqueName: \"kubernetes.io/projected/cdcaacee-34ea-4a90-9b05-3ff60f052f77-kube-api-access-64zsk\") pod \"coredns-7db6d8ff4d-5h8xt\" (UID: \"cdcaacee-34ea-4a90-9b05-3ff60f052f77\") " pod="kube-system/coredns-7db6d8ff4d-5h8xt" Jan 13 20:09:58.744634 containerd[1931]: time="2025-01-13T20:09:58.743935925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-72jbh,Uid:6b18dac6-e808-4d46-a35f-67198057c01d,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:58.756503 containerd[1931]: time="2025-01-13T20:09:58.755471669Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-5h8xt,Uid:cdcaacee-34ea-4a90-9b05-3ff60f052f77,Namespace:kube-system,Attempt:0,}" Jan 13 20:10:03.560425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633139573.mount: Deactivated successfully. Jan 13 20:10:05.321729 containerd[1931]: time="2025-01-13T20:10:05.321653782Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:05.323876 containerd[1931]: time="2025-01-13T20:10:05.323803042Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137710" Jan 13 20:10:05.325346 containerd[1931]: time="2025-01-13T20:10:05.325262134Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:10:05.328734 containerd[1931]: time="2025-01-13T20:10:05.328526398Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 12.642922239s" Jan 13 20:10:05.328734 containerd[1931]: time="2025-01-13T20:10:05.328586686Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 20:10:05.335031 containerd[1931]: time="2025-01-13T20:10:05.334818790Z" level=info msg="CreateContainer within sandbox 
\"3c5ca15a6af730122f2034c813295a560c322e40ea798483b0f5c4be5a1c8339\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:10:05.360841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount805821254.mount: Deactivated successfully. Jan 13 20:10:05.361548 containerd[1931]: time="2025-01-13T20:10:05.360877882Z" level=info msg="CreateContainer within sandbox \"3c5ca15a6af730122f2034c813295a560c322e40ea798483b0f5c4be5a1c8339\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\"" Jan 13 20:10:05.363438 containerd[1931]: time="2025-01-13T20:10:05.362472058Z" level=info msg="StartContainer for \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\"" Jan 13 20:10:05.421275 systemd[1]: Started cri-containerd-b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698.scope - libcontainer container b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698. Jan 13 20:10:05.477830 containerd[1931]: time="2025-01-13T20:10:05.477731998Z" level=info msg="StartContainer for \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\" returns successfully" Jan 13 20:10:06.224989 kubelet[3523]: I0113 20:10:06.223451 3523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-br8dv" podStartSLOduration=15.827815438 podStartE2EDuration="30.223431094s" podCreationTimestamp="2025-01-13 20:09:36 +0000 UTC" firstStartedPulling="2025-01-13 20:09:38.288545767 +0000 UTC m=+14.740818974" lastFinishedPulling="2025-01-13 20:09:52.684161435 +0000 UTC m=+29.136434630" observedRunningTime="2025-01-13 20:09:59.086085519 +0000 UTC m=+35.538358762" watchObservedRunningTime="2025-01-13 20:10:06.223431094 +0000 UTC m=+42.675704325" Jan 13 20:10:06.789491 systemd[1]: Started sshd@7-172.31.31.26:22-147.75.109.163:35838.service - OpenSSH per-connection server daemon (147.75.109.163:35838). 
Jan 13 20:10:06.992540 sshd[4352]: Accepted publickey for core from 147.75.109.163 port 35838 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:06.995316 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:07.008171 systemd-logind[1920]: New session 8 of user core. Jan 13 20:10:07.016243 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:10:07.487986 sshd[4354]: Connection closed by 147.75.109.163 port 35838 Jan 13 20:10:07.487881 sshd-session[4352]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:07.494843 systemd[1]: sshd@7-172.31.31.26:22-147.75.109.163:35838.service: Deactivated successfully. Jan 13 20:10:07.500846 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:10:07.505318 systemd-logind[1920]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:10:07.507221 systemd-logind[1920]: Removed session 8. Jan 13 20:10:09.843245 systemd-networkd[1759]: cilium_host: Link UP Jan 13 20:10:09.843536 systemd-networkd[1759]: cilium_net: Link UP Jan 13 20:10:09.843846 systemd-networkd[1759]: cilium_net: Gained carrier Jan 13 20:10:09.848229 systemd-networkd[1759]: cilium_host: Gained carrier Jan 13 20:10:09.857930 (udev-worker)[4374]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:10:09.859421 (udev-worker)[4379]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:10:10.008745 (udev-worker)[4386]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:10:10.019695 systemd-networkd[1759]: cilium_vxlan: Link UP Jan 13 20:10:10.019715 systemd-networkd[1759]: cilium_vxlan: Gained carrier Jan 13 20:10:10.415820 systemd-networkd[1759]: cilium_net: Gained IPv6LL Jan 13 20:10:10.504982 kernel: NET: Registered PF_ALG protocol family Jan 13 20:10:10.798223 systemd-networkd[1759]: cilium_host: Gained IPv6LL Jan 13 20:10:11.438334 systemd-networkd[1759]: cilium_vxlan: Gained IPv6LL Jan 13 20:10:11.827889 systemd-networkd[1759]: lxc_health: Link UP Jan 13 20:10:11.836018 systemd-networkd[1759]: lxc_health: Gained carrier Jan 13 20:10:12.124374 kubelet[3523]: I0113 20:10:12.123982 3523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-9hsdz" podStartSLOduration=8.118236516 podStartE2EDuration="35.123923871s" podCreationTimestamp="2025-01-13 20:09:37 +0000 UTC" firstStartedPulling="2025-01-13 20:09:38.324778927 +0000 UTC m=+14.777052146" lastFinishedPulling="2025-01-13 20:10:05.330466294 +0000 UTC m=+41.782739501" observedRunningTime="2025-01-13 20:10:06.233058418 +0000 UTC m=+42.685331625" watchObservedRunningTime="2025-01-13 20:10:12.123923871 +0000 UTC m=+48.576197066" Jan 13 20:10:12.372670 systemd-networkd[1759]: lxc58496d85bab2: Link UP Jan 13 20:10:12.381113 kernel: eth0: renamed from tmp278bb Jan 13 20:10:12.389356 systemd-networkd[1759]: lxc58496d85bab2: Gained carrier Jan 13 20:10:12.427984 kernel: eth0: renamed from tmpd0c5a Jan 13 20:10:12.434307 systemd-networkd[1759]: lxcf162fef8f86d: Link UP Jan 13 20:10:12.440037 systemd-networkd[1759]: lxcf162fef8f86d: Gained carrier Jan 13 20:10:12.531482 systemd[1]: Started sshd@8-172.31.31.26:22-147.75.109.163:39874.service - OpenSSH per-connection server daemon (147.75.109.163:39874). 
Jan 13 20:10:12.748742 sshd[4720]: Accepted publickey for core from 147.75.109.163 port 39874 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:12.752400 sshd-session[4720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:12.765077 systemd-logind[1920]: New session 9 of user core. Jan 13 20:10:12.775904 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:10:13.117996 sshd[4726]: Connection closed by 147.75.109.163 port 39874 Jan 13 20:10:13.118890 sshd-session[4720]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:13.127655 systemd[1]: sshd@8-172.31.31.26:22-147.75.109.163:39874.service: Deactivated successfully. Jan 13 20:10:13.134840 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:10:13.140374 systemd-logind[1920]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:10:13.145023 systemd-logind[1920]: Removed session 9. Jan 13 20:10:13.230193 systemd-networkd[1759]: lxc_health: Gained IPv6LL Jan 13 20:10:13.551162 systemd-networkd[1759]: lxcf162fef8f86d: Gained IPv6LL Jan 13 20:10:14.193378 systemd-networkd[1759]: lxc58496d85bab2: Gained IPv6LL Jan 13 20:10:16.968908 ntpd[1915]: Listen normally on 8 cilium_host 192.168.0.63:123 Jan 13 20:10:16.969343 ntpd[1915]: Listen normally on 9 cilium_net [fe80::dcb3:eff:fee0:efa%4]:123 Jan 13 20:10:16.971191 ntpd[1915]: 13 Jan 20:10:16 ntpd[1915]: Listen normally on 8 cilium_host 192.168.0.63:123 Jan 13 20:10:16.971191 ntpd[1915]: 13 Jan 20:10:16 ntpd[1915]: Listen normally on 9 cilium_net [fe80::dcb3:eff:fee0:efa%4]:123 Jan 13 20:10:16.971191 ntpd[1915]: 13 Jan 20:10:16 ntpd[1915]: Listen normally on 10 cilium_host [fe80::4035:d2ff:fef3:a1b4%5]:123 Jan 13 20:10:16.971191 ntpd[1915]: 13 Jan 20:10:16 ntpd[1915]: Listen normally on 11 cilium_vxlan [fe80::4c99:9ff:fe27:b49d%6]:123 Jan 13 20:10:16.971191 ntpd[1915]: 13 Jan 20:10:16 ntpd[1915]: Listen normally on 12 lxc_health 
[fe80::a47e:b5ff:fe4a:3203%8]:123 Jan 13 20:10:16.971191 ntpd[1915]: 13 Jan 20:10:16 ntpd[1915]: Listen normally on 13 lxc58496d85bab2 [fe80::4e:2ff:fea4:5807%10]:123 Jan 13 20:10:16.971191 ntpd[1915]: 13 Jan 20:10:16 ntpd[1915]: Listen normally on 14 lxcf162fef8f86d [fe80::8ce7:3cff:fe4c:e8f2%12]:123 Jan 13 20:10:16.969430 ntpd[1915]: Listen normally on 10 cilium_host [fe80::4035:d2ff:fef3:a1b4%5]:123 Jan 13 20:10:16.969498 ntpd[1915]: Listen normally on 11 cilium_vxlan [fe80::4c99:9ff:fe27:b49d%6]:123 Jan 13 20:10:16.969563 ntpd[1915]: Listen normally on 12 lxc_health [fe80::a47e:b5ff:fe4a:3203%8]:123 Jan 13 20:10:16.969629 ntpd[1915]: Listen normally on 13 lxc58496d85bab2 [fe80::4e:2ff:fea4:5807%10]:123 Jan 13 20:10:16.969693 ntpd[1915]: Listen normally on 14 lxcf162fef8f86d [fe80::8ce7:3cff:fe4c:e8f2%12]:123 Jan 13 20:10:18.160129 systemd[1]: Started sshd@9-172.31.31.26:22-147.75.109.163:55050.service - OpenSSH per-connection server daemon (147.75.109.163:55050). Jan 13 20:10:18.350461 sshd[4751]: Accepted publickey for core from 147.75.109.163 port 55050 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:18.353776 sshd-session[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:18.366108 systemd-logind[1920]: New session 10 of user core. Jan 13 20:10:18.368979 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:10:18.658907 sshd[4753]: Connection closed by 147.75.109.163 port 55050 Jan 13 20:10:18.661259 sshd-session[4751]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:18.667597 systemd-logind[1920]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:10:18.672204 systemd[1]: sshd@9-172.31.31.26:22-147.75.109.163:55050.service: Deactivated successfully. Jan 13 20:10:18.680894 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:10:18.686052 systemd-logind[1920]: Removed session 10. 
Jan 13 20:10:20.926302 containerd[1931]: time="2025-01-13T20:10:20.925873167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:10:20.926302 containerd[1931]: time="2025-01-13T20:10:20.925991823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:10:20.926302 containerd[1931]: time="2025-01-13T20:10:20.926019099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:20.926302 containerd[1931]: time="2025-01-13T20:10:20.926168343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:20.958060 containerd[1931]: time="2025-01-13T20:10:20.954490935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:10:20.958060 containerd[1931]: time="2025-01-13T20:10:20.954607935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:10:20.958060 containerd[1931]: time="2025-01-13T20:10:20.954645219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:20.958060 containerd[1931]: time="2025-01-13T20:10:20.954802047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:10:21.008638 systemd[1]: Started cri-containerd-d0c5a8494d52481db3b4fa766f93b2789f43cff12daf71eb8709f0f271e67033.scope - libcontainer container d0c5a8494d52481db3b4fa766f93b2789f43cff12daf71eb8709f0f271e67033. 
Jan 13 20:10:21.027539 systemd[1]: run-containerd-runc-k8s.io-d0c5a8494d52481db3b4fa766f93b2789f43cff12daf71eb8709f0f271e67033-runc.oDnWbO.mount: Deactivated successfully. Jan 13 20:10:21.069726 systemd[1]: Started cri-containerd-278bb429bb6f8a38c677d50bc69907ef9d8b8fb3e161d863beaed80f888bdf21.scope - libcontainer container 278bb429bb6f8a38c677d50bc69907ef9d8b8fb3e161d863beaed80f888bdf21. Jan 13 20:10:21.166416 containerd[1931]: time="2025-01-13T20:10:21.166183560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5h8xt,Uid:cdcaacee-34ea-4a90-9b05-3ff60f052f77,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0c5a8494d52481db3b4fa766f93b2789f43cff12daf71eb8709f0f271e67033\"" Jan 13 20:10:21.184151 containerd[1931]: time="2025-01-13T20:10:21.183191232Z" level=info msg="CreateContainer within sandbox \"d0c5a8494d52481db3b4fa766f93b2789f43cff12daf71eb8709f0f271e67033\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:10:21.195929 containerd[1931]: time="2025-01-13T20:10:21.195365832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-72jbh,Uid:6b18dac6-e808-4d46-a35f-67198057c01d,Namespace:kube-system,Attempt:0,} returns sandbox id \"278bb429bb6f8a38c677d50bc69907ef9d8b8fb3e161d863beaed80f888bdf21\"" Jan 13 20:10:21.206024 containerd[1931]: time="2025-01-13T20:10:21.205658256Z" level=info msg="CreateContainer within sandbox \"278bb429bb6f8a38c677d50bc69907ef9d8b8fb3e161d863beaed80f888bdf21\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:10:21.224806 containerd[1931]: time="2025-01-13T20:10:21.224598757Z" level=info msg="CreateContainer within sandbox \"d0c5a8494d52481db3b4fa766f93b2789f43cff12daf71eb8709f0f271e67033\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"665fa66c8e548dbf1216f0db490066c0616ee59ab75248f53b61af5427fc0426\"" Jan 13 20:10:21.225626 containerd[1931]: time="2025-01-13T20:10:21.225493501Z" level=info msg="StartContainer 
for \"665fa66c8e548dbf1216f0db490066c0616ee59ab75248f53b61af5427fc0426\"" Jan 13 20:10:21.240481 containerd[1931]: time="2025-01-13T20:10:21.240356233Z" level=info msg="CreateContainer within sandbox \"278bb429bb6f8a38c677d50bc69907ef9d8b8fb3e161d863beaed80f888bdf21\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"032b0cfc14cdda276f203a98b90ef95051b44bfac231b94a63f1888b2b8400b5\"" Jan 13 20:10:21.243326 containerd[1931]: time="2025-01-13T20:10:21.242552941Z" level=info msg="StartContainer for \"032b0cfc14cdda276f203a98b90ef95051b44bfac231b94a63f1888b2b8400b5\"" Jan 13 20:10:21.311729 systemd[1]: Started cri-containerd-032b0cfc14cdda276f203a98b90ef95051b44bfac231b94a63f1888b2b8400b5.scope - libcontainer container 032b0cfc14cdda276f203a98b90ef95051b44bfac231b94a63f1888b2b8400b5. Jan 13 20:10:21.344447 systemd[1]: Started cri-containerd-665fa66c8e548dbf1216f0db490066c0616ee59ab75248f53b61af5427fc0426.scope - libcontainer container 665fa66c8e548dbf1216f0db490066c0616ee59ab75248f53b61af5427fc0426. 
Jan 13 20:10:21.417125 containerd[1931]: time="2025-01-13T20:10:21.416500045Z" level=info msg="StartContainer for \"032b0cfc14cdda276f203a98b90ef95051b44bfac231b94a63f1888b2b8400b5\" returns successfully" Jan 13 20:10:21.453200 containerd[1931]: time="2025-01-13T20:10:21.452897702Z" level=info msg="StartContainer for \"665fa66c8e548dbf1216f0db490066c0616ee59ab75248f53b61af5427fc0426\" returns successfully" Jan 13 20:10:22.131109 kubelet[3523]: I0113 20:10:22.129779 3523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-72jbh" podStartSLOduration=45.129757537 podStartE2EDuration="45.129757537s" podCreationTimestamp="2025-01-13 20:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:10:22.129093301 +0000 UTC m=+58.581366544" watchObservedRunningTime="2025-01-13 20:10:22.129757537 +0000 UTC m=+58.582030744" Jan 13 20:10:22.157919 kubelet[3523]: I0113 20:10:22.157022 3523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5h8xt" podStartSLOduration=45.156996505 podStartE2EDuration="45.156996505s" podCreationTimestamp="2025-01-13 20:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:10:22.153902689 +0000 UTC m=+58.606175932" watchObservedRunningTime="2025-01-13 20:10:22.156996505 +0000 UTC m=+58.609269736" Jan 13 20:10:23.702600 systemd[1]: Started sshd@10-172.31.31.26:22-147.75.109.163:55066.service - OpenSSH per-connection server daemon (147.75.109.163:55066). 
Jan 13 20:10:23.883024 sshd[4936]: Accepted publickey for core from 147.75.109.163 port 55066 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:23.885620 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:23.894051 systemd-logind[1920]: New session 11 of user core. Jan 13 20:10:23.904230 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:10:24.155925 sshd[4940]: Connection closed by 147.75.109.163 port 55066 Jan 13 20:10:24.155797 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:24.161430 systemd-logind[1920]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:10:24.162258 systemd[1]: sshd@10-172.31.31.26:22-147.75.109.163:55066.service: Deactivated successfully. Jan 13 20:10:24.165995 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:10:24.169653 systemd-logind[1920]: Removed session 11. Jan 13 20:10:29.199461 systemd[1]: Started sshd@11-172.31.31.26:22-147.75.109.163:33506.service - OpenSSH per-connection server daemon (147.75.109.163:33506). Jan 13 20:10:29.390627 sshd[4956]: Accepted publickey for core from 147.75.109.163 port 33506 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:29.393105 sshd-session[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:29.402069 systemd-logind[1920]: New session 12 of user core. Jan 13 20:10:29.410269 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:10:29.653704 sshd[4958]: Connection closed by 147.75.109.163 port 33506 Jan 13 20:10:29.654577 sshd-session[4956]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:29.661051 systemd[1]: sshd@11-172.31.31.26:22-147.75.109.163:33506.service: Deactivated successfully. Jan 13 20:10:29.665107 systemd[1]: session-12.scope: Deactivated successfully. 
Jan 13 20:10:29.667605 systemd-logind[1920]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:10:29.669477 systemd-logind[1920]: Removed session 12. Jan 13 20:10:34.696513 systemd[1]: Started sshd@12-172.31.31.26:22-147.75.109.163:33522.service - OpenSSH per-connection server daemon (147.75.109.163:33522). Jan 13 20:10:34.875510 sshd[4971]: Accepted publickey for core from 147.75.109.163 port 33522 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:34.878123 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:34.885933 systemd-logind[1920]: New session 13 of user core. Jan 13 20:10:34.891319 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:10:35.135269 sshd[4973]: Connection closed by 147.75.109.163 port 33522 Jan 13 20:10:35.136150 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:35.142495 systemd[1]: sshd@12-172.31.31.26:22-147.75.109.163:33522.service: Deactivated successfully. Jan 13 20:10:35.148603 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:10:35.150696 systemd-logind[1920]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:10:35.153074 systemd-logind[1920]: Removed session 13. Jan 13 20:10:40.176466 systemd[1]: Started sshd@13-172.31.31.26:22-147.75.109.163:42830.service - OpenSSH per-connection server daemon (147.75.109.163:42830). Jan 13 20:10:40.360002 sshd[4988]: Accepted publickey for core from 147.75.109.163 port 42830 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:40.362482 sshd-session[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:40.370752 systemd-logind[1920]: New session 14 of user core. Jan 13 20:10:40.376226 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 13 20:10:40.617516 sshd[4990]: Connection closed by 147.75.109.163 port 42830 Jan 13 20:10:40.618059 sshd-session[4988]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:40.630155 systemd[1]: sshd@13-172.31.31.26:22-147.75.109.163:42830.service: Deactivated successfully. Jan 13 20:10:40.634877 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:10:40.636881 systemd-logind[1920]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:10:40.657537 systemd[1]: Started sshd@14-172.31.31.26:22-147.75.109.163:42842.service - OpenSSH per-connection server daemon (147.75.109.163:42842). Jan 13 20:10:40.660748 systemd-logind[1920]: Removed session 14. Jan 13 20:10:40.842075 sshd[5002]: Accepted publickey for core from 147.75.109.163 port 42842 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:40.844668 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:40.853234 systemd-logind[1920]: New session 15 of user core. Jan 13 20:10:40.859217 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:10:41.185623 sshd[5005]: Connection closed by 147.75.109.163 port 42842 Jan 13 20:10:41.187622 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:41.198426 systemd[1]: sshd@14-172.31.31.26:22-147.75.109.163:42842.service: Deactivated successfully. Jan 13 20:10:41.206551 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:10:41.209307 systemd-logind[1920]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:10:41.235827 systemd[1]: Started sshd@15-172.31.31.26:22-147.75.109.163:42850.service - OpenSSH per-connection server daemon (147.75.109.163:42850). Jan 13 20:10:41.238479 systemd-logind[1920]: Removed session 15. 
Jan 13 20:10:41.426561 sshd[5014]: Accepted publickey for core from 147.75.109.163 port 42850 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:41.429047 sshd-session[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:41.437246 systemd-logind[1920]: New session 16 of user core. Jan 13 20:10:41.443233 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:10:41.684221 sshd[5016]: Connection closed by 147.75.109.163 port 42850 Jan 13 20:10:41.684701 sshd-session[5014]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:41.692181 systemd[1]: sshd@15-172.31.31.26:22-147.75.109.163:42850.service: Deactivated successfully. Jan 13 20:10:41.695843 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:10:41.698379 systemd-logind[1920]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:10:41.700214 systemd-logind[1920]: Removed session 16. Jan 13 20:10:46.722477 systemd[1]: Started sshd@16-172.31.31.26:22-147.75.109.163:42852.service - OpenSSH per-connection server daemon (147.75.109.163:42852). Jan 13 20:10:46.902722 sshd[5028]: Accepted publickey for core from 147.75.109.163 port 42852 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:46.905289 sshd-session[5028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:46.913733 systemd-logind[1920]: New session 17 of user core. Jan 13 20:10:46.920221 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:10:47.157618 sshd[5030]: Connection closed by 147.75.109.163 port 42852 Jan 13 20:10:47.158505 sshd-session[5028]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:47.164798 systemd[1]: sshd@16-172.31.31.26:22-147.75.109.163:42852.service: Deactivated successfully. Jan 13 20:10:47.169771 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 13 20:10:47.171382 systemd-logind[1920]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:10:47.173701 systemd-logind[1920]: Removed session 17. Jan 13 20:10:52.196474 systemd[1]: Started sshd@17-172.31.31.26:22-147.75.109.163:50042.service - OpenSSH per-connection server daemon (147.75.109.163:50042). Jan 13 20:10:52.383621 sshd[5042]: Accepted publickey for core from 147.75.109.163 port 50042 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:52.386169 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:52.393782 systemd-logind[1920]: New session 18 of user core. Jan 13 20:10:52.400192 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:10:52.644465 sshd[5044]: Connection closed by 147.75.109.163 port 50042 Jan 13 20:10:52.645401 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:52.650476 systemd[1]: sshd@17-172.31.31.26:22-147.75.109.163:50042.service: Deactivated successfully. Jan 13 20:10:52.653501 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:10:52.657825 systemd-logind[1920]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:10:52.660420 systemd-logind[1920]: Removed session 18. Jan 13 20:10:57.687563 systemd[1]: Started sshd@18-172.31.31.26:22-147.75.109.163:48442.service - OpenSSH per-connection server daemon (147.75.109.163:48442). Jan 13 20:10:57.871343 sshd[5055]: Accepted publickey for core from 147.75.109.163 port 48442 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:57.873861 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:57.884385 systemd-logind[1920]: New session 19 of user core. Jan 13 20:10:57.889187 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 13 20:10:58.131454 sshd[5057]: Connection closed by 147.75.109.163 port 48442 Jan 13 20:10:58.130518 sshd-session[5055]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:58.136707 systemd[1]: sshd@18-172.31.31.26:22-147.75.109.163:48442.service: Deactivated successfully. Jan 13 20:10:58.141111 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:10:58.142861 systemd-logind[1920]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:10:58.145363 systemd-logind[1920]: Removed session 19. Jan 13 20:10:58.170066 systemd[1]: Started sshd@19-172.31.31.26:22-147.75.109.163:48444.service - OpenSSH per-connection server daemon (147.75.109.163:48444). Jan 13 20:10:58.361524 sshd[5068]: Accepted publickey for core from 147.75.109.163 port 48444 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:58.364000 sshd-session[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:58.372127 systemd-logind[1920]: New session 20 of user core. Jan 13 20:10:58.383245 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:10:58.678049 sshd[5070]: Connection closed by 147.75.109.163 port 48444 Jan 13 20:10:58.678792 sshd-session[5068]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:58.686188 systemd[1]: sshd@19-172.31.31.26:22-147.75.109.163:48444.service: Deactivated successfully. Jan 13 20:10:58.690576 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:10:58.692180 systemd-logind[1920]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:10:58.694331 systemd-logind[1920]: Removed session 20. Jan 13 20:10:58.716448 systemd[1]: Started sshd@20-172.31.31.26:22-147.75.109.163:48448.service - OpenSSH per-connection server daemon (147.75.109.163:48448). 
Jan 13 20:10:58.906262 sshd[5079]: Accepted publickey for core from 147.75.109.163 port 48448 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:58.908771 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:58.916919 systemd-logind[1920]: New session 21 of user core. Jan 13 20:10:58.927235 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:11:01.462746 sshd[5081]: Connection closed by 147.75.109.163 port 48448 Jan 13 20:11:01.463398 sshd-session[5079]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:01.473818 systemd[1]: sshd@20-172.31.31.26:22-147.75.109.163:48448.service: Deactivated successfully. Jan 13 20:11:01.483116 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:11:01.489293 systemd-logind[1920]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:11:01.517622 systemd[1]: Started sshd@21-172.31.31.26:22-147.75.109.163:48452.service - OpenSSH per-connection server daemon (147.75.109.163:48452). Jan 13 20:11:01.521224 systemd-logind[1920]: Removed session 21. Jan 13 20:11:01.707915 sshd[5098]: Accepted publickey for core from 147.75.109.163 port 48452 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:01.710534 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:01.719368 systemd-logind[1920]: New session 22 of user core. Jan 13 20:11:01.726211 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:11:02.211317 sshd[5100]: Connection closed by 147.75.109.163 port 48452 Jan 13 20:11:02.212205 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:02.222420 systemd[1]: sshd@21-172.31.31.26:22-147.75.109.163:48452.service: Deactivated successfully. Jan 13 20:11:02.226389 systemd[1]: session-22.scope: Deactivated successfully. 
Jan 13 20:11:02.228531 systemd-logind[1920]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:11:02.230664 systemd-logind[1920]: Removed session 22. Jan 13 20:11:02.248519 systemd[1]: Started sshd@22-172.31.31.26:22-147.75.109.163:48454.service - OpenSSH per-connection server daemon (147.75.109.163:48454). Jan 13 20:11:02.435224 sshd[5109]: Accepted publickey for core from 147.75.109.163 port 48454 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:02.437677 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:02.445249 systemd-logind[1920]: New session 23 of user core. Jan 13 20:11:02.455214 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:11:02.686933 sshd[5111]: Connection closed by 147.75.109.163 port 48454 Jan 13 20:11:02.687815 sshd-session[5109]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:02.694675 systemd[1]: sshd@22-172.31.31.26:22-147.75.109.163:48454.service: Deactivated successfully. Jan 13 20:11:02.699234 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:11:02.701137 systemd-logind[1920]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:11:02.703415 systemd-logind[1920]: Removed session 23. Jan 13 20:11:07.726472 systemd[1]: Started sshd@23-172.31.31.26:22-147.75.109.163:33190.service - OpenSSH per-connection server daemon (147.75.109.163:33190). Jan 13 20:11:07.913922 sshd[5121]: Accepted publickey for core from 147.75.109.163 port 33190 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:07.916362 sshd-session[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:07.925102 systemd-logind[1920]: New session 24 of user core. Jan 13 20:11:07.931215 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 13 20:11:08.171048 sshd[5123]: Connection closed by 147.75.109.163 port 33190 Jan 13 20:11:08.171557 sshd-session[5121]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:08.181241 systemd[1]: sshd@23-172.31.31.26:22-147.75.109.163:33190.service: Deactivated successfully. Jan 13 20:11:08.185868 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:11:08.187343 systemd-logind[1920]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:11:08.190681 systemd-logind[1920]: Removed session 24. Jan 13 20:11:13.211446 systemd[1]: Started sshd@24-172.31.31.26:22-147.75.109.163:33194.service - OpenSSH per-connection server daemon (147.75.109.163:33194). Jan 13 20:11:13.401902 sshd[5139]: Accepted publickey for core from 147.75.109.163 port 33194 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:13.404383 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:13.413003 systemd-logind[1920]: New session 25 of user core. Jan 13 20:11:13.423217 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:11:13.664928 sshd[5141]: Connection closed by 147.75.109.163 port 33194 Jan 13 20:11:13.665776 sshd-session[5139]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:13.671915 systemd[1]: sshd@24-172.31.31.26:22-147.75.109.163:33194.service: Deactivated successfully. Jan 13 20:11:13.676694 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:11:13.678575 systemd-logind[1920]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:11:13.680601 systemd-logind[1920]: Removed session 25. Jan 13 20:11:18.709509 systemd[1]: Started sshd@25-172.31.31.26:22-147.75.109.163:48704.service - OpenSSH per-connection server daemon (147.75.109.163:48704). 
Jan 13 20:11:18.887177 sshd[5151]: Accepted publickey for core from 147.75.109.163 port 48704 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:18.889296 sshd-session[5151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:18.898515 systemd-logind[1920]: New session 26 of user core. Jan 13 20:11:18.901220 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 20:11:19.135663 sshd[5153]: Connection closed by 147.75.109.163 port 48704 Jan 13 20:11:19.136557 sshd-session[5151]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:19.144555 systemd[1]: sshd@25-172.31.31.26:22-147.75.109.163:48704.service: Deactivated successfully. Jan 13 20:11:19.148252 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:11:19.149786 systemd-logind[1920]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:11:19.152164 systemd-logind[1920]: Removed session 26. Jan 13 20:11:24.173481 systemd[1]: Started sshd@26-172.31.31.26:22-147.75.109.163:48716.service - OpenSSH per-connection server daemon (147.75.109.163:48716). Jan 13 20:11:24.362104 sshd[5166]: Accepted publickey for core from 147.75.109.163 port 48716 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:24.364685 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:24.373030 systemd-logind[1920]: New session 27 of user core. Jan 13 20:11:24.380209 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 20:11:24.632167 sshd[5168]: Connection closed by 147.75.109.163 port 48716 Jan 13 20:11:24.631438 sshd-session[5166]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:24.636636 systemd[1]: sshd@26-172.31.31.26:22-147.75.109.163:48716.service: Deactivated successfully. Jan 13 20:11:24.640211 systemd[1]: session-27.scope: Deactivated successfully. 
Jan 13 20:11:24.643706 systemd-logind[1920]: Session 27 logged out. Waiting for processes to exit. Jan 13 20:11:24.646087 systemd-logind[1920]: Removed session 27. Jan 13 20:11:24.669477 systemd[1]: Started sshd@27-172.31.31.26:22-147.75.109.163:48722.service - OpenSSH per-connection server daemon (147.75.109.163:48722). Jan 13 20:11:24.858403 sshd[5178]: Accepted publickey for core from 147.75.109.163 port 48722 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:24.861377 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:24.869141 systemd-logind[1920]: New session 28 of user core. Jan 13 20:11:24.875193 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 20:11:28.333055 containerd[1931]: time="2025-01-13T20:11:28.332981238Z" level=info msg="StopContainer for \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\" with timeout 30 (s)" Jan 13 20:11:28.336399 containerd[1931]: time="2025-01-13T20:11:28.336323442Z" level=info msg="Stop container \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\" with signal terminated" Jan 13 20:11:28.365005 systemd[1]: run-containerd-runc-k8s.io-6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3-runc.PJXOW1.mount: Deactivated successfully. Jan 13 20:11:28.387810 systemd[1]: cri-containerd-b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698.scope: Deactivated successfully. 
Jan 13 20:11:28.390460 containerd[1931]: time="2025-01-13T20:11:28.390217722Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:11:28.418523 containerd[1931]: time="2025-01-13T20:11:28.418311618Z" level=info msg="StopContainer for \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\" with timeout 2 (s)" Jan 13 20:11:28.420552 containerd[1931]: time="2025-01-13T20:11:28.420471414Z" level=info msg="Stop container \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\" with signal terminated" Jan 13 20:11:28.439366 systemd-networkd[1759]: lxc_health: Link DOWN Jan 13 20:11:28.439380 systemd-networkd[1759]: lxc_health: Lost carrier Jan 13 20:11:28.458842 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698-rootfs.mount: Deactivated successfully. Jan 13 20:11:28.471369 systemd[1]: cri-containerd-6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3.scope: Deactivated successfully. Jan 13 20:11:28.471799 systemd[1]: cri-containerd-6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3.scope: Consumed 14.270s CPU time. 
Jan 13 20:11:28.489806 containerd[1931]: time="2025-01-13T20:11:28.489642079Z" level=info msg="shim disconnected" id=b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698 namespace=k8s.io Jan 13 20:11:28.490333 containerd[1931]: time="2025-01-13T20:11:28.490167499Z" level=warning msg="cleaning up after shim disconnected" id=b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698 namespace=k8s.io Jan 13 20:11:28.490333 containerd[1931]: time="2025-01-13T20:11:28.490200079Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:28.516700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3-rootfs.mount: Deactivated successfully. Jan 13 20:11:28.527206 containerd[1931]: time="2025-01-13T20:11:28.527124763Z" level=info msg="shim disconnected" id=6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3 namespace=k8s.io Jan 13 20:11:28.527206 containerd[1931]: time="2025-01-13T20:11:28.527202667Z" level=warning msg="cleaning up after shim disconnected" id=6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3 namespace=k8s.io Jan 13 20:11:28.527626 containerd[1931]: time="2025-01-13T20:11:28.527223991Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:28.532909 containerd[1931]: time="2025-01-13T20:11:28.532811311Z" level=info msg="StopContainer for \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\" returns successfully" Jan 13 20:11:28.534687 containerd[1931]: time="2025-01-13T20:11:28.534639727Z" level=info msg="StopPodSandbox for \"3c5ca15a6af730122f2034c813295a560c322e40ea798483b0f5c4be5a1c8339\"" Jan 13 20:11:28.535013 containerd[1931]: time="2025-01-13T20:11:28.534767491Z" level=info msg="Container to stop \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:28.539055 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c5ca15a6af730122f2034c813295a560c322e40ea798483b0f5c4be5a1c8339-shm.mount: Deactivated successfully. Jan 13 20:11:28.554668 systemd[1]: cri-containerd-3c5ca15a6af730122f2034c813295a560c322e40ea798483b0f5c4be5a1c8339.scope: Deactivated successfully. Jan 13 20:11:28.574672 containerd[1931]: time="2025-01-13T20:11:28.573991567Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:11:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:11:28.579874 containerd[1931]: time="2025-01-13T20:11:28.579641947Z" level=info msg="StopContainer for \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\" returns successfully" Jan 13 20:11:28.581050 containerd[1931]: time="2025-01-13T20:11:28.580938727Z" level=info msg="StopPodSandbox for \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\"" Jan 13 20:11:28.581196 containerd[1931]: time="2025-01-13T20:11:28.581098051Z" level=info msg="Container to stop \"1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:28.581196 containerd[1931]: time="2025-01-13T20:11:28.581125699Z" level=info msg="Container to stop \"51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:28.581196 containerd[1931]: time="2025-01-13T20:11:28.581148271Z" level=info msg="Container to stop \"5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:28.581353 containerd[1931]: time="2025-01-13T20:11:28.581196235Z" level=info msg="Container to stop \"ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02\" must be in running or unknown state, current 
state \"CONTAINER_EXITED\"" Jan 13 20:11:28.581353 containerd[1931]: time="2025-01-13T20:11:28.581217235Z" level=info msg="Container to stop \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:28.595254 systemd[1]: cri-containerd-fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885.scope: Deactivated successfully. Jan 13 20:11:28.618974 containerd[1931]: time="2025-01-13T20:11:28.616922275Z" level=info msg="shim disconnected" id=3c5ca15a6af730122f2034c813295a560c322e40ea798483b0f5c4be5a1c8339 namespace=k8s.io Jan 13 20:11:28.618974 containerd[1931]: time="2025-01-13T20:11:28.617040319Z" level=warning msg="cleaning up after shim disconnected" id=3c5ca15a6af730122f2034c813295a560c322e40ea798483b0f5c4be5a1c8339 namespace=k8s.io Jan 13 20:11:28.618974 containerd[1931]: time="2025-01-13T20:11:28.617062423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:28.654264 containerd[1931]: time="2025-01-13T20:11:28.654156151Z" level=info msg="shim disconnected" id=fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885 namespace=k8s.io Jan 13 20:11:28.654264 containerd[1931]: time="2025-01-13T20:11:28.654233239Z" level=warning msg="cleaning up after shim disconnected" id=fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885 namespace=k8s.io Jan 13 20:11:28.654264 containerd[1931]: time="2025-01-13T20:11:28.654255331Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:28.655900 containerd[1931]: time="2025-01-13T20:11:28.655815439Z" level=info msg="TearDown network for sandbox \"3c5ca15a6af730122f2034c813295a560c322e40ea798483b0f5c4be5a1c8339\" successfully" Jan 13 20:11:28.655900 containerd[1931]: time="2025-01-13T20:11:28.655877419Z" level=info msg="StopPodSandbox for \"3c5ca15a6af730122f2034c813295a560c322e40ea798483b0f5c4be5a1c8339\" returns successfully" Jan 13 20:11:28.686207 containerd[1931]: 
time="2025-01-13T20:11:28.686136344Z" level=info msg="TearDown network for sandbox \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\" successfully" Jan 13 20:11:28.686356 containerd[1931]: time="2025-01-13T20:11:28.686209784Z" level=info msg="StopPodSandbox for \"fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885\" returns successfully" Jan 13 20:11:28.759982 kubelet[3523]: I0113 20:11:28.757059 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-lib-modules\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.759982 kubelet[3523]: I0113 20:11:28.757136 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-host-proc-sys-net\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.759982 kubelet[3523]: I0113 20:11:28.757173 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-bpf-maps\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.759982 kubelet[3523]: I0113 20:11:28.757208 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-xtables-lock\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.759982 kubelet[3523]: I0113 20:11:28.757249 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/4618ec8b-3c4c-49f6-b66a-e7eaa197ff09-cilium-config-path\") pod \"4618ec8b-3c4c-49f6-b66a-e7eaa197ff09\" (UID: \"4618ec8b-3c4c-49f6-b66a-e7eaa197ff09\") " Jan 13 20:11:28.759982 kubelet[3523]: I0113 20:11:28.757285 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-cilium-run\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.760832 kubelet[3523]: I0113 20:11:28.757324 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkcfg\" (UniqueName: \"kubernetes.io/projected/3d2a6f9f-419e-4136-bdc6-fa8493027611-kube-api-access-gkcfg\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.760832 kubelet[3523]: I0113 20:11:28.757360 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d2a6f9f-419e-4136-bdc6-fa8493027611-cilium-config-path\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.760832 kubelet[3523]: I0113 20:11:28.757392 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-hostproc\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.760832 kubelet[3523]: I0113 20:11:28.757426 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-cilium-cgroup\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.760832 kubelet[3523]: I0113 20:11:28.757461 3523 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx5xd\" (UniqueName: \"kubernetes.io/projected/4618ec8b-3c4c-49f6-b66a-e7eaa197ff09-kube-api-access-bx5xd\") pod \"4618ec8b-3c4c-49f6-b66a-e7eaa197ff09\" (UID: \"4618ec8b-3c4c-49f6-b66a-e7eaa197ff09\") " Jan 13 20:11:28.760832 kubelet[3523]: I0113 20:11:28.757500 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-host-proc-sys-kernel\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.761184 kubelet[3523]: I0113 20:11:28.757541 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d2a6f9f-419e-4136-bdc6-fa8493027611-clustermesh-secrets\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.761184 kubelet[3523]: I0113 20:11:28.757580 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d2a6f9f-419e-4136-bdc6-fa8493027611-hubble-tls\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.761184 kubelet[3523]: I0113 20:11:28.757612 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-cni-path\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" (UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.761184 kubelet[3523]: I0113 20:11:28.757644 3523 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-etc-cni-netd\") pod \"3d2a6f9f-419e-4136-bdc6-fa8493027611\" 
(UID: \"3d2a6f9f-419e-4136-bdc6-fa8493027611\") " Jan 13 20:11:28.761184 kubelet[3523]: I0113 20:11:28.757751 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:28.761184 kubelet[3523]: I0113 20:11:28.757816 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:28.761524 kubelet[3523]: I0113 20:11:28.757852 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:28.761524 kubelet[3523]: I0113 20:11:28.757889 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:28.761524 kubelet[3523]: I0113 20:11:28.757924 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:28.761524 kubelet[3523]: I0113 20:11:28.759044 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:28.761524 kubelet[3523]: I0113 20:11:28.759114 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:28.767424 kubelet[3523]: I0113 20:11:28.767356 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-hostproc" (OuterVolumeSpecName: "hostproc") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:28.767584 kubelet[3523]: I0113 20:11:28.767536 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d2a6f9f-419e-4136-bdc6-fa8493027611-kube-api-access-gkcfg" (OuterVolumeSpecName: "kube-api-access-gkcfg") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "kube-api-access-gkcfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:11:28.771405 kubelet[3523]: I0113 20:11:28.771226 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:28.778128 kubelet[3523]: I0113 20:11:28.777937 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-cni-path" (OuterVolumeSpecName: "cni-path") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:28.782325 kubelet[3523]: I0113 20:11:28.782176 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d2a6f9f-419e-4136-bdc6-fa8493027611-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:11:28.791487 kubelet[3523]: I0113 20:11:28.791260 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4618ec8b-3c4c-49f6-b66a-e7eaa197ff09-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4618ec8b-3c4c-49f6-b66a-e7eaa197ff09" (UID: "4618ec8b-3c4c-49f6-b66a-e7eaa197ff09"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:11:28.792926 kubelet[3523]: I0113 20:11:28.792781 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d2a6f9f-419e-4136-bdc6-fa8493027611-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:11:28.795276 kubelet[3523]: I0113 20:11:28.795169 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4618ec8b-3c4c-49f6-b66a-e7eaa197ff09-kube-api-access-bx5xd" (OuterVolumeSpecName: "kube-api-access-bx5xd") pod "4618ec8b-3c4c-49f6-b66a-e7eaa197ff09" (UID: "4618ec8b-3c4c-49f6-b66a-e7eaa197ff09"). InnerVolumeSpecName "kube-api-access-bx5xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:11:28.796510 kubelet[3523]: I0113 20:11:28.796422 3523 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d2a6f9f-419e-4136-bdc6-fa8493027611-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3d2a6f9f-419e-4136-bdc6-fa8493027611" (UID: "3d2a6f9f-419e-4136-bdc6-fa8493027611"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:11:28.858898 kubelet[3523]: I0113 20:11:28.858722 3523 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-cni-path\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.858898 kubelet[3523]: I0113 20:11:28.858772 3523 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-etc-cni-netd\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.858898 kubelet[3523]: I0113 20:11:28.858796 3523 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-lib-modules\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.858898 kubelet[3523]: I0113 20:11:28.858818 3523 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-host-proc-sys-net\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.858898 kubelet[3523]: I0113 20:11:28.858843 3523 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-bpf-maps\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.858898 kubelet[3523]: I0113 20:11:28.858863 3523 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-xtables-lock\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.859785 kubelet[3523]: I0113 20:11:28.859480 3523 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4618ec8b-3c4c-49f6-b66a-e7eaa197ff09-cilium-config-path\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.859785 
kubelet[3523]: I0113 20:11:28.859547 3523 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-cilium-run\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.859785 kubelet[3523]: I0113 20:11:28.859572 3523 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gkcfg\" (UniqueName: \"kubernetes.io/projected/3d2a6f9f-419e-4136-bdc6-fa8493027611-kube-api-access-gkcfg\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.859785 kubelet[3523]: I0113 20:11:28.859592 3523 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d2a6f9f-419e-4136-bdc6-fa8493027611-cilium-config-path\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.859785 kubelet[3523]: I0113 20:11:28.859613 3523 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-hostproc\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.859785 kubelet[3523]: I0113 20:11:28.859633 3523 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-cilium-cgroup\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.859785 kubelet[3523]: I0113 20:11:28.859671 3523 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bx5xd\" (UniqueName: \"kubernetes.io/projected/4618ec8b-3c4c-49f6-b66a-e7eaa197ff09-kube-api-access-bx5xd\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.859785 kubelet[3523]: I0113 20:11:28.859692 3523 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d2a6f9f-419e-4136-bdc6-fa8493027611-host-proc-sys-kernel\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.860322 kubelet[3523]: I0113 
20:11:28.859729 3523 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d2a6f9f-419e-4136-bdc6-fa8493027611-clustermesh-secrets\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:28.860322 kubelet[3523]: I0113 20:11:28.859749 3523 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d2a6f9f-419e-4136-bdc6-fa8493027611-hubble-tls\") on node \"ip-172-31-31-26\" DevicePath \"\"" Jan 13 20:11:29.041491 kubelet[3523]: E0113 20:11:29.041416 3523 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:11:29.290607 kubelet[3523]: I0113 20:11:29.290448 3523 scope.go:117] "RemoveContainer" containerID="b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698" Jan 13 20:11:29.296737 containerd[1931]: time="2025-01-13T20:11:29.295330003Z" level=info msg="RemoveContainer for \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\"" Jan 13 20:11:29.309013 systemd[1]: Removed slice kubepods-besteffort-pod4618ec8b_3c4c_49f6_b66a_e7eaa197ff09.slice - libcontainer container kubepods-besteffort-pod4618ec8b_3c4c_49f6_b66a_e7eaa197ff09.slice. Jan 13 20:11:29.316391 systemd[1]: Removed slice kubepods-burstable-pod3d2a6f9f_419e_4136_bdc6_fa8493027611.slice - libcontainer container kubepods-burstable-pod3d2a6f9f_419e_4136_bdc6_fa8493027611.slice. Jan 13 20:11:29.316906 containerd[1931]: time="2025-01-13T20:11:29.316378711Z" level=info msg="RemoveContainer for \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\" returns successfully" Jan 13 20:11:29.317258 systemd[1]: kubepods-burstable-pod3d2a6f9f_419e_4136_bdc6_fa8493027611.slice: Consumed 14.413s CPU time. 
Jan 13 20:11:29.318272 kubelet[3523]: I0113 20:11:29.317350 3523 scope.go:117] "RemoveContainer" containerID="b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698" Jan 13 20:11:29.318344 containerd[1931]: time="2025-01-13T20:11:29.318254227Z" level=error msg="ContainerStatus for \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\": not found" Jan 13 20:11:29.319077 kubelet[3523]: E0113 20:11:29.318650 3523 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\": not found" containerID="b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698" Jan 13 20:11:29.319077 kubelet[3523]: I0113 20:11:29.318712 3523 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698"} err="failed to get container status \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\": rpc error: code = NotFound desc = an error occurred when try to find container \"b21d361163134563b3b045217e0c9ca95f95fe7b69c2d05103e2bdddce55e698\": not found" Jan 13 20:11:29.319077 kubelet[3523]: I0113 20:11:29.318850 3523 scope.go:117] "RemoveContainer" containerID="6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3" Jan 13 20:11:29.323409 containerd[1931]: time="2025-01-13T20:11:29.323308255Z" level=info msg="RemoveContainer for \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\"" Jan 13 20:11:29.332978 containerd[1931]: time="2025-01-13T20:11:29.332425999Z" level=info msg="RemoveContainer for \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\" returns successfully" Jan 13 
20:11:29.336299 kubelet[3523]: I0113 20:11:29.335505 3523 scope.go:117] "RemoveContainer" containerID="ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02" Jan 13 20:11:29.340537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c5ca15a6af730122f2034c813295a560c322e40ea798483b0f5c4be5a1c8339-rootfs.mount: Deactivated successfully. Jan 13 20:11:29.341569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885-rootfs.mount: Deactivated successfully. Jan 13 20:11:29.341713 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fca84a2aaa1b5bd968dfbd460f4210c3f1cce2090e84c8db75b9e704cb519885-shm.mount: Deactivated successfully. Jan 13 20:11:29.341843 systemd[1]: var-lib-kubelet-pods-4618ec8b\x2d3c4c\x2d49f6\x2db66a\x2de7eaa197ff09-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbx5xd.mount: Deactivated successfully. Jan 13 20:11:29.346543 containerd[1931]: time="2025-01-13T20:11:29.345568255Z" level=info msg="RemoveContainer for \"ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02\"" Jan 13 20:11:29.342154 systemd[1]: var-lib-kubelet-pods-3d2a6f9f\x2d419e\x2d4136\x2dbdc6\x2dfa8493027611-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgkcfg.mount: Deactivated successfully. Jan 13 20:11:29.342308 systemd[1]: var-lib-kubelet-pods-3d2a6f9f\x2d419e\x2d4136\x2dbdc6\x2dfa8493027611-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:11:29.342438 systemd[1]: var-lib-kubelet-pods-3d2a6f9f\x2d419e\x2d4136\x2dbdc6\x2dfa8493027611-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 13 20:11:29.358718 containerd[1931]: time="2025-01-13T20:11:29.357601111Z" level=info msg="RemoveContainer for \"ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02\" returns successfully" Jan 13 20:11:29.360436 kubelet[3523]: I0113 20:11:29.360100 3523 scope.go:117] "RemoveContainer" containerID="5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b" Jan 13 20:11:29.363807 containerd[1931]: time="2025-01-13T20:11:29.363476419Z" level=info msg="RemoveContainer for \"5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b\"" Jan 13 20:11:29.373616 containerd[1931]: time="2025-01-13T20:11:29.373503079Z" level=info msg="RemoveContainer for \"5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b\" returns successfully" Jan 13 20:11:29.374316 kubelet[3523]: I0113 20:11:29.374101 3523 scope.go:117] "RemoveContainer" containerID="51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974" Jan 13 20:11:29.376186 containerd[1931]: time="2025-01-13T20:11:29.376118443Z" level=info msg="RemoveContainer for \"51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974\"" Jan 13 20:11:29.381365 containerd[1931]: time="2025-01-13T20:11:29.381311551Z" level=info msg="RemoveContainer for \"51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974\" returns successfully" Jan 13 20:11:29.381737 kubelet[3523]: I0113 20:11:29.381701 3523 scope.go:117] "RemoveContainer" containerID="1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3" Jan 13 20:11:29.383366 containerd[1931]: time="2025-01-13T20:11:29.383310091Z" level=info msg="RemoveContainer for \"1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3\"" Jan 13 20:11:29.388189 containerd[1931]: time="2025-01-13T20:11:29.388122343Z" level=info msg="RemoveContainer for \"1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3\" returns successfully" Jan 13 20:11:29.388656 kubelet[3523]: I0113 20:11:29.388505 3523 scope.go:117] 
"RemoveContainer" containerID="6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3" Jan 13 20:11:29.389096 containerd[1931]: time="2025-01-13T20:11:29.389044135Z" level=error msg="ContainerStatus for \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\": not found" Jan 13 20:11:29.389555 kubelet[3523]: E0113 20:11:29.389486 3523 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\": not found" containerID="6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3" Jan 13 20:11:29.389668 kubelet[3523]: I0113 20:11:29.389558 3523 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3"} err="failed to get container status \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c3e8b60e15dc205282017d33e7c0255aa5e73511a16a2f12eb0b72019d45ac3\": not found" Jan 13 20:11:29.389668 kubelet[3523]: I0113 20:11:29.389598 3523 scope.go:117] "RemoveContainer" containerID="ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02" Jan 13 20:11:29.390115 containerd[1931]: time="2025-01-13T20:11:29.390038743Z" level=error msg="ContainerStatus for \"ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02\": not found" Jan 13 20:11:29.390304 kubelet[3523]: E0113 20:11:29.390259 3523 remote_runtime.go:432] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02\": not found" containerID="ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02" Jan 13 20:11:29.390372 kubelet[3523]: I0113 20:11:29.390308 3523 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02"} err="failed to get container status \"ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac85caebdb2fa1fe8f2582b5926f0bb5af610e1c2bea2102c419f19a6969fe02\": not found" Jan 13 20:11:29.390372 kubelet[3523]: I0113 20:11:29.390359 3523 scope.go:117] "RemoveContainer" containerID="5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b" Jan 13 20:11:29.390739 containerd[1931]: time="2025-01-13T20:11:29.390678991Z" level=error msg="ContainerStatus for \"5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b\": not found" Jan 13 20:11:29.391367 kubelet[3523]: E0113 20:11:29.391135 3523 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b\": not found" containerID="5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b" Jan 13 20:11:29.391367 kubelet[3523]: I0113 20:11:29.391208 3523 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b"} err="failed to get container status 
\"5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b7040b0537097e785953ca620e76aaccae6fa364610dc4a8cf840a5a11e884b\": not found" Jan 13 20:11:29.391367 kubelet[3523]: I0113 20:11:29.391291 3523 scope.go:117] "RemoveContainer" containerID="51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974" Jan 13 20:11:29.392368 containerd[1931]: time="2025-01-13T20:11:29.391924603Z" level=error msg="ContainerStatus for \"51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974\": not found" Jan 13 20:11:29.392485 kubelet[3523]: E0113 20:11:29.392171 3523 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974\": not found" containerID="51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974" Jan 13 20:11:29.392485 kubelet[3523]: I0113 20:11:29.392210 3523 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974"} err="failed to get container status \"51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974\": rpc error: code = NotFound desc = an error occurred when try to find container \"51c40237ee58ad71944527e1afd0f58013cda7cd25de6b3cdd94369f8a37b974\": not found" Jan 13 20:11:29.392485 kubelet[3523]: I0113 20:11:29.392241 3523 scope.go:117] "RemoveContainer" containerID="1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3" Jan 13 20:11:29.393598 containerd[1931]: time="2025-01-13T20:11:29.393296071Z" level=error msg="ContainerStatus for 
\"1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3\": not found" Jan 13 20:11:29.393784 kubelet[3523]: E0113 20:11:29.393742 3523 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3\": not found" containerID="1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3" Jan 13 20:11:29.393852 kubelet[3523]: I0113 20:11:29.393793 3523 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3"} err="failed to get container status \"1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3\": rpc error: code = NotFound desc = an error occurred when try to find container \"1cd3e7f2ee72ad4b2c9449c9436c6d4cf26085971d941034863e1f912142dca3\": not found" Jan 13 20:11:29.785644 kubelet[3523]: I0113 20:11:29.784734 3523 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d2a6f9f-419e-4136-bdc6-fa8493027611" path="/var/lib/kubelet/pods/3d2a6f9f-419e-4136-bdc6-fa8493027611/volumes" Jan 13 20:11:29.787286 kubelet[3523]: I0113 20:11:29.786932 3523 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4618ec8b-3c4c-49f6-b66a-e7eaa197ff09" path="/var/lib/kubelet/pods/4618ec8b-3c4c-49f6-b66a-e7eaa197ff09/volumes" Jan 13 20:11:30.233031 sshd[5180]: Connection closed by 147.75.109.163 port 48722 Jan 13 20:11:30.232796 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:30.240243 systemd[1]: sshd@27-172.31.31.26:22-147.75.109.163:48722.service: Deactivated successfully. Jan 13 20:11:30.243856 systemd[1]: session-28.scope: Deactivated successfully. 
Jan 13 20:11:30.244301 systemd[1]: session-28.scope: Consumed 2.678s CPU time. Jan 13 20:11:30.245678 systemd-logind[1920]: Session 28 logged out. Waiting for processes to exit. Jan 13 20:11:30.248231 systemd-logind[1920]: Removed session 28. Jan 13 20:11:30.272494 systemd[1]: Started sshd@28-172.31.31.26:22-147.75.109.163:39246.service - OpenSSH per-connection server daemon (147.75.109.163:39246). Jan 13 20:11:30.466562 sshd[5340]: Accepted publickey for core from 147.75.109.163 port 39246 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:30.469158 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:30.476460 systemd-logind[1920]: New session 29 of user core. Jan 13 20:11:30.489219 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 13 20:11:30.968887 ntpd[1915]: Deleting interface #12 lxc_health, fe80::a47e:b5ff:fe4a:3203%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Jan 13 20:11:30.969546 ntpd[1915]: 13 Jan 20:11:30 ntpd[1915]: Deleting interface #12 lxc_health, fe80::a47e:b5ff:fe4a:3203%8#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs Jan 13 20:11:32.743559 sshd[5342]: Connection closed by 147.75.109.163 port 39246 Jan 13 20:11:32.744513 sshd-session[5340]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:32.758456 systemd[1]: sshd@28-172.31.31.26:22-147.75.109.163:39246.service: Deactivated successfully. Jan 13 20:11:32.769767 systemd[1]: session-29.scope: Deactivated successfully. Jan 13 20:11:32.771582 systemd[1]: session-29.scope: Consumed 2.072s CPU time. Jan 13 20:11:32.773246 systemd-logind[1920]: Session 29 logged out. Waiting for processes to exit. Jan 13 20:11:32.796454 systemd[1]: Started sshd@29-172.31.31.26:22-147.75.109.163:39250.service - OpenSSH per-connection server daemon (147.75.109.163:39250). Jan 13 20:11:32.800201 systemd-logind[1920]: Removed session 29. 
Jan 13 20:11:32.827228 kubelet[3523]: I0113 20:11:32.825897 3523 topology_manager.go:215] "Topology Admit Handler" podUID="2cca785e-42e8-4773-9b84-6e840073a2b7" podNamespace="kube-system" podName="cilium-4sgbd" Jan 13 20:11:32.831171 kubelet[3523]: E0113 20:11:32.829580 3523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d2a6f9f-419e-4136-bdc6-fa8493027611" containerName="mount-cgroup" Jan 13 20:11:32.831171 kubelet[3523]: E0113 20:11:32.829650 3523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d2a6f9f-419e-4136-bdc6-fa8493027611" containerName="clean-cilium-state" Jan 13 20:11:32.831171 kubelet[3523]: E0113 20:11:32.829670 3523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d2a6f9f-419e-4136-bdc6-fa8493027611" containerName="cilium-agent" Jan 13 20:11:32.831171 kubelet[3523]: E0113 20:11:32.829688 3523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4618ec8b-3c4c-49f6-b66a-e7eaa197ff09" containerName="cilium-operator" Jan 13 20:11:32.831171 kubelet[3523]: E0113 20:11:32.829704 3523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d2a6f9f-419e-4136-bdc6-fa8493027611" containerName="apply-sysctl-overwrites" Jan 13 20:11:32.831171 kubelet[3523]: E0113 20:11:32.829746 3523 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3d2a6f9f-419e-4136-bdc6-fa8493027611" containerName="mount-bpf-fs" Jan 13 20:11:32.831171 kubelet[3523]: I0113 20:11:32.829795 3523 memory_manager.go:354] "RemoveStaleState removing state" podUID="4618ec8b-3c4c-49f6-b66a-e7eaa197ff09" containerName="cilium-operator" Jan 13 20:11:32.831171 kubelet[3523]: I0113 20:11:32.831031 3523 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d2a6f9f-419e-4136-bdc6-fa8493027611" containerName="cilium-agent" Jan 13 20:11:32.875670 systemd[1]: Created slice kubepods-burstable-pod2cca785e_42e8_4773_9b84_6e840073a2b7.slice - libcontainer container 
kubepods-burstable-pod2cca785e_42e8_4773_9b84_6e840073a2b7.slice. Jan 13 20:11:32.886093 kubelet[3523]: I0113 20:11:32.885298 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2cca785e-42e8-4773-9b84-6e840073a2b7-cilium-config-path\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.886093 kubelet[3523]: I0113 20:11:32.885368 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2cca785e-42e8-4773-9b84-6e840073a2b7-cni-path\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.886093 kubelet[3523]: I0113 20:11:32.885414 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2cca785e-42e8-4773-9b84-6e840073a2b7-bpf-maps\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.886093 kubelet[3523]: I0113 20:11:32.885451 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cca785e-42e8-4773-9b84-6e840073a2b7-lib-modules\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.886093 kubelet[3523]: I0113 20:11:32.885487 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cca785e-42e8-4773-9b84-6e840073a2b7-xtables-lock\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.887394 kubelet[3523]: I0113 20:11:32.885528 3523 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2cca785e-42e8-4773-9b84-6e840073a2b7-cilium-run\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.887394 kubelet[3523]: I0113 20:11:32.887159 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2cca785e-42e8-4773-9b84-6e840073a2b7-cilium-cgroup\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.887394 kubelet[3523]: I0113 20:11:32.887197 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2cca785e-42e8-4773-9b84-6e840073a2b7-clustermesh-secrets\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.887394 kubelet[3523]: I0113 20:11:32.887233 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2cca785e-42e8-4773-9b84-6e840073a2b7-hubble-tls\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.887394 kubelet[3523]: I0113 20:11:32.887270 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2cca785e-42e8-4773-9b84-6e840073a2b7-cilium-ipsec-secrets\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.887394 kubelet[3523]: I0113 20:11:32.887304 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtbs\" 
(UniqueName: \"kubernetes.io/projected/2cca785e-42e8-4773-9b84-6e840073a2b7-kube-api-access-6gtbs\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.887702 kubelet[3523]: I0113 20:11:32.887340 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2cca785e-42e8-4773-9b84-6e840073a2b7-hostproc\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.887702 kubelet[3523]: I0113 20:11:32.887379 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2cca785e-42e8-4773-9b84-6e840073a2b7-host-proc-sys-net\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.887702 kubelet[3523]: I0113 20:11:32.887413 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2cca785e-42e8-4773-9b84-6e840073a2b7-host-proc-sys-kernel\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:32.887702 kubelet[3523]: I0113 20:11:32.887447 3523 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cca785e-42e8-4773-9b84-6e840073a2b7-etc-cni-netd\") pod \"cilium-4sgbd\" (UID: \"2cca785e-42e8-4773-9b84-6e840073a2b7\") " pod="kube-system/cilium-4sgbd" Jan 13 20:11:33.077810 sshd[5352]: Accepted publickey for core from 147.75.109.163 port 39250 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:33.080345 sshd-session[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 
20:11:33.088044 systemd-logind[1920]: New session 30 of user core. Jan 13 20:11:33.105215 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 13 20:11:33.192589 containerd[1931]: time="2025-01-13T20:11:33.192514714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4sgbd,Uid:2cca785e-42e8-4773-9b84-6e840073a2b7,Namespace:kube-system,Attempt:0,}" Jan 13 20:11:33.223166 sshd[5358]: Connection closed by 147.75.109.163 port 39250 Jan 13 20:11:33.224454 sshd-session[5352]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:33.230452 systemd[1]: sshd@29-172.31.31.26:22-147.75.109.163:39250.service: Deactivated successfully. Jan 13 20:11:33.239721 systemd[1]: session-30.scope: Deactivated successfully. Jan 13 20:11:33.246129 systemd-logind[1920]: Session 30 logged out. Waiting for processes to exit. Jan 13 20:11:33.247751 containerd[1931]: time="2025-01-13T20:11:33.247430842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:11:33.248322 containerd[1931]: time="2025-01-13T20:11:33.247719514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:11:33.250369 containerd[1931]: time="2025-01-13T20:11:33.249397906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:11:33.250369 containerd[1931]: time="2025-01-13T20:11:33.249586882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:11:33.284548 systemd[1]: Started sshd@30-172.31.31.26:22-147.75.109.163:39262.service - OpenSSH per-connection server daemon (147.75.109.163:39262). Jan 13 20:11:33.286823 systemd-logind[1920]: Removed session 30. 
Jan 13 20:11:33.298085 systemd[1]: Started cri-containerd-ff03e02994fdd6d7efdc5adcca5ccce97d3d56e67ffc83d311c60e564be7a7e3.scope - libcontainer container ff03e02994fdd6d7efdc5adcca5ccce97d3d56e67ffc83d311c60e564be7a7e3. Jan 13 20:11:33.353796 containerd[1931]: time="2025-01-13T20:11:33.353318759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4sgbd,Uid:2cca785e-42e8-4773-9b84-6e840073a2b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff03e02994fdd6d7efdc5adcca5ccce97d3d56e67ffc83d311c60e564be7a7e3\"" Jan 13 20:11:33.360477 containerd[1931]: time="2025-01-13T20:11:33.360346523Z" level=info msg="CreateContainer within sandbox \"ff03e02994fdd6d7efdc5adcca5ccce97d3d56e67ffc83d311c60e564be7a7e3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:11:33.384882 containerd[1931]: time="2025-01-13T20:11:33.382640999Z" level=info msg="CreateContainer within sandbox \"ff03e02994fdd6d7efdc5adcca5ccce97d3d56e67ffc83d311c60e564be7a7e3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eae6bc16a4e6aee5b159fb9336ce1609af438628614702b6b3335303d8c9c248\"" Jan 13 20:11:33.385283 containerd[1931]: time="2025-01-13T20:11:33.385243415Z" level=info msg="StartContainer for \"eae6bc16a4e6aee5b159fb9336ce1609af438628614702b6b3335303d8c9c248\"" Jan 13 20:11:33.428244 systemd[1]: Started cri-containerd-eae6bc16a4e6aee5b159fb9336ce1609af438628614702b6b3335303d8c9c248.scope - libcontainer container eae6bc16a4e6aee5b159fb9336ce1609af438628614702b6b3335303d8c9c248. 
Jan 13 20:11:33.478405 containerd[1931]: time="2025-01-13T20:11:33.478244435Z" level=info msg="StartContainer for \"eae6bc16a4e6aee5b159fb9336ce1609af438628614702b6b3335303d8c9c248\" returns successfully" Jan 13 20:11:33.484354 sshd[5389]: Accepted publickey for core from 147.75.109.163 port 39262 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:33.487122 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:33.499665 systemd-logind[1920]: New session 31 of user core. Jan 13 20:11:33.499904 systemd[1]: cri-containerd-eae6bc16a4e6aee5b159fb9336ce1609af438628614702b6b3335303d8c9c248.scope: Deactivated successfully. Jan 13 20:11:33.510465 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 13 20:11:33.567365 containerd[1931]: time="2025-01-13T20:11:33.567263328Z" level=info msg="shim disconnected" id=eae6bc16a4e6aee5b159fb9336ce1609af438628614702b6b3335303d8c9c248 namespace=k8s.io Jan 13 20:11:33.567365 containerd[1931]: time="2025-01-13T20:11:33.567343368Z" level=warning msg="cleaning up after shim disconnected" id=eae6bc16a4e6aee5b159fb9336ce1609af438628614702b6b3335303d8c9c248 namespace=k8s.io Jan 13 20:11:33.567365 containerd[1931]: time="2025-01-13T20:11:33.567365664Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:34.043923 kubelet[3523]: E0113 20:11:34.043860 3523 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:11:34.329585 containerd[1931]: time="2025-01-13T20:11:34.328541352Z" level=info msg="CreateContainer within sandbox \"ff03e02994fdd6d7efdc5adcca5ccce97d3d56e67ffc83d311c60e564be7a7e3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:11:34.356015 containerd[1931]: time="2025-01-13T20:11:34.355178772Z" level=info msg="CreateContainer within sandbox 
\"ff03e02994fdd6d7efdc5adcca5ccce97d3d56e67ffc83d311c60e564be7a7e3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e809bd3c4e3774123d6723dce66c0c2ff61c8908aeaafa03fbb7ea6b10d7b8d8\"" Jan 13 20:11:34.356502 containerd[1931]: time="2025-01-13T20:11:34.356439036Z" level=info msg="StartContainer for \"e809bd3c4e3774123d6723dce66c0c2ff61c8908aeaafa03fbb7ea6b10d7b8d8\"" Jan 13 20:11:34.422267 systemd[1]: Started cri-containerd-e809bd3c4e3774123d6723dce66c0c2ff61c8908aeaafa03fbb7ea6b10d7b8d8.scope - libcontainer container e809bd3c4e3774123d6723dce66c0c2ff61c8908aeaafa03fbb7ea6b10d7b8d8. Jan 13 20:11:34.480269 containerd[1931]: time="2025-01-13T20:11:34.479811696Z" level=info msg="StartContainer for \"e809bd3c4e3774123d6723dce66c0c2ff61c8908aeaafa03fbb7ea6b10d7b8d8\" returns successfully" Jan 13 20:11:34.500799 systemd[1]: cri-containerd-e809bd3c4e3774123d6723dce66c0c2ff61c8908aeaafa03fbb7ea6b10d7b8d8.scope: Deactivated successfully. Jan 13 20:11:34.572058 containerd[1931]: time="2025-01-13T20:11:34.571380325Z" level=info msg="shim disconnected" id=e809bd3c4e3774123d6723dce66c0c2ff61c8908aeaafa03fbb7ea6b10d7b8d8 namespace=k8s.io Jan 13 20:11:34.572058 containerd[1931]: time="2025-01-13T20:11:34.571706749Z" level=warning msg="cleaning up after shim disconnected" id=e809bd3c4e3774123d6723dce66c0c2ff61c8908aeaafa03fbb7ea6b10d7b8d8 namespace=k8s.io Jan 13 20:11:34.572058 containerd[1931]: time="2025-01-13T20:11:34.571730473Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:34.999201 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e809bd3c4e3774123d6723dce66c0c2ff61c8908aeaafa03fbb7ea6b10d7b8d8-rootfs.mount: Deactivated successfully. 
Jan 13 20:11:35.335311 containerd[1931]: time="2025-01-13T20:11:35.334578529Z" level=info msg="CreateContainer within sandbox \"ff03e02994fdd6d7efdc5adcca5ccce97d3d56e67ffc83d311c60e564be7a7e3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:11:35.367067 containerd[1931]: time="2025-01-13T20:11:35.367003537Z" level=info msg="CreateContainer within sandbox \"ff03e02994fdd6d7efdc5adcca5ccce97d3d56e67ffc83d311c60e564be7a7e3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c7ca7468130971f3ab3634006e1bb6aa920503519af9b75b264c2e8f87aff8a4\"" Jan 13 20:11:35.369069 containerd[1931]: time="2025-01-13T20:11:35.367773433Z" level=info msg="StartContainer for \"c7ca7468130971f3ab3634006e1bb6aa920503519af9b75b264c2e8f87aff8a4\"" Jan 13 20:11:35.440286 systemd[1]: Started cri-containerd-c7ca7468130971f3ab3634006e1bb6aa920503519af9b75b264c2e8f87aff8a4.scope - libcontainer container c7ca7468130971f3ab3634006e1bb6aa920503519af9b75b264c2e8f87aff8a4. Jan 13 20:11:35.515586 containerd[1931]: time="2025-01-13T20:11:35.515509238Z" level=info msg="StartContainer for \"c7ca7468130971f3ab3634006e1bb6aa920503519af9b75b264c2e8f87aff8a4\" returns successfully" Jan 13 20:11:35.523232 systemd[1]: cri-containerd-c7ca7468130971f3ab3634006e1bb6aa920503519af9b75b264c2e8f87aff8a4.scope: Deactivated successfully. 
Jan 13 20:11:35.575447 containerd[1931]: time="2025-01-13T20:11:35.575351606Z" level=info msg="shim disconnected" id=c7ca7468130971f3ab3634006e1bb6aa920503519af9b75b264c2e8f87aff8a4 namespace=k8s.io Jan 13 20:11:35.575447 containerd[1931]: time="2025-01-13T20:11:35.575426798Z" level=warning msg="cleaning up after shim disconnected" id=c7ca7468130971f3ab3634006e1bb6aa920503519af9b75b264c2e8f87aff8a4 namespace=k8s.io Jan 13 20:11:35.575447 containerd[1931]: time="2025-01-13T20:11:35.575448974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:36.000435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7ca7468130971f3ab3634006e1bb6aa920503519af9b75b264c2e8f87aff8a4-rootfs.mount: Deactivated successfully. Jan 13 20:11:36.339861 containerd[1931]: time="2025-01-13T20:11:36.338915558Z" level=info msg="CreateContainer within sandbox \"ff03e02994fdd6d7efdc5adcca5ccce97d3d56e67ffc83d311c60e564be7a7e3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:11:36.364362 containerd[1931]: time="2025-01-13T20:11:36.364279706Z" level=info msg="CreateContainer within sandbox \"ff03e02994fdd6d7efdc5adcca5ccce97d3d56e67ffc83d311c60e564be7a7e3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c0ea19bef94134f6ce7ea51168d9567dfd778c2c5b8732b20959586828543574\"" Jan 13 20:11:36.366300 containerd[1931]: time="2025-01-13T20:11:36.365749046Z" level=info msg="StartContainer for \"c0ea19bef94134f6ce7ea51168d9567dfd778c2c5b8732b20959586828543574\"" Jan 13 20:11:36.430398 systemd[1]: Started cri-containerd-c0ea19bef94134f6ce7ea51168d9567dfd778c2c5b8732b20959586828543574.scope - libcontainer container c0ea19bef94134f6ce7ea51168d9567dfd778c2c5b8732b20959586828543574. 
Jan 13 20:11:36.482659 containerd[1931]: time="2025-01-13T20:11:36.482516714Z" level=info msg="StartContainer for \"c0ea19bef94134f6ce7ea51168d9567dfd778c2c5b8732b20959586828543574\" returns successfully" Jan 13 20:11:36.484032 systemd[1]: cri-containerd-c0ea19bef94134f6ce7ea51168d9567dfd778c2c5b8732b20959586828543574.scope: Deactivated successfully. Jan 13 20:11:36.542292 containerd[1931]: time="2025-01-13T20:11:36.542167623Z" level=info msg="shim disconnected" id=c0ea19bef94134f6ce7ea51168d9567dfd778c2c5b8732b20959586828543574 namespace=k8s.io Jan 13 20:11:36.542292 containerd[1931]: time="2025-01-13T20:11:36.542267439Z" level=warning msg="cleaning up after shim disconnected" id=c0ea19bef94134f6ce7ea51168d9567dfd778c2c5b8732b20959586828543574 namespace=k8s.io Jan 13 20:11:36.542292 containerd[1931]: time="2025-01-13T20:11:36.542288715Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:36.999406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0ea19bef94134f6ce7ea51168d9567dfd778c2c5b8732b20959586828543574-rootfs.mount: Deactivated successfully. 
Jan 13 20:11:37.241092 kubelet[3523]: I0113 20:11:37.241018 3523 setters.go:580] "Node became not ready" node="ip-172-31-31-26" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:11:37Z","lastTransitionTime":"2025-01-13T20:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:11:37.352621 containerd[1931]: time="2025-01-13T20:11:37.352375191Z" level=info msg="CreateContainer within sandbox \"ff03e02994fdd6d7efdc5adcca5ccce97d3d56e67ffc83d311c60e564be7a7e3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:11:37.404858 containerd[1931]: time="2025-01-13T20:11:37.404791347Z" level=info msg="CreateContainer within sandbox \"ff03e02994fdd6d7efdc5adcca5ccce97d3d56e67ffc83d311c60e564be7a7e3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"05106e9209356a39d08eca19fd5890a15c0b45eb4ba45a6e3f44480858e04c5b\""
Jan 13 20:11:37.407585 containerd[1931]: time="2025-01-13T20:11:37.405702663Z" level=info msg="StartContainer for \"05106e9209356a39d08eca19fd5890a15c0b45eb4ba45a6e3f44480858e04c5b\""
Jan 13 20:11:37.462266 systemd[1]: run-containerd-runc-k8s.io-05106e9209356a39d08eca19fd5890a15c0b45eb4ba45a6e3f44480858e04c5b-runc.RBtabs.mount: Deactivated successfully.
Jan 13 20:11:37.480303 systemd[1]: Started cri-containerd-05106e9209356a39d08eca19fd5890a15c0b45eb4ba45a6e3f44480858e04c5b.scope - libcontainer container 05106e9209356a39d08eca19fd5890a15c0b45eb4ba45a6e3f44480858e04c5b.
Jan 13 20:11:37.596863 containerd[1931]: time="2025-01-13T20:11:37.596506048Z" level=info msg="StartContainer for \"05106e9209356a39d08eca19fd5890a15c0b45eb4ba45a6e3f44480858e04c5b\" returns successfully"
Jan 13 20:11:38.396336 kubelet[3523]: I0113 20:11:38.395827 3523 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4sgbd" podStartSLOduration=6.395797912 podStartE2EDuration="6.395797912s" podCreationTimestamp="2025-01-13 20:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:11:38.39029548 +0000 UTC m=+134.842568711" watchObservedRunningTime="2025-01-13 20:11:38.395797912 +0000 UTC m=+134.848071131"
Jan 13 20:11:38.416994 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 20:11:42.683215 systemd-networkd[1759]: lxc_health: Link UP
Jan 13 20:11:42.693502 (udev-worker)[6200]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:11:42.698109 systemd-networkd[1759]: lxc_health: Gained carrier
Jan 13 20:11:44.585517 systemd[1]: run-containerd-runc-k8s.io-05106e9209356a39d08eca19fd5890a15c0b45eb4ba45a6e3f44480858e04c5b-runc.gMG4Kp.mount: Deactivated successfully.
Jan 13 20:11:44.686706 systemd-networkd[1759]: lxc_health: Gained IPv6LL
Jan 13 20:11:46.953780 systemd[1]: run-containerd-runc-k8s.io-05106e9209356a39d08eca19fd5890a15c0b45eb4ba45a6e3f44480858e04c5b-runc.Ks9gfE.mount: Deactivated successfully.
Jan 13 20:11:46.969575 ntpd[1915]: Listen normally on 15 lxc_health [fe80::cd:bcff:fe11:d4b9%14]:123
Jan 13 20:11:46.972070 ntpd[1915]: 13 Jan 20:11:46 ntpd[1915]: Listen normally on 15 lxc_health [fe80::cd:bcff:fe11:d4b9%14]:123
Jan 13 20:11:49.353402 sshd[5450]: Connection closed by 147.75.109.163 port 39262
Jan 13 20:11:49.356433 sshd-session[5389]: pam_unix(sshd:session): session closed for user core
Jan 13 20:11:49.363622 systemd[1]: sshd@30-172.31.31.26:22-147.75.109.163:39262.service: Deactivated successfully.
Jan 13 20:11:49.369880 systemd[1]: session-31.scope: Deactivated successfully.
Jan 13 20:11:49.374537 systemd-logind[1920]: Session 31 logged out. Waiting for processes to exit.
Jan 13 20:11:49.378615 systemd-logind[1920]: Removed session 31.
Jan 13 20:12:03.293036 systemd[1]: cri-containerd-9008ee33a6c9a135dc48d26b7eae4b7591efdac731f5ce8cced4106c956bc129.scope: Deactivated successfully.
Jan 13 20:12:03.294356 systemd[1]: cri-containerd-9008ee33a6c9a135dc48d26b7eae4b7591efdac731f5ce8cced4106c956bc129.scope: Consumed 4.748s CPU time, 22.3M memory peak, 0B memory swap peak.
Jan 13 20:12:03.332524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9008ee33a6c9a135dc48d26b7eae4b7591efdac731f5ce8cced4106c956bc129-rootfs.mount: Deactivated successfully.
Jan 13 20:12:03.341890 containerd[1931]: time="2025-01-13T20:12:03.341767276Z" level=info msg="shim disconnected" id=9008ee33a6c9a135dc48d26b7eae4b7591efdac731f5ce8cced4106c956bc129 namespace=k8s.io
Jan 13 20:12:03.341890 containerd[1931]: time="2025-01-13T20:12:03.341887228Z" level=warning msg="cleaning up after shim disconnected" id=9008ee33a6c9a135dc48d26b7eae4b7591efdac731f5ce8cced4106c956bc129 namespace=k8s.io
Jan 13 20:12:03.342928 containerd[1931]: time="2025-01-13T20:12:03.341910472Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:12:03.426980 kubelet[3523]: I0113 20:12:03.426833 3523 scope.go:117] "RemoveContainer" containerID="9008ee33a6c9a135dc48d26b7eae4b7591efdac731f5ce8cced4106c956bc129"
Jan 13 20:12:03.433197 containerd[1931]: time="2025-01-13T20:12:03.433128868Z" level=info msg="CreateContainer within sandbox \"5d5636e8ac7771bbfd61be600a93bf36e21a062cc1787563f102d2204324322e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 20:12:03.461590 containerd[1931]: time="2025-01-13T20:12:03.461485972Z" level=info msg="CreateContainer within sandbox \"5d5636e8ac7771bbfd61be600a93bf36e21a062cc1787563f102d2204324322e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1f1053648a27b1663a0003671011086b96301736b2a04045e8d82ee6c10e00e4\""
Jan 13 20:12:03.462440 containerd[1931]: time="2025-01-13T20:12:03.462392284Z" level=info msg="StartContainer for \"1f1053648a27b1663a0003671011086b96301736b2a04045e8d82ee6c10e00e4\""
Jan 13 20:12:03.518253 systemd[1]: Started cri-containerd-1f1053648a27b1663a0003671011086b96301736b2a04045e8d82ee6c10e00e4.scope - libcontainer container 1f1053648a27b1663a0003671011086b96301736b2a04045e8d82ee6c10e00e4.
Jan 13 20:12:03.589455 containerd[1931]: time="2025-01-13T20:12:03.588584333Z" level=info msg="StartContainer for \"1f1053648a27b1663a0003671011086b96301736b2a04045e8d82ee6c10e00e4\" returns successfully"
Jan 13 20:12:07.995418 kubelet[3523]: E0113 20:12:07.995112 3523 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-26?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 20:12:09.068485 systemd[1]: cri-containerd-d433c3c27db7fcca9eed8b1132a2db5820f7178b09bb8a224887324c127fb68e.scope: Deactivated successfully.
Jan 13 20:12:09.069104 systemd[1]: cri-containerd-d433c3c27db7fcca9eed8b1132a2db5820f7178b09bb8a224887324c127fb68e.scope: Consumed 4.477s CPU time, 15.7M memory peak, 0B memory swap peak.
Jan 13 20:12:09.112054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d433c3c27db7fcca9eed8b1132a2db5820f7178b09bb8a224887324c127fb68e-rootfs.mount: Deactivated successfully.
Jan 13 20:12:09.124673 containerd[1931]: time="2025-01-13T20:12:09.124569044Z" level=info msg="shim disconnected" id=d433c3c27db7fcca9eed8b1132a2db5820f7178b09bb8a224887324c127fb68e namespace=k8s.io
Jan 13 20:12:09.124673 containerd[1931]: time="2025-01-13T20:12:09.124648124Z" level=warning msg="cleaning up after shim disconnected" id=d433c3c27db7fcca9eed8b1132a2db5820f7178b09bb8a224887324c127fb68e namespace=k8s.io
Jan 13 20:12:09.124673 containerd[1931]: time="2025-01-13T20:12:09.124675280Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:12:09.448446 kubelet[3523]: I0113 20:12:09.447990 3523 scope.go:117] "RemoveContainer" containerID="d433c3c27db7fcca9eed8b1132a2db5820f7178b09bb8a224887324c127fb68e"
Jan 13 20:12:09.452178 containerd[1931]: time="2025-01-13T20:12:09.452124634Z" level=info msg="CreateContainer within sandbox \"1caf2efc16eabf313fb7d30b8a08437c5b2c651ebf405d1e7a7f9a744963efb2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 20:12:09.484478 containerd[1931]: time="2025-01-13T20:12:09.484403074Z" level=info msg="CreateContainer within sandbox \"1caf2efc16eabf313fb7d30b8a08437c5b2c651ebf405d1e7a7f9a744963efb2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e1cef937ac2b8ef14984ba493b9fe1cadee0e7ee99e2482f5582613cb2a6901f\""
Jan 13 20:12:09.486504 containerd[1931]: time="2025-01-13T20:12:09.485377462Z" level=info msg="StartContainer for \"e1cef937ac2b8ef14984ba493b9fe1cadee0e7ee99e2482f5582613cb2a6901f\""
Jan 13 20:12:09.552244 systemd[1]: Started cri-containerd-e1cef937ac2b8ef14984ba493b9fe1cadee0e7ee99e2482f5582613cb2a6901f.scope - libcontainer container e1cef937ac2b8ef14984ba493b9fe1cadee0e7ee99e2482f5582613cb2a6901f.
Jan 13 20:12:09.617282 containerd[1931]: time="2025-01-13T20:12:09.617214947Z" level=info msg="StartContainer for \"e1cef937ac2b8ef14984ba493b9fe1cadee0e7ee99e2482f5582613cb2a6901f\" returns successfully"
Jan 13 20:12:10.109693 systemd[1]: run-containerd-runc-k8s.io-e1cef937ac2b8ef14984ba493b9fe1cadee0e7ee99e2482f5582613cb2a6901f-runc.aHVruU.mount: Deactivated successfully.