Jan 13 21:10:36.225059 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 13 21:10:36.225106 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025
Jan 13 21:10:36.225132 kernel: KASLR disabled due to lack of seed
Jan 13 21:10:36.225149 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:10:36.225167 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Jan 13 21:10:36.225183 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:10:36.225238 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 13 21:10:36.225256 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 13 21:10:36.225272 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 21:10:36.225289 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 13 21:10:36.225314 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 21:10:36.225331 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 13 21:10:36.225346 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 13 21:10:36.225363 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 13 21:10:36.225382 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 21:10:36.225405 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 13 21:10:36.225423 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 13 21:10:36.225442 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 13 21:10:36.225460 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 13 21:10:36.225477 kernel: printk: bootconsole [uart0] enabled
Jan 13 21:10:36.225494 kernel: NUMA: Failed to initialise from firmware
Jan 13 21:10:36.225511 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 21:10:36.225528 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 13 21:10:36.225544 kernel: Zone ranges:
Jan 13 21:10:36.225560 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 21:10:36.225578 kernel: DMA32 empty
Jan 13 21:10:36.225600 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 13 21:10:36.225618 kernel: Movable zone start for each node
Jan 13 21:10:36.225635 kernel: Early memory node ranges
Jan 13 21:10:36.225655 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 13 21:10:36.225673 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 13 21:10:36.225690 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 13 21:10:36.225710 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 13 21:10:36.225727 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 13 21:10:36.225745 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 13 21:10:36.225765 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 13 21:10:36.225784 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 13 21:10:36.225802 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 21:10:36.225827 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 13 21:10:36.225846 kernel: psci: probing for conduit method from ACPI.
Jan 13 21:10:36.225872 kernel: psci: PSCIv1.0 detected in firmware.
Jan 13 21:10:36.225892 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 21:10:36.225911 kernel: psci: Trusted OS migration not required
Jan 13 21:10:36.225935 kernel: psci: SMC Calling Convention v1.1
Jan 13 21:10:36.225954 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 21:10:36.225972 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 21:10:36.225991 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 21:10:36.226009 kernel: Detected PIPT I-cache on CPU0
Jan 13 21:10:36.226027 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 21:10:36.226045 kernel: CPU features: detected: Spectre-v2
Jan 13 21:10:36.226063 kernel: CPU features: detected: Spectre-v3a
Jan 13 21:10:36.226082 kernel: CPU features: detected: Spectre-BHB
Jan 13 21:10:36.226100 kernel: CPU features: detected: ARM erratum 1742098
Jan 13 21:10:36.226118 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 13 21:10:36.226146 kernel: alternatives: applying boot alternatives
Jan 13 21:10:36.226167 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:10:36.230247 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:10:36.230305 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:10:36.230325 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:10:36.230344 kernel: Fallback order for Node 0: 0
Jan 13 21:10:36.230363 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 13 21:10:36.230382 kernel: Policy zone: Normal
Jan 13 21:10:36.230400 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:10:36.230420 kernel: software IO TLB: area num 2.
Jan 13 21:10:36.230439 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 13 21:10:36.230474 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Jan 13 21:10:36.230496 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:10:36.230516 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:10:36.230536 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:10:36.230557 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:10:36.230577 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:10:36.230597 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:10:36.230618 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:10:36.230637 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:10:36.230655 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 21:10:36.230674 kernel: GICv3: 96 SPIs implemented
Jan 13 21:10:36.230701 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 21:10:36.230721 kernel: Root IRQ handler: gic_handle_irq
Jan 13 21:10:36.230740 kernel: GICv3: GICv3 features: 16 PPIs
Jan 13 21:10:36.230759 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 13 21:10:36.230777 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 13 21:10:36.230801 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 21:10:36.230820 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 21:10:36.230840 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 13 21:10:36.230860 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 13 21:10:36.230881 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 13 21:10:36.230900 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:10:36.230921 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 13 21:10:36.230950 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 13 21:10:36.230969 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 13 21:10:36.230988 kernel: Console: colour dummy device 80x25
Jan 13 21:10:36.231007 kernel: printk: console [tty1] enabled
Jan 13 21:10:36.231025 kernel: ACPI: Core revision 20230628
Jan 13 21:10:36.231044 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 13 21:10:36.231063 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:10:36.231081 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:10:36.231100 kernel: landlock: Up and running.
Jan 13 21:10:36.231125 kernel: SELinux: Initializing.
Jan 13 21:10:36.231145 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:10:36.231164 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:10:36.231183 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:10:36.231278 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:10:36.231299 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:10:36.231319 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:10:36.231338 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 13 21:10:36.231357 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 13 21:10:36.231389 kernel: Remapping and enabling EFI services.
Jan 13 21:10:36.231411 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:10:36.231429 kernel: Detected PIPT I-cache on CPU1
Jan 13 21:10:36.231448 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 13 21:10:36.231466 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 13 21:10:36.231485 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 13 21:10:36.231503 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:10:36.231522 kernel: SMP: Total of 2 processors activated.
Jan 13 21:10:36.231539 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 21:10:36.231565 kernel: CPU features: detected: 32-bit EL1 Support
Jan 13 21:10:36.231584 kernel: CPU features: detected: CRC32 instructions
Jan 13 21:10:36.231602 kernel: CPU: All CPU(s) started at EL1
Jan 13 21:10:36.231632 kernel: alternatives: applying system-wide alternatives
Jan 13 21:10:36.231656 kernel: devtmpfs: initialized
Jan 13 21:10:36.231676 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:10:36.231695 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:10:36.231714 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:10:36.231733 kernel: SMBIOS 3.0.0 present.
Jan 13 21:10:36.231752 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 13 21:10:36.231777 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:10:36.231796 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 21:10:36.231815 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 21:10:36.231834 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 21:10:36.231853 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:10:36.231872 kernel: audit: type=2000 audit(0.296:1): state=initialized audit_enabled=0 res=1
Jan 13 21:10:36.231891 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:10:36.231915 kernel: cpuidle: using governor menu
Jan 13 21:10:36.231934 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 21:10:36.231955 kernel: ASID allocator initialised with 65536 entries
Jan 13 21:10:36.231973 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:10:36.231991 kernel: Serial: AMBA PL011 UART driver
Jan 13 21:10:36.232010 kernel: Modules: 17520 pages in range for non-PLT usage
Jan 13 21:10:36.232028 kernel: Modules: 509040 pages in range for PLT usage
Jan 13 21:10:36.232047 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:10:36.232066 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:10:36.232090 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 21:10:36.232109 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 21:10:36.232128 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:10:36.232146 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:10:36.232166 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 21:10:36.232185 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 21:10:36.237750 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:10:36.237775 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:10:36.237797 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:10:36.237831 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:10:36.237851 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:10:36.237871 kernel: ACPI: Interpreter enabled
Jan 13 21:10:36.237890 kernel: ACPI: Using GIC for interrupt routing
Jan 13 21:10:36.237909 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 21:10:36.237929 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 13 21:10:36.238313 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:10:36.238583 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 21:10:36.238852 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 21:10:36.239077 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 13 21:10:36.239397 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 13 21:10:36.239436 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 13 21:10:36.239457 kernel: acpiphp: Slot [1] registered
Jan 13 21:10:36.239477 kernel: acpiphp: Slot [2] registered
Jan 13 21:10:36.239496 kernel: acpiphp: Slot [3] registered
Jan 13 21:10:36.239516 kernel: acpiphp: Slot [4] registered
Jan 13 21:10:36.239545 kernel: acpiphp: Slot [5] registered
Jan 13 21:10:36.239566 kernel: acpiphp: Slot [6] registered
Jan 13 21:10:36.239585 kernel: acpiphp: Slot [7] registered
Jan 13 21:10:36.239604 kernel: acpiphp: Slot [8] registered
Jan 13 21:10:36.239624 kernel: acpiphp: Slot [9] registered
Jan 13 21:10:36.239644 kernel: acpiphp: Slot [10] registered
Jan 13 21:10:36.239663 kernel: acpiphp: Slot [11] registered
Jan 13 21:10:36.239682 kernel: acpiphp: Slot [12] registered
Jan 13 21:10:36.239701 kernel: acpiphp: Slot [13] registered
Jan 13 21:10:36.239726 kernel: acpiphp: Slot [14] registered
Jan 13 21:10:36.239746 kernel: acpiphp: Slot [15] registered
Jan 13 21:10:36.239765 kernel: acpiphp: Slot [16] registered
Jan 13 21:10:36.239784 kernel: acpiphp: Slot [17] registered
Jan 13 21:10:36.239803 kernel: acpiphp: Slot [18] registered
Jan 13 21:10:36.239822 kernel: acpiphp: Slot [19] registered
Jan 13 21:10:36.239840 kernel: acpiphp: Slot [20] registered
Jan 13 21:10:36.239859 kernel: acpiphp: Slot [21] registered
Jan 13 21:10:36.239877 kernel: acpiphp: Slot [22] registered
Jan 13 21:10:36.239896 kernel: acpiphp: Slot [23] registered
Jan 13 21:10:36.239920 kernel: acpiphp: Slot [24] registered
Jan 13 21:10:36.239940 kernel: acpiphp: Slot [25] registered
Jan 13 21:10:36.239959 kernel: acpiphp: Slot [26] registered
Jan 13 21:10:36.239977 kernel: acpiphp: Slot [27] registered
Jan 13 21:10:36.239995 kernel: acpiphp: Slot [28] registered
Jan 13 21:10:36.240014 kernel: acpiphp: Slot [29] registered
Jan 13 21:10:36.240034 kernel: acpiphp: Slot [30] registered
Jan 13 21:10:36.240053 kernel: acpiphp: Slot [31] registered
Jan 13 21:10:36.240077 kernel: PCI host bridge to bus 0000:00
Jan 13 21:10:36.240463 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 13 21:10:36.240685 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 21:10:36.240895 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 13 21:10:36.241099 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 13 21:10:36.245525 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 13 21:10:36.245926 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 13 21:10:36.246266 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 13 21:10:36.246570 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 21:10:36.246801 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 13 21:10:36.247025 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 21:10:36.249408 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 21:10:36.249677 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 13 21:10:36.249898 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 13 21:10:36.250124 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 13 21:10:36.251576 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 21:10:36.251913 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 13 21:10:36.252155 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 13 21:10:36.252483 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 13 21:10:36.252736 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 13 21:10:36.252989 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 13 21:10:36.255352 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 13 21:10:36.255592 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 21:10:36.255786 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 13 21:10:36.255816 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 21:10:36.255838 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 21:10:36.255858 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 21:10:36.255878 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 21:10:36.255897 kernel: iommu: Default domain type: Translated
Jan 13 21:10:36.255929 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 21:10:36.255950 kernel: efivars: Registered efivars operations
Jan 13 21:10:36.255970 kernel: vgaarb: loaded
Jan 13 21:10:36.255988 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 21:10:36.256007 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:10:36.256026 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:10:36.256045 kernel: pnp: PnP ACPI init
Jan 13 21:10:36.257424 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 13 21:10:36.257480 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 21:10:36.257514 kernel: NET: Registered PF_INET protocol family
Jan 13 21:10:36.257537 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:10:36.257557 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:10:36.257577 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:10:36.257597 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:10:36.257618 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:10:36.257638 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:10:36.257658 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:10:36.257677 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:10:36.257706 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:10:36.257727 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:10:36.257748 kernel: kvm [1]: HYP mode not available
Jan 13 21:10:36.257770 kernel: Initialise system trusted keyrings
Jan 13 21:10:36.257790 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:10:36.257813 kernel: Key type asymmetric registered
Jan 13 21:10:36.257838 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:10:36.257860 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 21:10:36.257881 kernel: io scheduler mq-deadline registered
Jan 13 21:10:36.257911 kernel: io scheduler kyber registered
Jan 13 21:10:36.257933 kernel: io scheduler bfq registered
Jan 13 21:10:36.258309 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 13 21:10:36.258362 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 21:10:36.258382 kernel: ACPI: button: Power Button [PWRB]
Jan 13 21:10:36.258402 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 13 21:10:36.258422 kernel: ACPI: button: Sleep Button [SLPB]
Jan 13 21:10:36.258441 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:10:36.258475 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 21:10:36.258736 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 13 21:10:36.258766 kernel: printk: console [ttyS0] disabled
Jan 13 21:10:36.258785 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 13 21:10:36.258804 kernel: printk: console [ttyS0] enabled
Jan 13 21:10:36.258823 kernel: printk: bootconsole [uart0] disabled
Jan 13 21:10:36.258842 kernel: thunder_xcv, ver 1.0
Jan 13 21:10:36.258861 kernel: thunder_bgx, ver 1.0
Jan 13 21:10:36.258879 kernel: nicpf, ver 1.0
Jan 13 21:10:36.258905 kernel: nicvf, ver 1.0
Jan 13 21:10:36.259137 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 21:10:36.259998 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T21:10:35 UTC (1736802635)
Jan 13 21:10:36.260054 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 21:10:36.260076 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 13 21:10:36.260097 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 21:10:36.260118 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 21:10:36.260137 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:10:36.260170 kernel: Segment Routing with IPv6
Jan 13 21:10:36.260222 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:10:36.260248 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:10:36.260268 kernel: Key type dns_resolver registered
Jan 13 21:10:36.260288 kernel: registered taskstats version 1
Jan 13 21:10:36.260308 kernel: Loading compiled-in X.509 certificates
Jan 13 21:10:36.260327 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638'
Jan 13 21:10:36.260347 kernel: Key type .fscrypt registered
Jan 13 21:10:36.260365 kernel: Key type fscrypt-provisioning registered
Jan 13 21:10:36.260394 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:10:36.260414 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:10:36.260433 kernel: ima: No architecture policies found
Jan 13 21:10:36.260452 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 21:10:36.260471 kernel: clk: Disabling unused clocks
Jan 13 21:10:36.260490 kernel: Freeing unused kernel memory: 39360K
Jan 13 21:10:36.260509 kernel: Run /init as init process
Jan 13 21:10:36.260527 kernel: with arguments:
Jan 13 21:10:36.260547 kernel: /init
Jan 13 21:10:36.260567 kernel: with environment:
Jan 13 21:10:36.260593 kernel: HOME=/
Jan 13 21:10:36.260613 kernel: TERM=linux
Jan 13 21:10:36.260632 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:10:36.260657 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:10:36.260682 systemd[1]: Detected virtualization amazon.
Jan 13 21:10:36.260703 systemd[1]: Detected architecture arm64.
Jan 13 21:10:36.260724 systemd[1]: Running in initrd.
Jan 13 21:10:36.260752 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:10:36.260773 systemd[1]: Hostname set to <localhost>.
Jan 13 21:10:36.260795 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:10:36.260815 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:10:36.260835 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:10:36.260856 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:10:36.260878 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:10:36.260900 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:10:36.260928 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:10:36.260950 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:10:36.260974 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:10:36.260996 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:10:36.261017 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:10:36.261038 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:10:36.261059 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:10:36.261087 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:10:36.261107 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:10:36.261127 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:10:36.261149 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:10:36.261171 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:10:36.261243 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:10:36.261274 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:10:36.261296 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:10:36.261328 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:10:36.261349 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:10:36.261371 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:10:36.261393 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:10:36.264379 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:10:36.264416 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:10:36.264437 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:10:36.264458 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:10:36.264480 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:10:36.264517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:10:36.264538 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:10:36.264559 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:10:36.264580 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:10:36.264666 systemd-journald[251]: Collecting audit messages is disabled.
Jan 13 21:10:36.264724 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:10:36.264746 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:10:36.264768 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:10:36.264796 systemd-journald[251]: Journal started
Jan 13 21:10:36.264836 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2301f4aea5273e5bf7021b76549df2) is 8.0M, max 75.3M, 67.3M free.
Jan 13 21:10:36.246422 systemd-modules-load[252]: Inserted module 'overlay'
Jan 13 21:10:36.272244 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:10:36.273274 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:10:36.296251 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:10:36.300888 kernel: Bridge firewalling registered
Jan 13 21:10:36.299943 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 13 21:10:36.303080 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:10:36.310471 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:10:36.327640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:10:36.344637 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:10:36.366292 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:10:36.372245 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:10:36.388143 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:10:36.400471 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:10:36.408089 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:10:36.423539 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:10:36.451353 dracut-cmdline[286]: dracut-dracut-053
Jan 13 21:10:36.458276 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:10:36.516870 systemd-resolved[288]: Positive Trust Anchors:
Jan 13 21:10:36.516915 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:10:36.516975 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:10:36.627260 kernel: SCSI subsystem initialized
Jan 13 21:10:36.637225 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:10:36.648253 kernel: iscsi: registered transport (tcp)
Jan 13 21:10:36.671241 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:10:36.672230 kernel: QLogic iSCSI HBA Driver
Jan 13 21:10:36.743236 kernel: random: crng init done
Jan 13 21:10:36.743473 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jan 13 21:10:36.747115 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:10:36.750684 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:10:36.780653 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:10:36.801710 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:10:36.835712 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:10:36.835792 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:10:36.837841 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:10:36.909282 kernel: raid6: neonx8 gen() 6710 MB/s
Jan 13 21:10:36.926255 kernel: raid6: neonx4 gen() 6524 MB/s
Jan 13 21:10:36.943253 kernel: raid6: neonx2 gen() 5436 MB/s
Jan 13 21:10:36.960267 kernel: raid6: neonx1 gen() 3940 MB/s
Jan 13 21:10:36.977252 kernel: raid6: int64x8 gen() 3796 MB/s
Jan 13 21:10:36.994250 kernel: raid6: int64x4 gen() 3706 MB/s
Jan 13 21:10:37.011276 kernel: raid6: int64x2 gen() 3602 MB/s
Jan 13 21:10:37.029078 kernel: raid6: int64x1 gen() 2727 MB/s
Jan 13 21:10:37.029152 kernel: raid6: using algorithm neonx8 gen() 6710 MB/s
Jan 13 21:10:37.047062 kernel: raid6: .... xor() 4676 MB/s, rmw enabled
Jan 13 21:10:37.047150 kernel: raid6: using neon recovery algorithm
Jan 13 21:10:37.056408 kernel: xor: measuring software checksum speed
Jan 13 21:10:37.056496 kernel: 8regs : 10983 MB/sec
Jan 13 21:10:37.057545 kernel: 32regs : 11793 MB/sec
Jan 13 21:10:37.058773 kernel: arm64_neon : 9518 MB/sec
Jan 13 21:10:37.058841 kernel: xor: using function: 32regs (11793 MB/sec)
Jan 13 21:10:37.148245 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:10:37.172049 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:10:37.180541 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:10:37.227872 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Jan 13 21:10:37.238478 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:10:37.256873 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:10:37.303728 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation
Jan 13 21:10:37.359616 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:10:37.368537 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:10:37.483376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:10:37.505535 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:10:37.563019 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:10:37.569072 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:10:37.571352 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:10:37.573916 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:10:37.585112 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:10:37.628234 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:10:37.682879 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 21:10:37.682951 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 13 21:10:37.722719 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 21:10:37.722987 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 21:10:37.725267 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:45:e0:13:ed:03
Jan 13 21:10:37.693352 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:10:37.693618 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:10:37.696209 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:10:37.698474 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:10:37.698748 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:10:37.700992 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:10:37.720751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:10:37.723918 (udev-worker)[522]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:10:37.768351 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 13 21:10:37.768410 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 21:10:37.778254 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 21:10:37.789300 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:10:37.800356 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:10:37.800394 kernel: GPT:9289727 != 16777215
Jan 13 21:10:37.800420 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:10:37.800446 kernel: GPT:9289727 != 16777215
Jan 13 21:10:37.800471 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:10:37.800497 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:10:37.804595 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:10:37.846349 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:10:37.895242 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (527)
Jan 13 21:10:37.966824 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 21:10:38.004482 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 21:10:38.023502 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (534)
Jan 13 21:10:38.047127 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 21:10:38.104456 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 21:10:38.109440 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 21:10:38.121614 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:10:38.148951 disk-uuid[661]: Primary Header is updated.
Jan 13 21:10:38.148951 disk-uuid[661]: Secondary Entries is updated.
Jan 13 21:10:38.148951 disk-uuid[661]: Secondary Header is updated.
Jan 13 21:10:38.162252 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:10:38.172251 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:10:38.179256 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:10:39.181416 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 21:10:39.184526 disk-uuid[662]: The operation has completed successfully.
Jan 13 21:10:39.381675 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:10:39.383496 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:10:39.421283 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:10:39.430518 sh[1005]: Success
Jan 13 21:10:39.456473 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 21:10:39.583531 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:10:39.590126 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:10:39.595291 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:10:39.640025 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234
Jan 13 21:10:39.640111 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:10:39.640141 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:10:39.642925 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:10:39.642996 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:10:39.746234 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 21:10:39.760381 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:10:39.761706 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:10:39.778517 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:10:39.785541 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:10:39.818219 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:10:39.818308 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:10:39.819834 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:10:39.827299 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:10:39.844555 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:10:39.847351 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:10:39.856513 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:10:39.867770 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:10:39.975144 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:10:39.987544 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:10:40.035393 systemd-networkd[1198]: lo: Link UP
Jan 13 21:10:40.035882 systemd-networkd[1198]: lo: Gained carrier
Jan 13 21:10:40.038712 systemd-networkd[1198]: Enumeration completed
Jan 13 21:10:40.039588 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:10:40.039595 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:10:40.045128 systemd-networkd[1198]: eth0: Link UP
Jan 13 21:10:40.045139 systemd-networkd[1198]: eth0: Gained carrier
Jan 13 21:10:40.045748 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:10:40.068080 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:10:40.075861 systemd[1]: Reached target network.target - Network.
Jan 13 21:10:40.085332 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.25.188/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 21:10:40.344558 ignition[1106]: Ignition 2.19.0
Jan 13 21:10:40.344580 ignition[1106]: Stage: fetch-offline
Jan 13 21:10:40.345116 ignition[1106]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:10:40.345141 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:10:40.346169 ignition[1106]: Ignition finished successfully
Jan 13 21:10:40.356281 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:10:40.371598 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:10:40.403567 ignition[1206]: Ignition 2.19.0
Jan 13 21:10:40.403595 ignition[1206]: Stage: fetch
Jan 13 21:10:40.404270 ignition[1206]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:10:40.404296 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:10:40.404454 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:10:40.417391 ignition[1206]: PUT result: OK
Jan 13 21:10:40.420749 ignition[1206]: parsed url from cmdline: ""
Jan 13 21:10:40.420963 ignition[1206]: no config URL provided
Jan 13 21:10:40.420987 ignition[1206]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:10:40.421135 ignition[1206]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:10:40.421356 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:10:40.425302 ignition[1206]: PUT result: OK
Jan 13 21:10:40.425439 ignition[1206]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 21:10:40.432902 ignition[1206]: GET result: OK
Jan 13 21:10:40.433507 ignition[1206]: parsing config with SHA512: fc6117dbca03be2b17ef90e358b8d257380ee31131a58647785708dd9f9ca76c7472ad1ebef358bb4a1f264a56786ae3478ff70f47ef73816a1b572c0891e81a
Jan 13 21:10:40.443091 unknown[1206]: fetched base config from "system"
Jan 13 21:10:40.443464 unknown[1206]: fetched base config from "system"
Jan 13 21:10:40.444534 ignition[1206]: fetch: fetch complete
Jan 13 21:10:40.443479 unknown[1206]: fetched user config from "aws"
Jan 13 21:10:40.444546 ignition[1206]: fetch: fetch passed
Jan 13 21:10:40.445586 ignition[1206]: Ignition finished successfully
Jan 13 21:10:40.456274 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:10:40.464544 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:10:40.497382 ignition[1213]: Ignition 2.19.0
Jan 13 21:10:40.497415 ignition[1213]: Stage: kargs
Jan 13 21:10:40.498946 ignition[1213]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:10:40.498972 ignition[1213]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:10:40.499779 ignition[1213]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:10:40.505972 ignition[1213]: PUT result: OK
Jan 13 21:10:40.522216 ignition[1213]: kargs: kargs passed
Jan 13 21:10:40.522350 ignition[1213]: Ignition finished successfully
Jan 13 21:10:40.527254 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:10:40.540460 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:10:40.565068 ignition[1219]: Ignition 2.19.0
Jan 13 21:10:40.565097 ignition[1219]: Stage: disks
Jan 13 21:10:40.566807 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:10:40.566834 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:10:40.567947 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:10:40.572465 ignition[1219]: PUT result: OK
Jan 13 21:10:40.578816 ignition[1219]: disks: disks passed
Jan 13 21:10:40.579116 ignition[1219]: Ignition finished successfully
Jan 13 21:10:40.584262 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:10:40.586952 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:10:40.590983 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:10:40.593295 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:10:40.593421 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:10:40.593701 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:10:40.616556 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:10:40.664781 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:10:40.671766 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:10:40.683451 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:10:40.784244 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none.
Jan 13 21:10:40.785618 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:10:40.788853 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:10:40.803673 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:10:40.809471 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:10:40.813884 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:10:40.814183 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:10:40.814264 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:10:40.841183 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:10:40.852580 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:10:40.860223 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1247)
Jan 13 21:10:40.866825 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:10:40.866904 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:10:40.868563 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:10:40.878246 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:10:40.881178 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:10:41.324923 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:10:41.334018 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:10:41.343364 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:10:41.363790 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:10:41.562486 systemd-networkd[1198]: eth0: Gained IPv6LL
Jan 13 21:10:41.696167 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:10:41.709377 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:10:41.715517 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:10:41.732380 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:10:41.737481 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:10:41.780892 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:10:41.789112 ignition[1360]: INFO : Ignition 2.19.0
Jan 13 21:10:41.789112 ignition[1360]: INFO : Stage: mount
Jan 13 21:10:41.789112 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:10:41.789112 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:10:41.797013 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:10:41.797013 ignition[1360]: INFO : PUT result: OK
Jan 13 21:10:41.803812 ignition[1360]: INFO : mount: mount passed
Jan 13 21:10:41.805666 ignition[1360]: INFO : Ignition finished successfully
Jan 13 21:10:41.808988 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:10:41.820434 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:10:41.842653 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:10:41.867248 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1372)
Jan 13 21:10:41.871262 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:10:41.871309 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:10:41.871349 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 21:10:41.877222 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 21:10:41.880668 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:10:41.915559 ignition[1389]: INFO : Ignition 2.19.0
Jan 13 21:10:41.915559 ignition[1389]: INFO : Stage: files
Jan 13 21:10:41.918724 ignition[1389]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:10:41.918724 ignition[1389]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 21:10:41.918724 ignition[1389]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 21:10:41.926067 ignition[1389]: INFO : PUT result: OK
Jan 13 21:10:41.930229 ignition[1389]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:10:41.944142 ignition[1389]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:10:41.944142 ignition[1389]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:10:41.971626 ignition[1389]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:10:41.974625 ignition[1389]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:10:41.977669 unknown[1389]: wrote ssh authorized keys file for user: core
Jan 13 21:10:41.979966 ignition[1389]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:10:41.992658 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 21:10:41.992658 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 21:10:42.064080 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:10:42.538592 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 21:10:42.542905 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:10:42.542905 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 13 21:10:42.878374 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 21:10:43.006892 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:10:43.011324 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:10:43.011324 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:10:43.011324 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:10:43.021769 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:10:43.021769 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:10:43.021769 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:10:43.021769 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:10:43.021769 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:10:43.021769 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:10:43.021769 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:10:43.021769 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 21:10:43.021769 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 21:10:43.021769 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 21:10:43.021769 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 13 21:10:43.535119 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 21:10:44.339503 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 21:10:44.339503 ignition[1389]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 21:10:44.346414 ignition[1389]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:10:44.346414 ignition[1389]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:10:44.346414 ignition[1389]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 21:10:44.346414 ignition[1389]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:10:44.346414 ignition[1389]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:10:44.346414 ignition[1389]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:10:44.346414 ignition[1389]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:10:44.346414 ignition[1389]: INFO : files: files passed
Jan 13 21:10:44.346414 ignition[1389]: INFO : Ignition finished successfully
Jan 13 21:10:44.374989 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:10:44.384580 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:10:44.397505 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:10:44.402622 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:10:44.404417 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:10:44.440764 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:10:44.440764 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:10:44.446775 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:10:44.451779 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:10:44.454639 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:10:44.471476 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 21:10:44.526454 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:10:44.526831 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:10:44.535003 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:10:44.537003 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:10:44.539013 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:10:44.557687 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:10:44.586691 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:10:44.601987 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:10:44.624565 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:10:44.627320 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:10:44.631701 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:10:44.636873 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:10:44.637112 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:10:44.639831 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:10:44.641876 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:10:44.644115 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:10:44.651674 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:10:44.654018 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:10:44.656438 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:10:44.660089 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:10:44.662646 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:10:44.664841 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:10:44.666974 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:10:44.668665 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:10:44.668902 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:10:44.671402 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:10:44.673684 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:10:44.676056 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 13 21:10:44.676581 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:10:44.705482 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:10:44.705734 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:10:44.708838 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:10:44.709102 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:10:44.718721 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:10:44.719424 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:10:44.732607 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:10:44.735494 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:10:44.735766 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:10:44.758819 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:10:44.765404 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:10:44.765735 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:10:44.775530 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:10:44.777412 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:10:44.796667 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:10:44.799673 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:10:44.824702 ignition[1442]: INFO : Ignition 2.19.0 Jan 13 21:10:44.824702 ignition[1442]: INFO : Stage: umount Jan 13 21:10:44.831926 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:10:44.831926 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:10:44.831926 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:10:44.831926 ignition[1442]: INFO : PUT result: OK Jan 13 21:10:44.831074 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:10:44.844768 ignition[1442]: INFO : umount: umount passed Jan 13 21:10:44.844768 ignition[1442]: INFO : Ignition finished successfully Jan 13 21:10:44.848676 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:10:44.848904 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:10:44.854924 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:10:44.856609 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:10:44.859888 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:10:44.860058 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:10:44.862159 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:10:44.863610 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:10:44.865971 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:10:44.866406 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:10:44.869135 systemd[1]: Stopped target network.target - Network. Jan 13 21:10:44.870761 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:10:44.870855 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:10:44.873078 systemd[1]: Stopped target paths.target - Path Units. 
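
Both Ignition stages above (files and, below, umount) open with "PUT http://169.254.169.254/latest/api/token": the IMDSv2 session-token handshake that guards every later metadata read. A minimal stdlib sketch of the same flow; the header names are AWS's documented IMDSv2 headers, and the TTL value is an arbitrary choice:

    # IMDSv2 handshake: PUT /latest/api/token for a session token,
    # then present it on subsequent metadata GETs. Only works from
    # inside an EC2 instance, where 169.254.169.254 is reachable.
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds: int = 21600) -> str:
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        req = urllib.request.Request(
            f"{IMDS}{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        tok = imds_token()
        # same dated API path coreos-metadata uses later in this log
        print(imds_get("/2021-01-03/meta-data/instance-id", tok))
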
Jan 13 21:10:44.874712 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:10:44.891888 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:10:44.894524 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:10:44.896560 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:10:44.913225 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:10:44.913314 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:10:44.915765 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:10:44.915838 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:10:44.929490 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:10:44.929607 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:10:44.931626 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:10:44.931739 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:10:44.933786 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:10:44.933886 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:10:44.936156 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:10:44.938461 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:10:44.946835 systemd-networkd[1198]: eth0: DHCPv6 lease lost Jan 13 21:10:44.951061 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:10:44.953302 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 21:10:44.958972 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:10:44.959086 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:10:44.985453 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:10:44.987340 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:10:44.987656 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:10:44.994393 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:10:44.997025 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:10:44.997254 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:10:45.010818 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:10:45.010942 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:10:45.019231 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:10:45.019372 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:10:45.030502 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:10:45.030610 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:10:45.047108 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:10:45.047456 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:10:45.055615 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:10:45.055811 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:10:45.060806 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 13 21:10:45.060936 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:10:45.066936 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:10:45.067005 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:10:45.068985 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:10:45.069082 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:10:45.071419 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:10:45.071504 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:10:45.089894 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:10:45.090011 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:10:45.106619 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:10:45.110904 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:10:45.111032 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:10:45.115978 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:10:45.116076 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:10:45.122757 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:10:45.122961 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:10:45.139814 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:10:45.151489 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:10:45.167149 systemd[1]: Switching root. Jan 13 21:10:45.204459 systemd-journald[251]: Journal stopped Jan 13 21:10:47.659112 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jan 13 21:10:47.659365 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 21:10:47.659428 kernel: SELinux: policy capability open_perms=1 Jan 13 21:10:47.659464 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 21:10:47.659498 kernel: SELinux: policy capability always_check_network=0 Jan 13 21:10:47.659530 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 21:10:47.659564 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 21:10:47.659597 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 21:10:47.659629 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 21:10:47.659668 kernel: audit: type=1403 audit(1736802645.722:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 21:10:47.659714 systemd[1]: Successfully loaded SELinux policy in 77.441ms. Jan 13 21:10:47.659764 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.448ms. Jan 13 21:10:47.659801 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:10:47.659837 systemd[1]: Detected virtualization amazon. Jan 13 21:10:47.659868 systemd[1]: Detected architecture arm64. Jan 13 21:10:47.659900 systemd[1]: Detected first boot. Jan 13 21:10:47.659934 systemd[1]: Initializing machine ID from VM UUID. 
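
The "Initializing machine ID from VM UUID" line above refers to systemd seeding /etc/machine-id on first boot from the hypervisor-provided DMI product UUID. A rough sketch of where that value comes from; systemd's real derivation handles more cases, and the sysfs file below is readable by root only:

    # Read the DMI product UUID a VM exposes and normalize it to the
    # 32-lowercase-hex-digit machine-id format (no dashes). Simplified
    # sketch of the value source, not systemd's full logic.
    def vm_uuid_as_machine_id() -> str:
        with open("/sys/class/dmi/id/product_uuid") as f:
            uuid = f.read().strip()
        return uuid.replace("-", "").lower()

    print(vm_uuid_as_machine_id())
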
Jan 13 21:10:47.659967 zram_generator::config[1485]: No configuration found. Jan 13 21:10:47.660006 systemd[1]: Populated /etc with preset unit settings. Jan 13 21:10:47.660039 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 21:10:47.660071 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 21:10:47.660107 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 21:10:47.660141 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 21:10:47.660177 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 21:10:47.660486 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 21:10:47.660548 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 21:10:47.660588 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 21:10:47.660621 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 21:10:47.660658 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 21:10:47.660692 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 21:10:47.660724 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:10:47.660761 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:10:47.660794 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 21:10:47.660828 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 21:10:47.660862 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 21:10:47.660901 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:10:47.660934 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 21:10:47.660965 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:10:47.661001 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 21:10:47.661035 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 21:10:47.661070 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 21:10:47.661101 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 21:10:47.661139 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:10:47.661175 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:10:47.661277 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:10:47.661325 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:10:47.661362 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 21:10:47.661401 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 21:10:47.661437 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:10:47.661470 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:10:47.661506 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:10:47.661538 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 13 21:10:47.661582 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 21:10:47.661617 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 21:10:47.661648 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 21:10:47.661681 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 21:10:47.661716 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 21:10:47.661757 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 21:10:47.661790 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 21:10:47.661826 systemd[1]: Reached target machines.target - Containers. Jan 13 21:10:47.661860 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 21:10:47.661898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:10:47.661930 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:10:47.661962 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 21:10:47.661993 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:10:47.662029 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:10:47.662061 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:10:47.662103 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 21:10:47.662139 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:10:47.662176 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 21:10:47.662262 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 21:10:47.662302 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 21:10:47.662334 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 21:10:47.662366 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 21:10:47.662396 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:10:47.662427 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:10:47.662456 kernel: loop: module loaded Jan 13 21:10:47.662488 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 21:10:47.662528 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 21:10:47.662560 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:10:47.662594 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 21:10:47.662625 systemd[1]: Stopped verity-setup.service. Jan 13 21:10:47.662658 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 21:10:47.662693 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 21:10:47.662726 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 21:10:47.662760 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 21:10:47.662791 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 13 21:10:47.662827 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 21:10:47.662858 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:10:47.662887 kernel: ACPI: bus type drm_connector registered Jan 13 21:10:47.662919 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 21:10:47.662950 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 21:10:47.662985 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:10:47.663017 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:10:47.663050 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:10:47.663082 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:10:47.663113 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:10:47.663167 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:10:47.663263 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:10:47.663301 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:10:47.663339 kernel: fuse: init (API version 7.39) Jan 13 21:10:47.663371 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:10:47.663408 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 21:10:47.663439 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 21:10:47.663470 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 21:10:47.663504 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 21:10:47.663539 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 21:10:47.663572 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 21:10:47.663654 systemd-journald[1570]: Collecting audit messages is disabled. Jan 13 21:10:47.663707 systemd-journald[1570]: Journal started Jan 13 21:10:47.663758 systemd-journald[1570]: Runtime Journal (/run/log/journal/ec2301f4aea5273e5bf7021b76549df2) is 8.0M, max 75.3M, 67.3M free. Jan 13 21:10:47.667310 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 21:10:46.994399 systemd[1]: Queued start job for default target multi-user.target. Jan 13 21:10:47.053105 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 21:10:47.053989 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 21:10:47.693234 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 21:10:47.698843 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 21:10:47.698936 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:10:47.711586 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 21:10:47.728137 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 21:10:47.737382 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 21:10:47.743546 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 21:10:47.754773 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 21:10:47.754883 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:10:47.770404 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 21:10:47.770499 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:10:47.783157 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:10:47.796609 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:10:47.812327 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:10:47.819306 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:10:47.823963 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:10:47.826612 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:10:47.839905 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:10:47.891753 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:10:47.915842 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:10:47.928278 kernel: loop0: detected capacity change from 0 to 114328 Jan 13 21:10:47.927099 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:10:47.945581 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 21:10:47.955919 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:10:47.971708 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:10:47.978118 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:10:48.006772 systemd-journald[1570]: Time spent on flushing to /var/log/journal/ec2301f4aea5273e5bf7021b76549df2 is 91.260ms for 917 entries. Jan 13 21:10:48.006772 systemd-journald[1570]: System Journal (/var/log/journal/ec2301f4aea5273e5bf7021b76549df2) is 8.0M, max 195.6M, 187.6M free. Jan 13 21:10:48.115308 systemd-journald[1570]: Received client request to flush runtime journal. Jan 13 21:10:48.115661 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:10:48.036766 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:10:48.049626 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:10:48.082536 udevadm[1626]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:10:48.123021 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:10:48.158650 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:10:48.163908 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:10:48.187407 systemd-tmpfiles[1629]: ACLs are not supported, ignoring. Jan 13 21:10:48.187450 systemd-tmpfiles[1629]: ACLs are not supported, ignoring. 
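
A quick sanity check on the journal-flush figures reported above, 91.260 ms spent flushing 917 entries to the persistent journal:

    # Average per-entry cost of the runtime->persistent journal flush.
    flush_ms, entries = 91.260, 917
    print(f"{flush_ms / entries * 1000:.1f} µs per entry")  # ~99.5 µs/entry
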
Jan 13 21:10:48.195245 kernel: loop1: detected capacity change from 0 to 114432 Jan 13 21:10:48.202325 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:10:48.301240 kernel: loop2: detected capacity change from 0 to 189592 Jan 13 21:10:48.493594 kernel: loop3: detected capacity change from 0 to 52536 Jan 13 21:10:48.534249 kernel: loop4: detected capacity change from 0 to 114328 Jan 13 21:10:48.561252 kernel: loop5: detected capacity change from 0 to 114432 Jan 13 21:10:48.591769 kernel: loop6: detected capacity change from 0 to 189592 Jan 13 21:10:48.624232 kernel: loop7: detected capacity change from 0 to 52536 Jan 13 21:10:48.645081 (sd-merge)[1643]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 21:10:48.647154 (sd-merge)[1643]: Merged extensions into '/usr'. Jan 13 21:10:48.655649 systemd[1]: Reloading requested from client PID 1596 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:10:48.655872 systemd[1]: Reloading... Jan 13 21:10:48.851231 zram_generator::config[1668]: No configuration found. Jan 13 21:10:49.247968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:10:49.369072 systemd[1]: Reloading finished in 712 ms. Jan 13 21:10:49.414294 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:10:49.417697 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:10:49.433702 systemd[1]: Starting ensure-sysext.service... Jan 13 21:10:49.444491 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:10:49.451709 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:10:49.479489 systemd[1]: Reloading requested from client PID 1721 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:10:49.479512 systemd[1]: Reloading... Jan 13 21:10:49.518513 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:10:49.519998 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:10:49.523506 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:10:49.524308 systemd-tmpfiles[1722]: ACLs are not supported, ignoring. Jan 13 21:10:49.524559 systemd-tmpfiles[1722]: ACLs are not supported, ignoring. Jan 13 21:10:49.536357 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:10:49.539259 systemd-tmpfiles[1722]: Skipping /boot Jan 13 21:10:49.566418 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:10:49.566447 systemd-tmpfiles[1722]: Skipping /boot Jan 13 21:10:49.618885 systemd-udevd[1723]: Using default interface naming scheme 'v255'. Jan 13 21:10:49.740244 zram_generator::config[1757]: No configuration found. Jan 13 21:10:49.747714 ldconfig[1592]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 21:10:49.928376 (udev-worker)[1769]: Network interface NamePolicy= disabled on kernel command line. 
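
The (sd-merge) lines above are systemd-sysext discovering the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') and merging them into an overlay on /usr. A simplified sketch of the discovery step; the directories shown are the standard sysext search path (Flatcar's OEM images can arrive via other paths), and the mount in the trailing comment is only illustrative of the overlay's shape:

    # Enumerate *.raw sysext images the way systemd-sysext's discovery
    # step conceptually does, across the standard search directories.
    from pathlib import Path

    SEARCH_PATH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    images = sorted(
        p for d in SEARCH_PATH if Path(d).is_dir()
        for p in Path(d).glob("*.raw")
    )
    print("would merge:", [p.stem for p in images])
    # conceptually the merge becomes:
    #   mount -t overlay overlay \
    #     -o lowerdir=<each image's /usr>:/usr /usr
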
Jan 13 21:10:50.116407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:10:50.149232 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1774) Jan 13 21:10:50.274356 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:10:50.275071 systemd[1]: Reloading finished in 794 ms. Jan 13 21:10:50.318661 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:10:50.324274 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:10:50.327510 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:10:50.436242 systemd[1]: Finished ensure-sysext.service. Jan 13 21:10:50.475299 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:10:50.485426 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 21:10:50.498669 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:10:50.516721 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:10:50.519470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:10:50.525392 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:10:50.534747 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:10:50.542633 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:10:50.556596 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:10:50.565581 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:10:50.567940 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:10:50.574649 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:10:50.583560 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:10:50.593545 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:10:50.603585 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:10:50.606429 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:10:50.613563 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:10:50.622567 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:10:50.628139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:10:50.628608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:10:50.631641 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:10:50.631976 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:10:50.639552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 13 21:10:50.668224 lvm[1922]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:10:50.699439 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:10:50.703508 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:10:50.703964 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:10:50.710278 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:10:50.712870 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:10:50.718128 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:10:50.781261 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:10:50.811581 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:10:50.825716 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:10:50.831106 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:10:50.833769 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:10:50.842555 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:10:50.850224 augenrules[1957]: No rules Jan 13 21:10:50.855665 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:10:50.873287 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:10:50.878385 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:10:50.881313 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:10:50.889095 lvm[1963]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:10:50.890860 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:10:50.920279 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:10:50.944592 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:10:50.999361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:10:51.060480 systemd-networkd[1935]: lo: Link UP Jan 13 21:10:51.060507 systemd-networkd[1935]: lo: Gained carrier Jan 13 21:10:51.062808 systemd-resolved[1936]: Positive Trust Anchors: Jan 13 21:10:51.063485 systemd-resolved[1936]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:10:51.063562 systemd-resolved[1936]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:10:51.063630 systemd-networkd[1935]: Enumeration completed Jan 13 21:10:51.063829 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 13 21:10:51.069180 systemd-networkd[1935]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:10:51.069821 systemd-networkd[1935]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:10:51.072120 systemd-networkd[1935]: eth0: Link UP Jan 13 21:10:51.072788 systemd-networkd[1935]: eth0: Gained carrier Jan 13 21:10:51.072841 systemd-networkd[1935]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:10:51.073519 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:10:51.086337 systemd-networkd[1935]: eth0: DHCPv4 address 172.31.25.188/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 21:10:51.086864 systemd-resolved[1936]: Defaulting to hostname 'linux'. Jan 13 21:10:51.090623 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:10:51.093285 systemd[1]: Reached target network.target - Network. Jan 13 21:10:51.095428 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:10:51.099404 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:10:51.101598 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:10:51.104002 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:10:51.106766 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:10:51.109734 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:10:51.112421 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:10:51.114968 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:10:51.115025 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:10:51.116968 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:10:51.120916 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:10:51.126245 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:10:51.151940 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:10:51.155412 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:10:51.157982 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:10:51.160283 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:10:51.162928 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:10:51.162990 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:10:51.174366 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 21:10:51.181353 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:10:51.190654 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:10:51.199370 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:10:51.207531 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
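
The DHCPv4 lease logged above is internally consistent, as a short check with the stdlib ipaddress module shows: a /20 containing 172.31.25.188 starts at 172.31.16.0, and 172.31.16.1 is its first host address (the VPC router, by AWS convention):

    # Verify the lease arithmetic: address 172.31.25.188/20 implies
    # network 172.31.16.0/20, whose first usable host is the gateway.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.25.188/20")
    net = iface.network
    print(net)                # 172.31.16.0/20
    print(next(net.hosts()))  # 172.31.16.1
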
Jan 13 21:10:51.211970 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:10:51.220563 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:10:51.236465 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 21:10:51.248004 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 21:10:51.256012 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 21:10:51.265513 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:10:51.295871 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:10:51.309915 jq[1986]: false Jan 13 21:10:51.340774 extend-filesystems[1987]: Found loop4 Jan 13 21:10:51.340774 extend-filesystems[1987]: Found loop5 Jan 13 21:10:51.340774 extend-filesystems[1987]: Found loop6 Jan 13 21:10:51.340774 extend-filesystems[1987]: Found loop7 Jan 13 21:10:51.340774 extend-filesystems[1987]: Found nvme0n1 Jan 13 21:10:51.340774 extend-filesystems[1987]: Found nvme0n1p1 Jan 13 21:10:51.340774 extend-filesystems[1987]: Found nvme0n1p2 Jan 13 21:10:51.340774 extend-filesystems[1987]: Found nvme0n1p3 Jan 13 21:10:51.340774 extend-filesystems[1987]: Found usr Jan 13 21:10:51.340774 extend-filesystems[1987]: Found nvme0n1p4 Jan 13 21:10:51.340774 extend-filesystems[1987]: Found nvme0n1p6 Jan 13 21:10:51.340774 extend-filesystems[1987]: Found nvme0n1p7 Jan 13 21:10:51.340774 extend-filesystems[1987]: Found nvme0n1p9 Jan 13 21:10:51.340774 extend-filesystems[1987]: Checking size of /dev/nvme0n1p9 Jan 13 21:10:51.343629 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:10:51.348909 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 21:10:51.350969 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:10:51.373500 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:10:51.400591 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 21:10:51.410893 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:10:51.412103 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:10:51.424018 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 21:10:51.424910 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:10:51.455402 dbus-daemon[1985]: [system] SELinux support is enabled Jan 13 21:10:51.463557 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:10:51.474848 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:10:51.486501 extend-filesystems[1987]: Resized partition /dev/nvme0n1p9 Jan 13 21:10:51.476145 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 13 21:10:51.495677 jq[2002]: true Jan 13 21:10:51.496101 extend-filesystems[2022]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:10:51.481376 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:10:51.497762 dbus-daemon[1985]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1935 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 21:10:51.481422 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:10:51.511347 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 21:10:51.518128 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 21:10:51.535379 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:33 UTC 2025 (1): Starting Jan 13 21:10:51.535379 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:10:51.535379 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: ---------------------------------------------------- Jan 13 21:10:51.535379 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:10:51.535379 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:10:51.535379 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: corporation. Support and training for ntp-4 are Jan 13 21:10:51.535379 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: available at https://www.nwtime.org/support Jan 13 21:10:51.535379 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: ---------------------------------------------------- Jan 13 21:10:51.531846 ntpd[1989]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:33 UTC 2025 (1): Starting Jan 13 21:10:51.531902 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:10:51.536569 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 21:10:51.531926 ntpd[1989]: ---------------------------------------------------- Jan 13 21:10:51.531948 ntpd[1989]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:10:51.531970 ntpd[1989]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:10:51.556470 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: proto: precision = 0.096 usec (-23) Jan 13 21:10:51.556470 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: basedate set to 2025-01-01 Jan 13 21:10:51.556470 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: gps base set to 2025-01-05 (week 2348) Jan 13 21:10:51.556470 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:10:51.556470 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:10:51.531990 ntpd[1989]: corporation. 
Support and training for ntp-4 are Jan 13 21:10:51.532010 ntpd[1989]: available at https://www.nwtime.org/support Jan 13 21:10:51.556835 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:10:51.556835 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: Listen normally on 3 eth0 172.31.25.188:123 Jan 13 21:10:51.532030 ntpd[1989]: ---------------------------------------------------- Jan 13 21:10:51.556974 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: Listen normally on 4 lo [::1]:123 Jan 13 21:10:51.556974 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: bind(21) AF_INET6 fe80::445:e0ff:fe13:ed03%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:10:51.541378 ntpd[1989]: proto: precision = 0.096 usec (-23) Jan 13 21:10:51.557131 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: unable to create socket on eth0 (5) for fe80::445:e0ff:fe13:ed03%2#123 Jan 13 21:10:51.557131 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: failed to init interface for address fe80::445:e0ff:fe13:ed03%2 Jan 13 21:10:51.543374 ntpd[1989]: basedate set to 2025-01-01 Jan 13 21:10:51.565521 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: Listening on routing socket on fd #21 for interface updates Jan 13 21:10:51.543410 ntpd[1989]: gps base set to 2025-01-05 (week 2348) Jan 13 21:10:51.554482 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:10:51.555366 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:10:51.556723 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:10:51.556795 ntpd[1989]: Listen normally on 3 eth0 172.31.25.188:123 Jan 13 21:10:51.556861 ntpd[1989]: Listen normally on 4 lo [::1]:123 Jan 13 21:10:51.556938 ntpd[1989]: bind(21) AF_INET6 fe80::445:e0ff:fe13:ed03%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:10:51.556982 ntpd[1989]: unable to create socket on eth0 (5) for fe80::445:e0ff:fe13:ed03%2#123 Jan 13 21:10:51.557011 ntpd[1989]: failed to init interface for address fe80::445:e0ff:fe13:ed03%2 Jan 13 21:10:51.559414 ntpd[1989]: Listening on routing socket on fd #21 for interface updates Jan 13 21:10:51.572996 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:10:51.587648 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:10:51.587648 ntpd[1989]: 13 Jan 21:10:51 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:10:51.587760 update_engine[1999]: I20250113 21:10:51.576886 1999 main.cc:92] Flatcar Update Engine starting Jan 13 21:10:51.573117 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:10:51.597850 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:10:51.608703 update_engine[1999]: I20250113 21:10:51.603713 1999 update_check_scheduler.cc:74] Next update check in 11m30s Jan 13 21:10:51.607524 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:10:51.613940 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 21:10:51.621002 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:10:51.621464 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
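
The on-line resize logged above grows the root ext4 from 553472 to 1489915 blocks; at the 4 KiB block size extend-filesystems reports, that is roughly 2.1 GiB growing to 5.7 GiB:

    # Convert the ext4 block counts from the resize messages to GiB.
    BLOCK = 4096  # bytes per block, per the "(4k)" in the log
    for blocks in (553472, 1489915):
        print(f"{blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
    # 553472 blocks = 2.11 GiB
    # 1489915 blocks = 5.68 GiB
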
Jan 13 21:10:51.632819 tar[2006]: linux-arm64/helm Jan 13 21:10:51.627885 (ntainerd)[2031]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:10:51.638966 extend-filesystems[2022]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 21:10:51.638966 extend-filesystems[2022]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:10:51.638966 extend-filesystems[2022]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 21:10:51.666760 extend-filesystems[1987]: Resized filesystem in /dev/nvme0n1p9 Jan 13 21:10:51.641974 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:10:51.643617 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetch successful Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetch successful Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetch successful Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetch successful Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetch failed with 404: resource not found Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetch successful Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetch successful Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetch successful Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetch successful Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 21:10:51.711867 coreos-metadata[1984]: Jan 13 21:10:51.711 INFO Fetch successful Jan 13 21:10:51.739109 jq[2024]: true Jan 13 21:10:51.749939 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1756) Jan 13 
21:10:51.800838 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:10:51.883813 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 21:10:51.964378 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:10:51.969426 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:10:52.043979 bash[2124]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:10:52.054349 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:10:52.080587 systemd-logind[1997]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 21:10:52.080651 systemd-logind[1997]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 21:10:52.082722 systemd-logind[1997]: New seat seat0. Jan 13 21:10:52.097118 systemd[1]: Starting sshkeys.service... Jan 13 21:10:52.100514 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:10:52.133860 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:10:52.148983 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:10:52.249361 locksmithd[2037]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 21:10:52.335730 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 21:10:52.336077 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 21:10:52.337650 dbus-daemon[1985]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2027 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 21:10:52.363985 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 21:10:52.396036 polkitd[2163]: Started polkitd version 121 Jan 13 21:10:52.424504 polkitd[2163]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 21:10:52.424639 polkitd[2163]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 21:10:52.425779 polkitd[2163]: Finished loading, compiling and executing 2 rules Jan 13 21:10:52.448534 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 21:10:52.454674 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 13 21:10:52.462337 polkitd[2163]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 21:10:52.539627 ntpd[1989]: bind(24) AF_INET6 fe80::445:e0ff:fe13:ed03%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:10:52.539767 ntpd[1989]: unable to create socket on eth0 (6) for fe80::445:e0ff:fe13:ed03%2#123 Jan 13 21:10:52.540249 ntpd[1989]: 13 Jan 21:10:52 ntpd[1989]: bind(24) AF_INET6 fe80::445:e0ff:fe13:ed03%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:10:52.540249 ntpd[1989]: 13 Jan 21:10:52 ntpd[1989]: unable to create socket on eth0 (6) for fe80::445:e0ff:fe13:ed03%2#123 Jan 13 21:10:52.540249 ntpd[1989]: 13 Jan 21:10:52 ntpd[1989]: failed to init interface for address fe80::445:e0ff:fe13:ed03%2 Jan 13 21:10:52.539800 ntpd[1989]: failed to init interface for address fe80::445:e0ff:fe13:ed03%2 Jan 13 21:10:52.571232 coreos-metadata[2146]: Jan 13 21:10:52.567 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 21:10:52.571232 coreos-metadata[2146]: Jan 13 21:10:52.568 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 21:10:52.571232 coreos-metadata[2146]: Jan 13 21:10:52.570 INFO Fetch successful Jan 13 21:10:52.571232 coreos-metadata[2146]: Jan 13 21:10:52.570 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 21:10:52.573713 coreos-metadata[2146]: Jan 13 21:10:52.572 INFO Fetch successful Jan 13 21:10:52.578743 unknown[2146]: wrote ssh authorized keys file for user: core Jan 13 21:10:52.614152 systemd-hostnamed[2027]: Hostname set to <ip-172-31-25-188> (transient) Jan 13 21:10:52.614649 systemd-resolved[1936]: System hostname changed to 'ip-172-31-25-188'. Jan 13 21:10:52.653294 update-ssh-keys[2183]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:10:52.656308 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 21:10:52.672012 systemd[1]: Finished sshkeys.service. Jan 13 21:10:52.718055 containerd[2031]: time="2025-01-13T21:10:52.717908329Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 13 21:10:52.842219 containerd[2031]: time="2025-01-13T21:10:52.840107198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:10:52.846453 containerd[2031]: time="2025-01-13T21:10:52.846361262Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:10:52.846453 containerd[2031]: time="2025-01-13T21:10:52.846449618Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:10:52.846654 containerd[2031]: time="2025-01-13T21:10:52.846491102Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:10:52.849234 containerd[2031]: time="2025-01-13T21:10:52.846848390Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:10:52.849234 containerd[2031]: time="2025-01-13T21:10:52.846913706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 13 21:10:52.849234 containerd[2031]: time="2025-01-13T21:10:52.847084946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:10:52.849234 containerd[2031]: time="2025-01-13T21:10:52.847138538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:10:52.851228 containerd[2031]: time="2025-01-13T21:10:52.850277330Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:10:52.851228 containerd[2031]: time="2025-01-13T21:10:52.851226638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:10:52.851391 containerd[2031]: time="2025-01-13T21:10:52.851271338Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:10:52.851391 containerd[2031]: time="2025-01-13T21:10:52.851298686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:10:52.851595 containerd[2031]: time="2025-01-13T21:10:52.851535086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:10:52.852101 containerd[2031]: time="2025-01-13T21:10:52.852038606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:10:52.854047 containerd[2031]: time="2025-01-13T21:10:52.853975226Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:10:52.854047 containerd[2031]: time="2025-01-13T21:10:52.854039570Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:10:52.855170 containerd[2031]: time="2025-01-13T21:10:52.855064610Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 21:10:52.855459 containerd[2031]: time="2025-01-13T21:10:52.855277418Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:10:52.862222 containerd[2031]: time="2025-01-13T21:10:52.861635198Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 21:10:52.862222 containerd[2031]: time="2025-01-13T21:10:52.861751322Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:10:52.862222 containerd[2031]: time="2025-01-13T21:10:52.861789662Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:10:52.862222 containerd[2031]: time="2025-01-13T21:10:52.861826826Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:10:52.862222 containerd[2031]: time="2025-01-13T21:10:52.861859118Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 13 21:10:52.862222 containerd[2031]: time="2025-01-13T21:10:52.862122614Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:10:52.863686 containerd[2031]: time="2025-01-13T21:10:52.863616062Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:10:52.865217 containerd[2031]: time="2025-01-13T21:10:52.863945738Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:10:52.865217 containerd[2031]: time="2025-01-13T21:10:52.864013838Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:10:52.865217 containerd[2031]: time="2025-01-13T21:10:52.864049298Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:10:52.865217 containerd[2031]: time="2025-01-13T21:10:52.864082562Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 21:10:52.865217 containerd[2031]: time="2025-01-13T21:10:52.864114254Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:10:52.865217 containerd[2031]: time="2025-01-13T21:10:52.864144338Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:10:52.865612 containerd[2031]: time="2025-01-13T21:10:52.864176534Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:10:52.865681 containerd[2031]: time="2025-01-13T21:10:52.865642514Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:10:52.865730 containerd[2031]: time="2025-01-13T21:10:52.865691798Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:10:52.865778 containerd[2031]: time="2025-01-13T21:10:52.865736822Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:10:52.865778 containerd[2031]: time="2025-01-13T21:10:52.865767194Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:10:52.865885 containerd[2031]: time="2025-01-13T21:10:52.865809422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.865885 containerd[2031]: time="2025-01-13T21:10:52.865857626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.865974 containerd[2031]: time="2025-01-13T21:10:52.865892954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.865974 containerd[2031]: time="2025-01-13T21:10:52.865931990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.865974 containerd[2031]: time="2025-01-13T21:10:52.865962674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.866134 containerd[2031]: time="2025-01-13T21:10:52.865994606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 13 21:10:52.866134 containerd[2031]: time="2025-01-13T21:10:52.866023274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.866134 containerd[2031]: time="2025-01-13T21:10:52.866055554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.866134 containerd[2031]: time="2025-01-13T21:10:52.866086466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.866134 containerd[2031]: time="2025-01-13T21:10:52.866122310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.866466 containerd[2031]: time="2025-01-13T21:10:52.866156522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.866466 containerd[2031]: time="2025-01-13T21:10:52.866210354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.866466 containerd[2031]: time="2025-01-13T21:10:52.866251586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.866466 containerd[2031]: time="2025-01-13T21:10:52.866288678Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 21:10:52.866466 containerd[2031]: time="2025-01-13T21:10:52.866339126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.866466 containerd[2031]: time="2025-01-13T21:10:52.866369774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.866466 containerd[2031]: time="2025-01-13T21:10:52.866396726Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:10:52.868578 containerd[2031]: time="2025-01-13T21:10:52.866538986Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:10:52.868578 containerd[2031]: time="2025-01-13T21:10:52.866587406Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:10:52.868578 containerd[2031]: time="2025-01-13T21:10:52.866615474Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:10:52.868578 containerd[2031]: time="2025-01-13T21:10:52.866650154Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:10:52.868578 containerd[2031]: time="2025-01-13T21:10:52.866675018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:10:52.868578 containerd[2031]: time="2025-01-13T21:10:52.866704454Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:10:52.868578 containerd[2031]: time="2025-01-13T21:10:52.866728910Z" level=info msg="NRI interface is disabled by configuration." Jan 13 21:10:52.868578 containerd[2031]: time="2025-01-13T21:10:52.866758478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 21:10:52.869588 containerd[2031]: time="2025-01-13T21:10:52.869415026Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:10:52.869588 containerd[2031]: time="2025-01-13T21:10:52.869585870Z" level=info msg="Connect containerd service" Jan 13 21:10:52.869937 containerd[2031]: time="2025-01-13T21:10:52.869669450Z" level=info msg="using legacy CRI server" Jan 13 21:10:52.869937 containerd[2031]: time="2025-01-13T21:10:52.869691902Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:10:52.869937 containerd[2031]: time="2025-01-13T21:10:52.869857778Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:10:52.873212 containerd[2031]: time="2025-01-13T21:10:52.873114926Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:10:52.877231 
containerd[2031]: time="2025-01-13T21:10:52.874711070Z" level=info msg="Start subscribing containerd event" Jan 13 21:10:52.877231 containerd[2031]: time="2025-01-13T21:10:52.874861742Z" level=info msg="Start recovering state" Jan 13 21:10:52.877231 containerd[2031]: time="2025-01-13T21:10:52.875036330Z" level=info msg="Start event monitor" Jan 13 21:10:52.877231 containerd[2031]: time="2025-01-13T21:10:52.875070326Z" level=info msg="Start snapshots syncer" Jan 13 21:10:52.877231 containerd[2031]: time="2025-01-13T21:10:52.875096390Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:10:52.877231 containerd[2031]: time="2025-01-13T21:10:52.875226806Z" level=info msg="Start streaming server" Jan 13 21:10:52.879294 containerd[2031]: time="2025-01-13T21:10:52.876172874Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:10:52.879664 containerd[2031]: time="2025-01-13T21:10:52.879615530Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:10:52.882529 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:10:52.882968 containerd[2031]: time="2025-01-13T21:10:52.882749282Z" level=info msg="containerd successfully booted in 0.168127s" Jan 13 21:10:53.018373 systemd-networkd[1935]: eth0: Gained IPv6LL Jan 13 21:10:53.028997 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 21:10:53.035278 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 21:10:53.055819 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 21:10:53.068436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:10:53.079856 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 21:10:53.185277 amazon-ssm-agent[2192]: Initializing new seelog logger Jan 13 21:10:53.188239 amazon-ssm-agent[2192]: New Seelog Logger Creation Complete Jan 13 21:10:53.188239 amazon-ssm-agent[2192]: 2025/01/13 21:10:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:53.188239 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:53.189077 amazon-ssm-agent[2192]: 2025/01/13 21:10:53 processing appconfig overrides Jan 13 21:10:53.194039 amazon-ssm-agent[2192]: 2025/01/13 21:10:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:53.194039 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:53.194039 amazon-ssm-agent[2192]: 2025/01/13 21:10:53 processing appconfig overrides Jan 13 21:10:53.194039 amazon-ssm-agent[2192]: 2025/01/13 21:10:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:53.194039 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:53.194039 amazon-ssm-agent[2192]: 2025/01/13 21:10:53 processing appconfig overrides Jan 13 21:10:53.194039 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO Proxy environment variables: Jan 13 21:10:53.201335 amazon-ssm-agent[2192]: 2025/01/13 21:10:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:53.201335 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 21:10:53.201335 amazon-ssm-agent[2192]: 2025/01/13 21:10:53 processing appconfig overrides Jan 13 21:10:53.204338 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 13 21:10:53.297286 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO https_proxy: Jan 13 21:10:53.397290 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO http_proxy: Jan 13 21:10:53.410601 tar[2006]: linux-arm64/LICENSE Jan 13 21:10:53.410601 tar[2006]: linux-arm64/README.md Jan 13 21:10:53.451612 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:10:53.497318 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO no_proxy: Jan 13 21:10:53.599416 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO Checking if agent identity type OnPrem can be assumed Jan 13 21:10:53.698054 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO Checking if agent identity type EC2 can be assumed Jan 13 21:10:53.800266 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO Agent will take identity from EC2 Jan 13 21:10:53.901296 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:10:53.998921 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:10:54.097954 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:10:54.179365 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 21:10:54.181004 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 21:10:54.181004 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 21:10:54.181004 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 21:10:54.181004 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO [Registrar] Starting registrar module Jan 13 21:10:54.181525 amazon-ssm-agent[2192]: 2025-01-13 21:10:53 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 21:10:54.181525 amazon-ssm-agent[2192]: 2025-01-13 21:10:54 INFO [EC2Identity] EC2 registration was successful. Jan 13 21:10:54.181525 amazon-ssm-agent[2192]: 2025-01-13 21:10:54 INFO [CredentialRefresher] credentialRefresher has started Jan 13 21:10:54.181525 amazon-ssm-agent[2192]: 2025-01-13 21:10:54 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 21:10:54.181525 amazon-ssm-agent[2192]: 2025-01-13 21:10:54 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 21:10:54.197169 amazon-ssm-agent[2192]: 2025-01-13 21:10:54 INFO [CredentialRefresher] Next credential rotation will be in 31.7999503379 minutes Jan 13 21:10:54.374170 sshd_keygen[2030]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:10:54.423614 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:10:54.434759 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:10:54.454938 systemd[1]: Started sshd@0-172.31.25.188:22-139.178.89.65:48750.service - OpenSSH per-connection server daemon (139.178.89.65:48750). Jan 13 21:10:54.469818 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:10:54.470310 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:10:54.484431 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:10:54.516050 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:10:54.529870 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 13 21:10:54.542855 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:10:54.543605 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:10:54.713911 sshd[2222]: Accepted publickey for core from 139.178.89.65 port 48750 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:10:54.719724 sshd[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:10:54.739059 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:10:54.757686 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:10:54.765789 systemd-logind[1997]: New session 1 of user core. Jan 13 21:10:54.789108 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:10:54.801767 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:10:54.814716 (systemd)[2233]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:10:55.048868 systemd[2233]: Queued start job for default target default.target. Jan 13 21:10:55.056856 systemd[2233]: Created slice app.slice - User Application Slice. Jan 13 21:10:55.056929 systemd[2233]: Reached target paths.target - Paths. Jan 13 21:10:55.056964 systemd[2233]: Reached target timers.target - Timers. Jan 13 21:10:55.059907 systemd[2233]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:10:55.099886 systemd[2233]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:10:55.100139 systemd[2233]: Reached target sockets.target - Sockets. Jan 13 21:10:55.100178 systemd[2233]: Reached target basic.target - Basic System. Jan 13 21:10:55.100776 systemd[2233]: Reached target default.target - Main User Target. Jan 13 21:10:55.100856 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:10:55.101603 systemd[2233]: Startup finished in 273ms. Jan 13 21:10:55.114528 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:10:55.215897 amazon-ssm-agent[2192]: 2025-01-13 21:10:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 21:10:55.289838 systemd[1]: Started sshd@1-172.31.25.188:22-139.178.89.65:55280.service - OpenSSH per-connection server daemon (139.178.89.65:55280). Jan 13 21:10:55.318376 amazon-ssm-agent[2192]: 2025-01-13 21:10:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2244) started Jan 13 21:10:55.419305 amazon-ssm-agent[2192]: 2025-01-13 21:10:55 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 21:10:55.487939 sshd[2249]: Accepted publickey for core from 139.178.89.65 port 55280 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:10:55.492496 sshd[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:10:55.502133 systemd-logind[1997]: New session 2 of user core. Jan 13 21:10:55.507524 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 13 21:10:55.532660 ntpd[1989]: Listen normally on 7 eth0 [fe80::445:e0ff:fe13:ed03%2]:123 Jan 13 21:10:55.533496 ntpd[1989]: 13 Jan 21:10:55 ntpd[1989]: Listen normally on 7 eth0 [fe80::445:e0ff:fe13:ed03%2]:123 Jan 13 21:10:55.641165 sshd[2249]: pam_unix(sshd:session): session closed for user core Jan 13 21:10:55.649987 systemd[1]: sshd@1-172.31.25.188:22-139.178.89.65:55280.service: Deactivated successfully. Jan 13 21:10:55.659656 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:10:55.664238 systemd-logind[1997]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:10:55.677072 systemd-logind[1997]: Removed session 2. Jan 13 21:10:55.684972 systemd[1]: Started sshd@2-172.31.25.188:22-139.178.89.65:55284.service - OpenSSH per-connection server daemon (139.178.89.65:55284). Jan 13 21:10:55.696548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:10:55.699851 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:10:55.702166 systemd[1]: Startup finished in 1.264s (kernel) + 9.892s (initrd) + 10.055s (userspace) = 21.211s. Jan 13 21:10:55.718650 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:10:55.875660 sshd[2266]: Accepted publickey for core from 139.178.89.65 port 55284 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:10:55.878585 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:10:55.889595 systemd-logind[1997]: New session 3 of user core. Jan 13 21:10:55.895465 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:10:56.027516 sshd[2266]: pam_unix(sshd:session): session closed for user core Jan 13 21:10:56.034138 systemd[1]: sshd@2-172.31.25.188:22-139.178.89.65:55284.service: Deactivated successfully. Jan 13 21:10:56.037855 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:10:56.041446 systemd-logind[1997]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:10:56.045560 systemd-logind[1997]: Removed session 3. Jan 13 21:10:57.030696 kubelet[2268]: E0113 21:10:57.030608 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:10:57.033646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:10:57.033981 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:10:57.034490 systemd[1]: kubelet.service: Consumed 1.303s CPU time. Jan 13 21:10:58.194350 systemd-resolved[1936]: Clock change detected. Flushing caches. Jan 13 21:11:05.735466 systemd[1]: Started sshd@3-172.31.25.188:22-139.178.89.65:48098.service - OpenSSH per-connection server daemon (139.178.89.65:48098). Jan 13 21:11:05.905244 sshd[2285]: Accepted publickey for core from 139.178.89.65 port 48098 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:11:05.907775 sshd[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:11:05.916179 systemd-logind[1997]: New session 4 of user core. Jan 13 21:11:05.922244 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 13 21:11:06.050461 sshd[2285]: pam_unix(sshd:session): session closed for user core Jan 13 21:11:06.057110 systemd-logind[1997]: Session 4 logged out. Waiting for processes to exit. Jan 13 21:11:06.058333 systemd[1]: sshd@3-172.31.25.188:22-139.178.89.65:48098.service: Deactivated successfully. Jan 13 21:11:06.062622 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:11:06.064734 systemd-logind[1997]: Removed session 4. Jan 13 21:11:06.092518 systemd[1]: Started sshd@4-172.31.25.188:22-139.178.89.65:48102.service - OpenSSH per-connection server daemon (139.178.89.65:48102). Jan 13 21:11:06.267098 sshd[2292]: Accepted publickey for core from 139.178.89.65 port 48102 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:11:06.269676 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:11:06.277234 systemd-logind[1997]: New session 5 of user core. Jan 13 21:11:06.287275 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:11:06.405307 sshd[2292]: pam_unix(sshd:session): session closed for user core Jan 13 21:11:06.412466 systemd[1]: sshd@4-172.31.25.188:22-139.178.89.65:48102.service: Deactivated successfully. Jan 13 21:11:06.416740 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:11:06.418391 systemd-logind[1997]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:11:06.421102 systemd-logind[1997]: Removed session 5. Jan 13 21:11:06.446084 systemd[1]: Started sshd@5-172.31.25.188:22-139.178.89.65:48110.service - OpenSSH per-connection server daemon (139.178.89.65:48110). Jan 13 21:11:06.620471 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 48110 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:11:06.623224 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:11:06.632254 systemd-logind[1997]: New session 6 of user core. Jan 13 21:11:06.639305 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:11:06.744941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:11:06.755422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:06.770343 sshd[2299]: pam_unix(sshd:session): session closed for user core Jan 13 21:11:06.777583 systemd[1]: sshd@5-172.31.25.188:22-139.178.89.65:48110.service: Deactivated successfully. Jan 13 21:11:06.784097 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:11:06.788614 systemd-logind[1997]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:11:06.810578 systemd[1]: Started sshd@6-172.31.25.188:22-139.178.89.65:48122.service - OpenSSH per-connection server daemon (139.178.89.65:48122). Jan 13 21:11:06.813147 systemd-logind[1997]: Removed session 6. Jan 13 21:11:06.996088 sshd[2309]: Accepted publickey for core from 139.178.89.65 port 48122 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:11:07.000238 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:11:07.014084 systemd-logind[1997]: New session 7 of user core. Jan 13 21:11:07.019315 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:11:07.083129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:11:07.099794 (kubelet)[2317]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:11:07.157765 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 21:11:07.159408 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:11:07.179245 sudo[2322]: pam_unix(sudo:session): session closed for user root Jan 13 21:11:07.197281 kubelet[2317]: E0113 21:11:07.197175 2317 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:11:07.205217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:11:07.205579 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:11:07.206377 sshd[2309]: pam_unix(sshd:session): session closed for user core Jan 13 21:11:07.212708 systemd[1]: sshd@6-172.31.25.188:22-139.178.89.65:48122.service: Deactivated successfully. Jan 13 21:11:07.217267 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:11:07.220532 systemd-logind[1997]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:11:07.222974 systemd-logind[1997]: Removed session 7. Jan 13 21:11:07.244493 systemd[1]: Started sshd@7-172.31.25.188:22-139.178.89.65:48134.service - OpenSSH per-connection server daemon (139.178.89.65:48134). Jan 13 21:11:07.413679 sshd[2330]: Accepted publickey for core from 139.178.89.65 port 48134 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:11:07.416680 sshd[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:11:07.424488 systemd-logind[1997]: New session 8 of user core. Jan 13 21:11:07.436338 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 21:11:07.543324 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:11:07.544469 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:11:07.551180 sudo[2334]: pam_unix(sudo:session): session closed for user root Jan 13 21:11:07.562053 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:11:07.563261 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:11:07.585555 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:11:07.599161 auditctl[2337]: No rules Jan 13 21:11:07.600115 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:11:07.600617 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:11:07.613917 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:11:07.659824 augenrules[2355]: No rules Jan 13 21:11:07.662469 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:11:07.665285 sudo[2333]: pam_unix(sudo:session): session closed for user root Jan 13 21:11:07.689224 sshd[2330]: pam_unix(sshd:session): session closed for user core Jan 13 21:11:07.694387 systemd[1]: sshd@7-172.31.25.188:22-139.178.89.65:48134.service: Deactivated successfully. 
Jan 13 21:11:07.698408 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 21:11:07.702383 systemd-logind[1997]: Session 8 logged out. Waiting for processes to exit. Jan 13 21:11:07.704764 systemd-logind[1997]: Removed session 8. Jan 13 21:11:07.733520 systemd[1]: Started sshd@8-172.31.25.188:22-139.178.89.65:48136.service - OpenSSH per-connection server daemon (139.178.89.65:48136). Jan 13 21:11:07.903658 sshd[2363]: Accepted publickey for core from 139.178.89.65 port 48136 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:11:07.906482 sshd[2363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:11:07.916351 systemd-logind[1997]: New session 9 of user core. Jan 13 21:11:07.924307 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 21:11:08.027887 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:11:08.028592 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:11:08.622519 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:11:08.631626 (dockerd)[2382]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:11:09.123724 dockerd[2382]: time="2025-01-13T21:11:09.123609190Z" level=info msg="Starting up" Jan 13 21:11:09.308481 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1318256222-merged.mount: Deactivated successfully. Jan 13 21:11:09.336817 dockerd[2382]: time="2025-01-13T21:11:09.336723731Z" level=info msg="Loading containers: start." Jan 13 21:11:09.555039 kernel: Initializing XFRM netlink socket Jan 13 21:11:09.598185 (udev-worker)[2404]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:11:09.683090 systemd-networkd[1935]: docker0: Link UP Jan 13 21:11:09.707367 dockerd[2382]: time="2025-01-13T21:11:09.707274193Z" level=info msg="Loading containers: done." Jan 13 21:11:09.728626 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3599015452-merged.mount: Deactivated successfully. Jan 13 21:11:09.734701 dockerd[2382]: time="2025-01-13T21:11:09.734638285Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:11:09.734873 dockerd[2382]: time="2025-01-13T21:11:09.734781337Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:11:09.735007 dockerd[2382]: time="2025-01-13T21:11:09.734967553Z" level=info msg="Daemon has completed initialization" Jan 13 21:11:09.791843 dockerd[2382]: time="2025-01-13T21:11:09.791134921Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:11:09.791517 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:11:11.101093 containerd[2031]: time="2025-01-13T21:11:11.101033880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Jan 13 21:11:11.751780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3469298868.mount: Deactivated successfully. 
Jan 13 21:11:13.035039 containerd[2031]: time="2025-01-13T21:11:13.034250989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:13.036887 containerd[2031]: time="2025-01-13T21:11:13.036819685Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615585" Jan 13 21:11:13.037182 containerd[2031]: time="2025-01-13T21:11:13.037145893Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:13.042971 containerd[2031]: time="2025-01-13T21:11:13.042917977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:13.045573 containerd[2031]: time="2025-01-13T21:11:13.045292729Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 1.944194301s" Jan 13 21:11:13.045573 containerd[2031]: time="2025-01-13T21:11:13.045354361Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Jan 13 21:11:13.046743 containerd[2031]: time="2025-01-13T21:11:13.046445281Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Jan 13 21:11:14.485920 containerd[2031]: time="2025-01-13T21:11:14.485839816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:14.487785 containerd[2031]: time="2025-01-13T21:11:14.487686184Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470096" Jan 13 21:11:14.488920 containerd[2031]: time="2025-01-13T21:11:14.488805628Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:14.496239 containerd[2031]: time="2025-01-13T21:11:14.495066928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:14.497958 containerd[2031]: time="2025-01-13T21:11:14.497870008Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 1.451366839s" Jan 13 21:11:14.497958 containerd[2031]: time="2025-01-13T21:11:14.497944492Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Jan 13 21:11:14.498675 
containerd[2031]: time="2025-01-13T21:11:14.498612244Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Jan 13 21:11:15.709033 containerd[2031]: time="2025-01-13T21:11:15.707214042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:15.710039 containerd[2031]: time="2025-01-13T21:11:15.709974343Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024202" Jan 13 21:11:15.711731 containerd[2031]: time="2025-01-13T21:11:15.711657775Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:15.716783 containerd[2031]: time="2025-01-13T21:11:15.716683903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:15.719305 containerd[2031]: time="2025-01-13T21:11:15.719236051Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.220551459s" Jan 13 21:11:15.719461 containerd[2031]: time="2025-01-13T21:11:15.719301943Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Jan 13 21:11:15.720409 containerd[2031]: time="2025-01-13T21:11:15.719894971Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 21:11:17.013754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1372646328.mount: Deactivated successfully. Jan 13 21:11:17.422627 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:11:17.430452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 21:11:17.754241 containerd[2031]: time="2025-01-13T21:11:17.754053189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:17.757145 containerd[2031]: time="2025-01-13T21:11:17.757049721Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771426" Jan 13 21:11:17.762468 containerd[2031]: time="2025-01-13T21:11:17.762387465Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:17.769598 containerd[2031]: time="2025-01-13T21:11:17.769189533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:17.773594 containerd[2031]: time="2025-01-13T21:11:17.773348877Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 2.053386658s" Jan 13 21:11:17.773594 containerd[2031]: time="2025-01-13T21:11:17.773423397Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Jan 13 21:11:17.774893 containerd[2031]: time="2025-01-13T21:11:17.774595953Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:11:17.812284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:17.828587 (kubelet)[2599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:11:17.910174 kubelet[2599]: E0113 21:11:17.910081 2599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:11:17.913547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:11:17.913870 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:11:18.350076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1405074481.mount: Deactivated successfully. 
Jan 13 21:11:19.456950 containerd[2031]: time="2025-01-13T21:11:19.456448509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:19.458780 containerd[2031]: time="2025-01-13T21:11:19.458702829Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 13 21:11:19.460731 containerd[2031]: time="2025-01-13T21:11:19.460663089Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:19.465727 containerd[2031]: time="2025-01-13T21:11:19.465645477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:19.468564 containerd[2031]: time="2025-01-13T21:11:19.468353853Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.693620728s" Jan 13 21:11:19.468564 containerd[2031]: time="2025-01-13T21:11:19.468415689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 21:11:19.470073 containerd[2031]: time="2025-01-13T21:11:19.469873965Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 21:11:20.025532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3701116460.mount: Deactivated successfully. 
Jan 13 21:11:20.033938 containerd[2031]: time="2025-01-13T21:11:20.033859712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:20.035817 containerd[2031]: time="2025-01-13T21:11:20.035688140Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 13 21:11:20.037382 containerd[2031]: time="2025-01-13T21:11:20.037275968Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:20.042568 containerd[2031]: time="2025-01-13T21:11:20.041974412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:20.048060 containerd[2031]: time="2025-01-13T21:11:20.047519192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 577.581879ms" Jan 13 21:11:20.048060 containerd[2031]: time="2025-01-13T21:11:20.047603444Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 13 21:11:20.051610 containerd[2031]: time="2025-01-13T21:11:20.051533720Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 13 21:11:20.628919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4012280061.mount: Deactivated successfully. Jan 13 21:11:22.305980 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 13 21:11:22.667214 containerd[2031]: time="2025-01-13T21:11:22.667048333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:22.672152 containerd[2031]: time="2025-01-13T21:11:22.672076969Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Jan 13 21:11:22.678823 containerd[2031]: time="2025-01-13T21:11:22.677333293Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:22.683509 containerd[2031]: time="2025-01-13T21:11:22.683403769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:22.686056 containerd[2031]: time="2025-01-13T21:11:22.685981597Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.634376993s" Jan 13 21:11:22.686346 containerd[2031]: time="2025-01-13T21:11:22.686203033Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 13 21:11:27.921551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 21:11:27.933176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:28.249413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:28.258461 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:11:28.338443 kubelet[2738]: E0113 21:11:28.338378 2738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:11:28.342486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:11:28.343074 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:11:30.462954 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:30.473522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:30.538685 systemd[1]: Reloading requested from client PID 2753 ('systemctl') (unit session-9.scope)... Jan 13 21:11:30.538724 systemd[1]: Reloading... Jan 13 21:11:30.790042 zram_generator::config[2796]: No configuration found. Jan 13 21:11:31.018224 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:11:31.187883 systemd[1]: Reloading finished in 648 ms. Jan 13 21:11:31.284313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:11:31.296628 (kubelet)[2847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:11:31.301250 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:31.302143 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:11:31.303633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:31.314940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:31.640397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:31.640870 (kubelet)[2859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:11:31.729029 kubelet[2859]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:11:31.729029 kubelet[2859]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:11:31.729029 kubelet[2859]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:11:31.729029 kubelet[2859]: I0113 21:11:31.728766 2859 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:11:32.408785 kubelet[2859]: I0113 21:11:32.408721 2859 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:11:32.408785 kubelet[2859]: I0113 21:11:32.408770 2859 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:11:32.409292 kubelet[2859]: I0113 21:11:32.409251 2859 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:11:32.467319 kubelet[2859]: E0113 21:11:32.467255 2859 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.25.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.188:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:11:32.468553 kubelet[2859]: I0113 21:11:32.468292 2859 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:11:32.480194 kubelet[2859]: E0113 21:11:32.480084 2859 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:11:32.480194 kubelet[2859]: I0113 21:11:32.480145 2859 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:11:32.487309 kubelet[2859]: I0113 21:11:32.487240 2859 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:11:32.487540 kubelet[2859]: I0113 21:11:32.487509 2859 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:11:32.487884 kubelet[2859]: I0113 21:11:32.487821 2859 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:11:32.488199 kubelet[2859]: I0113 21:11:32.487877 2859 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-188","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:11:32.488393 kubelet[2859]: I0113 21:11:32.488249 2859 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:11:32.488393 kubelet[2859]: I0113 21:11:32.488271 2859 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:11:32.488518 kubelet[2859]: I0113 21:11:32.488451 2859 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:11:32.492917 kubelet[2859]: I0113 21:11:32.492411 2859 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:11:32.492917 kubelet[2859]: I0113 21:11:32.492467 2859 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:11:32.492917 kubelet[2859]: I0113 21:11:32.492518 2859 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:11:32.492917 kubelet[2859]: I0113 21:11:32.492539 2859 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:11:32.499558 kubelet[2859]: I0113 21:11:32.499320 2859 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:11:32.502299 kubelet[2859]: I0113 21:11:32.502260 2859 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:11:32.505033 kubelet[2859]: W0113 21:11:32.503638 2859 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 21:11:32.505033 kubelet[2859]: I0113 21:11:32.504711 2859 server.go:1269] "Started kubelet" Jan 13 21:11:32.505033 kubelet[2859]: W0113 21:11:32.504916 2859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-188&limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused Jan 13 21:11:32.505359 kubelet[2859]: E0113 21:11:32.505321 2859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-188&limit=500&resourceVersion=0\": dial tcp 172.31.25.188:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:11:32.510859 kubelet[2859]: W0113 21:11:32.509934 2859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused Jan 13 21:11:32.511972 kubelet[2859]: E0113 21:11:32.511919 2859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.188:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:11:32.512171 kubelet[2859]: I0113 21:11:32.511804 2859 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:11:32.512573 kubelet[2859]: I0113 21:11:32.512547 2859 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:11:32.513064 kubelet[2859]: I0113 21:11:32.512247 2859 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:11:32.516657 kubelet[2859]: I0113 21:11:32.511342 2859 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:11:32.518922 kubelet[2859]: I0113 21:11:32.518854 2859 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:11:32.521024 kubelet[2859]: I0113 21:11:32.520414 2859 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:11:32.522799 kubelet[2859]: I0113 21:11:32.522733 2859 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:11:32.523267 kubelet[2859]: E0113 21:11:32.523216 2859 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-188\" not found" Jan 13 21:11:32.527263 kubelet[2859]: I0113 21:11:32.527213 2859 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:11:32.527416 kubelet[2859]: I0113 21:11:32.527330 2859 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:11:32.529828 kubelet[2859]: E0113 21:11:32.529728 2859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-188?timeout=10s\": dial tcp 172.31.25.188:6443: connect: connection refused" interval="200ms" Jan 13 21:11:32.532368 kubelet[2859]: E0113 21:11:32.529931 2859 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://172.31.25.188:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.188:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-188.181a5cd8cbaedc7a default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-188,UID:ip-172-31-25-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-188,},FirstTimestamp:2025-01-13 21:11:32.504673402 +0000 UTC m=+0.854417573,LastTimestamp:2025-01-13 21:11:32.504673402 +0000 UTC m=+0.854417573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-188,}" Jan 13 21:11:32.532698 kubelet[2859]: I0113 21:11:32.532670 2859 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:11:32.532935 kubelet[2859]: I0113 21:11:32.532902 2859 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:11:32.533790 kubelet[2859]: W0113 21:11:32.533725 2859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused Jan 13 21:11:32.534092 kubelet[2859]: E0113 21:11:32.533977 2859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.188:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:11:32.537874 kubelet[2859]: I0113 21:11:32.537831 2859 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:11:32.566804 kubelet[2859]: I0113 21:11:32.566577 2859 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:11:32.568531 kubelet[2859]: E0113 21:11:32.568468 2859 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:11:32.572811 kubelet[2859]: I0113 21:11:32.572032 2859 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:11:32.572811 kubelet[2859]: I0113 21:11:32.572116 2859 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:11:32.572811 kubelet[2859]: I0113 21:11:32.572153 2859 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:11:32.572811 kubelet[2859]: E0113 21:11:32.572244 2859 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:11:32.574932 kubelet[2859]: W0113 21:11:32.574884 2859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused Jan 13 21:11:32.575337 kubelet[2859]: E0113 21:11:32.575298 2859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.188:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:11:32.586216 kubelet[2859]: I0113 21:11:32.586173 2859 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:11:32.586216 kubelet[2859]: I0113 21:11:32.586209 2859 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:11:32.586412 kubelet[2859]: I0113 21:11:32.586244 2859 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:11:32.589301 kubelet[2859]: I0113 21:11:32.589263 2859 policy_none.go:49] "None policy: Start" Jan 13 21:11:32.590572 kubelet[2859]: I0113 21:11:32.590526 2859 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:11:32.590572 kubelet[2859]: I0113 21:11:32.590585 2859 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:11:32.601669 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:11:32.622317 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 21:11:32.624097 kubelet[2859]: E0113 21:11:32.623381 2859 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-188\" not found" Jan 13 21:11:32.628624 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:11:32.640637 kubelet[2859]: I0113 21:11:32.640596 2859 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:11:32.641085 kubelet[2859]: I0113 21:11:32.641059 2859 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:11:32.641233 kubelet[2859]: I0113 21:11:32.641183 2859 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:11:32.642011 kubelet[2859]: I0113 21:11:32.641967 2859 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:11:32.644197 kubelet[2859]: E0113 21:11:32.644163 2859 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-188\" not found" Jan 13 21:11:32.691870 systemd[1]: Created slice kubepods-burstable-poda8b00dea2a42439bff5ddc36f01b367c.slice - libcontainer container kubepods-burstable-poda8b00dea2a42439bff5ddc36f01b367c.slice. 
Jan 13 21:11:32.711356 systemd[1]: Created slice kubepods-burstable-podb7ecc25f84f984e48dc9a090990a91f1.slice - libcontainer container kubepods-burstable-podb7ecc25f84f984e48dc9a090990a91f1.slice. Jan 13 21:11:32.728871 systemd[1]: Created slice kubepods-burstable-poda89e38a151c3dcae5616f67762a95123.slice - libcontainer container kubepods-burstable-poda89e38a151c3dcae5616f67762a95123.slice. Jan 13 21:11:32.730971 kubelet[2859]: E0113 21:11:32.730893 2859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-188?timeout=10s\": dial tcp 172.31.25.188:6443: connect: connection refused" interval="400ms" Jan 13 21:11:32.743824 kubelet[2859]: I0113 21:11:32.743218 2859 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-188" Jan 13 21:11:32.743824 kubelet[2859]: E0113 21:11:32.743748 2859 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.188:6443/api/v1/nodes\": dial tcp 172.31.25.188:6443: connect: connection refused" node="ip-172-31-25-188" Jan 13 21:11:32.828338 kubelet[2859]: I0113 21:11:32.828178 2859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7ecc25f84f984e48dc9a090990a91f1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"b7ecc25f84f984e48dc9a090990a91f1\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" Jan 13 21:11:32.828338 kubelet[2859]: I0113 21:11:32.828253 2859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7ecc25f84f984e48dc9a090990a91f1-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"b7ecc25f84f984e48dc9a090990a91f1\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" Jan 13 21:11:32.828338 kubelet[2859]: I0113 21:11:32.828311 2859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b7ecc25f84f984e48dc9a090990a91f1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"b7ecc25f84f984e48dc9a090990a91f1\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" Jan 13 21:11:32.828338 kubelet[2859]: I0113 21:11:32.828352 2859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a89e38a151c3dcae5616f67762a95123-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-188\" (UID: \"a89e38a151c3dcae5616f67762a95123\") " pod="kube-system/kube-scheduler-ip-172-31-25-188" Jan 13 21:11:32.828338 kubelet[2859]: I0113 21:11:32.828389 2859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8b00dea2a42439bff5ddc36f01b367c-ca-certs\") pod \"kube-apiserver-ip-172-31-25-188\" (UID: \"a8b00dea2a42439bff5ddc36f01b367c\") " pod="kube-system/kube-apiserver-ip-172-31-25-188" Jan 13 21:11:32.828783 kubelet[2859]: I0113 21:11:32.828429 2859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8b00dea2a42439bff5ddc36f01b367c-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-188\" (UID: \"a8b00dea2a42439bff5ddc36f01b367c\") " 
pod="kube-system/kube-apiserver-ip-172-31-25-188" Jan 13 21:11:32.828783 kubelet[2859]: I0113 21:11:32.828495 2859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8b00dea2a42439bff5ddc36f01b367c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-188\" (UID: \"a8b00dea2a42439bff5ddc36f01b367c\") " pod="kube-system/kube-apiserver-ip-172-31-25-188" Jan 13 21:11:32.828783 kubelet[2859]: I0113 21:11:32.828546 2859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7ecc25f84f984e48dc9a090990a91f1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"b7ecc25f84f984e48dc9a090990a91f1\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" Jan 13 21:11:32.828783 kubelet[2859]: I0113 21:11:32.828588 2859 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7ecc25f84f984e48dc9a090990a91f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"b7ecc25f84f984e48dc9a090990a91f1\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" Jan 13 21:11:32.946636 kubelet[2859]: I0113 21:11:32.945704 2859 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-188" Jan 13 21:11:32.946636 kubelet[2859]: E0113 21:11:32.946189 2859 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.188:6443/api/v1/nodes\": dial tcp 172.31.25.188:6443: connect: connection refused" node="ip-172-31-25-188" Jan 13 21:11:33.008263 containerd[2031]: time="2025-01-13T21:11:33.008199836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-188,Uid:a8b00dea2a42439bff5ddc36f01b367c,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:33.024214 containerd[2031]: time="2025-01-13T21:11:33.024112929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-188,Uid:b7ecc25f84f984e48dc9a090990a91f1,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:33.035476 containerd[2031]: time="2025-01-13T21:11:33.035365545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-188,Uid:a89e38a151c3dcae5616f67762a95123,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:33.131584 kubelet[2859]: E0113 21:11:33.131500 2859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-188?timeout=10s\": dial tcp 172.31.25.188:6443: connect: connection refused" interval="800ms" Jan 13 21:11:33.349232 kubelet[2859]: I0113 21:11:33.349076 2859 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-188" Jan 13 21:11:33.350156 kubelet[2859]: E0113 21:11:33.350106 2859 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.188:6443/api/v1/nodes\": dial tcp 172.31.25.188:6443: connect: connection refused" node="ip-172-31-25-188" Jan 13 21:11:33.372746 kubelet[2859]: W0113 21:11:33.372654 2859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused Jan 
13 21:11:33.373380 kubelet[2859]: E0113 21:11:33.372780 2859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.188:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:11:33.529875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1127768922.mount: Deactivated successfully. Jan 13 21:11:33.541439 containerd[2031]: time="2025-01-13T21:11:33.541359827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:33.542654 containerd[2031]: time="2025-01-13T21:11:33.542539067Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 13 21:11:33.544526 containerd[2031]: time="2025-01-13T21:11:33.544448831Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:33.547244 containerd[2031]: time="2025-01-13T21:11:33.547065875Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:11:33.548875 containerd[2031]: time="2025-01-13T21:11:33.548397947Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:11:33.548875 containerd[2031]: time="2025-01-13T21:11:33.548514827Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:33.550149 containerd[2031]: time="2025-01-13T21:11:33.550057223Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:33.552751 containerd[2031]: time="2025-01-13T21:11:33.552074843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.735159ms" Jan 13 21:11:33.569217 containerd[2031]: time="2025-01-13T21:11:33.569145671Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.921934ms" Jan 13 21:11:33.569503 containerd[2031]: time="2025-01-13T21:11:33.569449487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:33.576501 containerd[2031]: time="2025-01-13T21:11:33.576417575Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 540.935426ms" Jan 13 21:11:33.873643 kubelet[2859]: W0113 21:11:33.873575 2859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused Jan 13 21:11:33.874653 kubelet[2859]: E0113 21:11:33.873653 2859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.188:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:11:33.883939 containerd[2031]: time="2025-01-13T21:11:33.883626001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:33.883939 containerd[2031]: time="2025-01-13T21:11:33.883793149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:33.883939 containerd[2031]: time="2025-01-13T21:11:33.883856653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:33.888430 containerd[2031]: time="2025-01-13T21:11:33.888027169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:33.888430 containerd[2031]: time="2025-01-13T21:11:33.887721901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:33.888430 containerd[2031]: time="2025-01-13T21:11:33.887816077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:33.888430 containerd[2031]: time="2025-01-13T21:11:33.887853337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:33.889232 containerd[2031]: time="2025-01-13T21:11:33.888441985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:33.891847 containerd[2031]: time="2025-01-13T21:11:33.891626485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:33.891847 containerd[2031]: time="2025-01-13T21:11:33.891756685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:33.891847 containerd[2031]: time="2025-01-13T21:11:33.891794185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:33.894958 containerd[2031]: time="2025-01-13T21:11:33.894136141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:33.932470 kubelet[2859]: E0113 21:11:33.932379 2859 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-188?timeout=10s\": dial tcp 172.31.25.188:6443: connect: connection refused" interval="1.6s" Jan 13 21:11:33.952333 systemd[1]: Started cri-containerd-0d0fbb09afbf93ecd40d526f88e35d90556c5102feabc98fdeccf9f3983173e2.scope - libcontainer container 0d0fbb09afbf93ecd40d526f88e35d90556c5102feabc98fdeccf9f3983173e2. Jan 13 21:11:33.956384 systemd[1]: Started cri-containerd-2de552ec1c57a45c882bdf7c08c0319067419c089d1958a7f4db0948b9bdfe01.scope - libcontainer container 2de552ec1c57a45c882bdf7c08c0319067419c089d1958a7f4db0948b9bdfe01. Jan 13 21:11:33.961548 systemd[1]: Started cri-containerd-4629ebb085c4c8221bb993e5814d7ca535d813c3cead3c32902033acc2266ed2.scope - libcontainer container 4629ebb085c4c8221bb993e5814d7ca535d813c3cead3c32902033acc2266ed2. Jan 13 21:11:34.008290 kubelet[2859]: W0113 21:11:34.008146 2859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused Jan 13 21:11:34.008525 kubelet[2859]: E0113 21:11:34.008357 2859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.188:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:11:34.022954 kubelet[2859]: W0113 21:11:34.022793 2859 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-188&limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused Jan 13 21:11:34.022954 kubelet[2859]: E0113 21:11:34.022910 2859 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-188&limit=500&resourceVersion=0\": dial tcp 172.31.25.188:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:11:34.076098 containerd[2031]: time="2025-01-13T21:11:34.075344158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-188,Uid:a8b00dea2a42439bff5ddc36f01b367c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d0fbb09afbf93ecd40d526f88e35d90556c5102feabc98fdeccf9f3983173e2\"" Jan 13 21:11:34.089215 containerd[2031]: time="2025-01-13T21:11:34.088515370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-188,Uid:a89e38a151c3dcae5616f67762a95123,Namespace:kube-system,Attempt:0,} returns sandbox id \"2de552ec1c57a45c882bdf7c08c0319067419c089d1958a7f4db0948b9bdfe01\"" Jan 13 21:11:34.092842 containerd[2031]: time="2025-01-13T21:11:34.092373346Z" level=info msg="CreateContainer within sandbox \"0d0fbb09afbf93ecd40d526f88e35d90556c5102feabc98fdeccf9f3983173e2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:11:34.106827 containerd[2031]: time="2025-01-13T21:11:34.106743706Z" level=info msg="CreateContainer 
within sandbox \"2de552ec1c57a45c882bdf7c08c0319067419c089d1958a7f4db0948b9bdfe01\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:11:34.117892 containerd[2031]: time="2025-01-13T21:11:34.117798622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-188,Uid:b7ecc25f84f984e48dc9a090990a91f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4629ebb085c4c8221bb993e5814d7ca535d813c3cead3c32902033acc2266ed2\"" Jan 13 21:11:34.123039 containerd[2031]: time="2025-01-13T21:11:34.122780806Z" level=info msg="CreateContainer within sandbox \"4629ebb085c4c8221bb993e5814d7ca535d813c3cead3c32902033acc2266ed2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:11:34.144230 containerd[2031]: time="2025-01-13T21:11:34.142134298Z" level=info msg="CreateContainer within sandbox \"2de552ec1c57a45c882bdf7c08c0319067419c089d1958a7f4db0948b9bdfe01\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"68aa60eee57ec3576d4cf28207db2886478e4c2f94bb9e2f362abdcc8550decf\"" Jan 13 21:11:34.144230 containerd[2031]: time="2025-01-13T21:11:34.142690402Z" level=info msg="CreateContainer within sandbox \"0d0fbb09afbf93ecd40d526f88e35d90556c5102feabc98fdeccf9f3983173e2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eb9610731c9d6002c1fd7d120b66b8ee2e26fee56adf62a492057f327af6bd7b\"" Jan 13 21:11:34.145518 containerd[2031]: time="2025-01-13T21:11:34.145462042Z" level=info msg="StartContainer for \"eb9610731c9d6002c1fd7d120b66b8ee2e26fee56adf62a492057f327af6bd7b\"" Jan 13 21:11:34.147483 containerd[2031]: time="2025-01-13T21:11:34.147285382Z" level=info msg="StartContainer for \"68aa60eee57ec3576d4cf28207db2886478e4c2f94bb9e2f362abdcc8550decf\"" Jan 13 21:11:34.150419 containerd[2031]: time="2025-01-13T21:11:34.150360694Z" level=info msg="CreateContainer within sandbox \"4629ebb085c4c8221bb993e5814d7ca535d813c3cead3c32902033acc2266ed2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"712ce23223f79ee934081242dafe10bb233d34a926d29ebd8d41d71c1c6ee8c6\"" Jan 13 21:11:34.155482 containerd[2031]: time="2025-01-13T21:11:34.153615490Z" level=info msg="StartContainer for \"712ce23223f79ee934081242dafe10bb233d34a926d29ebd8d41d71c1c6ee8c6\"" Jan 13 21:11:34.157636 kubelet[2859]: I0113 21:11:34.157526 2859 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-188" Jan 13 21:11:34.158149 kubelet[2859]: E0113 21:11:34.158093 2859 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.188:6443/api/v1/nodes\": dial tcp 172.31.25.188:6443: connect: connection refused" node="ip-172-31-25-188" Jan 13 21:11:34.226744 systemd[1]: Started cri-containerd-68aa60eee57ec3576d4cf28207db2886478e4c2f94bb9e2f362abdcc8550decf.scope - libcontainer container 68aa60eee57ec3576d4cf28207db2886478e4c2f94bb9e2f362abdcc8550decf. Jan 13 21:11:34.240371 systemd[1]: Started cri-containerd-712ce23223f79ee934081242dafe10bb233d34a926d29ebd8d41d71c1c6ee8c6.scope - libcontainer container 712ce23223f79ee934081242dafe10bb233d34a926d29ebd8d41d71c1c6ee8c6. Jan 13 21:11:34.254699 systemd[1]: Started cri-containerd-eb9610731c9d6002c1fd7d120b66b8ee2e26fee56adf62a492057f327af6bd7b.scope - libcontainer container eb9610731c9d6002c1fd7d120b66b8ee2e26fee56adf62a492057f327af6bd7b. 
Jan 13 21:11:34.355228 containerd[2031]: time="2025-01-13T21:11:34.355164755Z" level=info msg="StartContainer for \"712ce23223f79ee934081242dafe10bb233d34a926d29ebd8d41d71c1c6ee8c6\" returns successfully" Jan 13 21:11:34.377879 containerd[2031]: time="2025-01-13T21:11:34.377543771Z" level=info msg="StartContainer for \"eb9610731c9d6002c1fd7d120b66b8ee2e26fee56adf62a492057f327af6bd7b\" returns successfully" Jan 13 21:11:34.385795 containerd[2031]: time="2025-01-13T21:11:34.385725551Z" level=info msg="StartContainer for \"68aa60eee57ec3576d4cf28207db2886478e4c2f94bb9e2f362abdcc8550decf\" returns successfully" Jan 13 21:11:35.760197 kubelet[2859]: I0113 21:11:35.760136 2859 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-188" Jan 13 21:11:36.153356 update_engine[1999]: I20250113 21:11:36.153031 1999 update_attempter.cc:509] Updating boot flags... Jan 13 21:11:36.297386 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3143) Jan 13 21:11:36.754142 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3142) Jan 13 21:11:37.260882 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3142) Jan 13 21:11:39.302862 kubelet[2859]: E0113 21:11:39.302249 2859 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-188\" not found" node="ip-172-31-25-188" Jan 13 21:11:39.399676 kubelet[2859]: I0113 21:11:39.398935 2859 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-25-188" Jan 13 21:11:39.399676 kubelet[2859]: E0113 21:11:39.399018 2859 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-25-188\": node \"ip-172-31-25-188\" not found" Jan 13 21:11:39.430290 kubelet[2859]: E0113 21:11:39.430125 2859 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-188.181a5cd8cbaedc7a default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-188,UID:ip-172-31-25-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-188,},FirstTimestamp:2025-01-13 21:11:32.504673402 +0000 UTC m=+0.854417573,LastTimestamp:2025-01-13 21:11:32.504673402 +0000 UTC m=+0.854417573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-188,}" Jan 13 21:11:39.499875 kubelet[2859]: E0113 21:11:39.499456 2859 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-188.181a5cd8cf7be6f6 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-188,UID:ip-172-31-25-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-25-188,},FirstTimestamp:2025-01-13 21:11:32.568442614 +0000 UTC m=+0.918186773,LastTimestamp:2025-01-13 21:11:32.568442614 +0000 UTC m=+0.918186773,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-188,}" 
Jan 13 21:11:39.511471 kubelet[2859]: I0113 21:11:39.511418 2859 apiserver.go:52] "Watching apiserver" Jan 13 21:11:39.528312 kubelet[2859]: I0113 21:11:39.528215 2859 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:11:39.561543 kubelet[2859]: E0113 21:11:39.559797 2859 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-188.181a5cd8d06b79be default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-188,UID:ip-172-31-25-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-25-188 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-25-188,},FirstTimestamp:2025-01-13 21:11:32.584143294 +0000 UTC m=+0.933887465,LastTimestamp:2025-01-13 21:11:32.584143294 +0000 UTC m=+0.933887465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-188,}" Jan 13 21:11:39.614975 kubelet[2859]: E0113 21:11:39.614675 2859 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-188.181a5cd8d06bd1f6 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-188,UID:ip-172-31-25-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-172-31-25-188 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-172-31-25-188,},FirstTimestamp:2025-01-13 21:11:32.584165878 +0000 UTC m=+0.933910049,LastTimestamp:2025-01-13 21:11:32.584165878 +0000 UTC m=+0.933910049,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-188,}" Jan 13 21:11:41.646358 systemd[1]: Reloading requested from client PID 3398 ('systemctl') (unit session-9.scope)... Jan 13 21:11:41.646393 systemd[1]: Reloading... Jan 13 21:11:41.855049 zram_generator::config[3447]: No configuration found. Jan 13 21:11:42.083887 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:11:42.310817 systemd[1]: Reloading finished in 663 ms. Jan 13 21:11:42.398418 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:42.424654 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:11:42.425486 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:42.425713 systemd[1]: kubelet.service: Consumed 1.639s CPU time, 116.2M memory peak, 0B memory swap peak. Jan 13 21:11:42.437775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:42.770384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:42.792587 (kubelet)[3497]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:11:42.893338 kubelet[3497]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:11:42.894273 kubelet[3497]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:11:42.894273 kubelet[3497]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:11:42.894273 kubelet[3497]: I0113 21:11:42.894043 3497 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:11:42.915391 kubelet[3497]: I0113 21:11:42.914134 3497 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:11:42.915698 kubelet[3497]: I0113 21:11:42.915668 3497 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:11:42.916300 kubelet[3497]: I0113 21:11:42.916263 3497 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:11:42.919500 kubelet[3497]: I0113 21:11:42.919460 3497 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:11:42.925745 kubelet[3497]: I0113 21:11:42.925702 3497 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:11:42.935059 kubelet[3497]: E0113 21:11:42.932816 3497 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:11:42.935059 kubelet[3497]: I0113 21:11:42.932887 3497 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:11:42.938389 kubelet[3497]: I0113 21:11:42.938351 3497 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:11:42.938823 kubelet[3497]: I0113 21:11:42.938799 3497 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:11:42.939420 kubelet[3497]: I0113 21:11:42.939368 3497 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:11:42.939944 kubelet[3497]: I0113 21:11:42.939602 3497 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-188","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:11:42.940239 kubelet[3497]: I0113 21:11:42.940213 3497 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:11:42.940345 kubelet[3497]: I0113 21:11:42.940326 3497 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:11:42.940516 kubelet[3497]: I0113 21:11:42.940494 3497 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:11:42.940872 kubelet[3497]: I0113 21:11:42.940816 3497 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:11:42.941746 kubelet[3497]: I0113 21:11:42.941709 3497 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:11:42.942028 kubelet[3497]: I0113 21:11:42.941963 3497 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:11:42.942172 kubelet[3497]: I0113 21:11:42.942152 3497 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:11:42.948153 kubelet[3497]: I0113 21:11:42.948112 3497 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:11:42.950602 kubelet[3497]: I0113 21:11:42.950556 3497 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:11:42.961747 kubelet[3497]: I0113 21:11:42.961706 3497 server.go:1269] "Started kubelet" Jan 13 21:11:42.977101 kubelet[3497]: I0113 21:11:42.977059 3497 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:11:42.988856 sudo[3511]: root : 
PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 21:11:42.989566 sudo[3511]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 21:11:43.010267 kubelet[3497]: I0113 21:11:42.977365 3497 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:11:43.018368 kubelet[3497]: I0113 21:11:43.017930 3497 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 21:11:43.019355 kubelet[3497]: E0113 21:11:43.018825 3497 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-188\" not found" Jan 13 21:11:43.030219 kubelet[3497]: I0113 21:11:42.977475 3497 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:11:43.030219 kubelet[3497]: I0113 21:11:43.027691 3497 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:11:43.030219 kubelet[3497]: I0113 21:11:42.978020 3497 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:11:43.030219 kubelet[3497]: I0113 21:11:43.020795 3497 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:11:43.030219 kubelet[3497]: I0113 21:11:43.024212 3497 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:11:43.030219 kubelet[3497]: I0113 21:11:43.025697 3497 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:11:43.050588 kubelet[3497]: I0113 21:11:43.046902 3497 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:11:43.050588 kubelet[3497]: I0113 21:11:43.047113 3497 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:11:43.054216 kubelet[3497]: E0113 21:11:43.053840 3497 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:11:43.061153 kubelet[3497]: I0113 21:11:43.061101 3497 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:11:43.090904 kubelet[3497]: I0113 21:11:43.090725 3497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:11:43.105812 kubelet[3497]: I0113 21:11:43.104317 3497 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:11:43.105812 kubelet[3497]: I0113 21:11:43.104405 3497 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:11:43.105812 kubelet[3497]: I0113 21:11:43.104461 3497 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:11:43.105812 kubelet[3497]: E0113 21:11:43.104569 3497 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:11:43.205007 kubelet[3497]: E0113 21:11:43.204922 3497 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:11:43.232376 kubelet[3497]: I0113 21:11:43.232320 3497 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:11:43.232376 kubelet[3497]: I0113 21:11:43.232359 3497 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:11:43.232579 kubelet[3497]: I0113 21:11:43.232396 3497 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:11:43.233241 kubelet[3497]: I0113 21:11:43.232653 3497 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:11:43.233241 kubelet[3497]: I0113 21:11:43.232687 3497 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:11:43.233241 kubelet[3497]: I0113 21:11:43.232727 3497 policy_none.go:49] "None policy: Start" Jan 13 21:11:43.234939 kubelet[3497]: I0113 21:11:43.234885 3497 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:11:43.234939 kubelet[3497]: I0113 21:11:43.234943 3497 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:11:43.236568 kubelet[3497]: I0113 21:11:43.235460 3497 state_mem.go:75] "Updated machine memory state" Jan 13 21:11:43.246704 kubelet[3497]: I0113 21:11:43.246439 3497 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:11:43.248282 kubelet[3497]: I0113 21:11:43.248247 3497 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:11:43.250490 kubelet[3497]: I0113 21:11:43.248581 3497 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:11:43.252197 kubelet[3497]: I0113 21:11:43.251713 3497 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:11:43.381327 kubelet[3497]: I0113 21:11:43.380695 3497 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-188" Jan 13 21:11:43.402631 kubelet[3497]: I0113 21:11:43.401892 3497 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-25-188" Jan 13 21:11:43.402631 kubelet[3497]: I0113 21:11:43.402128 3497 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-25-188" Jan 13 21:11:43.432738 kubelet[3497]: I0113 21:11:43.432669 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7ecc25f84f984e48dc9a090990a91f1-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"b7ecc25f84f984e48dc9a090990a91f1\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" Jan 13 21:11:43.432874 kubelet[3497]: I0113 21:11:43.432797 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7ecc25f84f984e48dc9a090990a91f1-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-25-188\" (UID: \"b7ecc25f84f984e48dc9a090990a91f1\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" Jan 13 21:11:43.432979 kubelet[3497]: I0113 21:11:43.432909 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a89e38a151c3dcae5616f67762a95123-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-188\" (UID: \"a89e38a151c3dcae5616f67762a95123\") " pod="kube-system/kube-scheduler-ip-172-31-25-188" Jan 13 21:11:43.433259 kubelet[3497]: I0113 21:11:43.433069 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8b00dea2a42439bff5ddc36f01b367c-ca-certs\") pod \"kube-apiserver-ip-172-31-25-188\" (UID: \"a8b00dea2a42439bff5ddc36f01b367c\") " pod="kube-system/kube-apiserver-ip-172-31-25-188" Jan 13 21:11:43.433358 kubelet[3497]: I0113 21:11:43.433317 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8b00dea2a42439bff5ddc36f01b367c-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-188\" (UID: \"a8b00dea2a42439bff5ddc36f01b367c\") " pod="kube-system/kube-apiserver-ip-172-31-25-188" Jan 13 21:11:43.433513 kubelet[3497]: I0113 21:11:43.433463 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8b00dea2a42439bff5ddc36f01b367c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-188\" (UID: \"a8b00dea2a42439bff5ddc36f01b367c\") " pod="kube-system/kube-apiserver-ip-172-31-25-188" Jan 13 21:11:43.433851 kubelet[3497]: I0113 21:11:43.433803 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7ecc25f84f984e48dc9a090990a91f1-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"b7ecc25f84f984e48dc9a090990a91f1\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" Jan 13 21:11:43.434049 kubelet[3497]: I0113 21:11:43.433935 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b7ecc25f84f984e48dc9a090990a91f1-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"b7ecc25f84f984e48dc9a090990a91f1\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" Jan 13 21:11:43.434138 kubelet[3497]: I0113 21:11:43.434105 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7ecc25f84f984e48dc9a090990a91f1-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"b7ecc25f84f984e48dc9a090990a91f1\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" Jan 13 21:11:43.437649 kubelet[3497]: E0113 21:11:43.437392 3497 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-25-188\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-188" Jan 13 21:11:43.437649 kubelet[3497]: E0113 21:11:43.437568 3497 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-25-188\" already exists" pod="kube-system/kube-scheduler-ip-172-31-25-188" Jan 13 21:11:43.945422 kubelet[3497]: I0113 21:11:43.945348 3497 apiserver.go:52] 
"Watching apiserver" Jan 13 21:11:44.010418 sudo[3511]: pam_unix(sudo:session): session closed for user root Jan 13 21:11:44.028841 kubelet[3497]: I0113 21:11:44.028741 3497 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:11:44.236103 kubelet[3497]: I0113 21:11:44.235437 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-188" podStartSLOduration=2.235412912 podStartE2EDuration="2.235412912s" podCreationTimestamp="2025-01-13 21:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:11:44.214850096 +0000 UTC m=+1.410727712" watchObservedRunningTime="2025-01-13 21:11:44.235412912 +0000 UTC m=+1.431290516" Jan 13 21:11:44.258502 kubelet[3497]: I0113 21:11:44.258404 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-188" podStartSLOduration=3.258380756 podStartE2EDuration="3.258380756s" podCreationTimestamp="2025-01-13 21:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:11:44.235794332 +0000 UTC m=+1.431671972" watchObservedRunningTime="2025-01-13 21:11:44.258380756 +0000 UTC m=+1.454258372" Jan 13 21:11:44.279050 kubelet[3497]: I0113 21:11:44.278155 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-188" podStartSLOduration=1.278130128 podStartE2EDuration="1.278130128s" podCreationTimestamp="2025-01-13 21:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:11:44.258932444 +0000 UTC m=+1.454810084" watchObservedRunningTime="2025-01-13 21:11:44.278130128 +0000 UTC m=+1.474007744" Jan 13 21:11:46.855204 sudo[2366]: pam_unix(sudo:session): session closed for user root Jan 13 21:11:46.882685 sshd[2363]: pam_unix(sshd:session): session closed for user core Jan 13 21:11:46.893672 systemd[1]: sshd@8-172.31.25.188:22-139.178.89.65:48136.service: Deactivated successfully. Jan 13 21:11:46.901739 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:11:46.903613 systemd[1]: session-9.scope: Consumed 11.218s CPU time, 154.5M memory peak, 0B memory swap peak. Jan 13 21:11:46.911128 systemd-logind[1997]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:11:46.914882 systemd-logind[1997]: Removed session 9. Jan 13 21:11:47.029874 kubelet[3497]: I0113 21:11:47.029740 3497 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:11:47.031166 containerd[2031]: time="2025-01-13T21:11:47.031064722Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:11:47.033082 kubelet[3497]: I0113 21:11:47.031488 3497 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:11:47.815598 systemd[1]: Created slice kubepods-burstable-pod8e82f6f9_ecfd_4a2a_82fb_f2fdea61c7e6.slice - libcontainer container kubepods-burstable-pod8e82f6f9_ecfd_4a2a_82fb_f2fdea61c7e6.slice. 
Jan 13 21:11:47.864456 kubelet[3497]: I0113 21:11:47.864387 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cni-path\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.864456 kubelet[3497]: I0113 21:11:47.864459 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-host-proc-sys-net\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.864681 kubelet[3497]: I0113 21:11:47.864502 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-host-proc-sys-kernel\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.864681 kubelet[3497]: I0113 21:11:47.864538 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-hubble-tls\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.864681 kubelet[3497]: I0113 21:11:47.864575 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnrx4\" (UniqueName: \"kubernetes.io/projected/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-kube-api-access-pnrx4\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.864681 kubelet[3497]: I0113 21:11:47.864633 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-lib-modules\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.864681 kubelet[3497]: I0113 21:11:47.864675 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-xtables-lock\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.864968 kubelet[3497]: I0113 21:11:47.864709 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-clustermesh-secrets\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.864968 kubelet[3497]: I0113 21:11:47.864749 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cilium-config-path\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.864968 kubelet[3497]: I0113 21:11:47.864784 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cilium-run\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.864968 kubelet[3497]: I0113 21:11:47.864817 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cilium-cgroup\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.864968 kubelet[3497]: I0113 21:11:47.864857 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-etc-cni-netd\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.864968 kubelet[3497]: I0113 21:11:47.864915 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-bpf-maps\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.868781 kubelet[3497]: I0113 21:11:47.864949 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-hostproc\") pod \"cilium-n26kq\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " pod="kube-system/cilium-n26kq" Jan 13 21:11:47.877363 systemd[1]: Created slice kubepods-besteffort-pod9c79dac2_e4dd_47f8_91f8_e420bf891e1f.slice - libcontainer container kubepods-besteffort-pod9c79dac2_e4dd_47f8_91f8_e420bf891e1f.slice. 
Jan 13 21:11:47.966088 kubelet[3497]: I0113 21:11:47.966008 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c79dac2-e4dd-47f8-91f8-e420bf891e1f-xtables-lock\") pod \"kube-proxy-w6l96\" (UID: \"9c79dac2-e4dd-47f8-91f8-e420bf891e1f\") " pod="kube-system/kube-proxy-w6l96" Jan 13 21:11:47.966269 kubelet[3497]: I0113 21:11:47.966162 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c79dac2-e4dd-47f8-91f8-e420bf891e1f-kube-proxy\") pod \"kube-proxy-w6l96\" (UID: \"9c79dac2-e4dd-47f8-91f8-e420bf891e1f\") " pod="kube-system/kube-proxy-w6l96" Jan 13 21:11:47.966269 kubelet[3497]: I0113 21:11:47.966208 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtn96\" (UniqueName: \"kubernetes.io/projected/9c79dac2-e4dd-47f8-91f8-e420bf891e1f-kube-api-access-xtn96\") pod \"kube-proxy-w6l96\" (UID: \"9c79dac2-e4dd-47f8-91f8-e420bf891e1f\") " pod="kube-system/kube-proxy-w6l96" Jan 13 21:11:47.966389 kubelet[3497]: I0113 21:11:47.966328 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c79dac2-e4dd-47f8-91f8-e420bf891e1f-lib-modules\") pod \"kube-proxy-w6l96\" (UID: \"9c79dac2-e4dd-47f8-91f8-e420bf891e1f\") " pod="kube-system/kube-proxy-w6l96" Jan 13 21:11:48.119250 systemd[1]: Created slice kubepods-besteffort-pod3c8a1815_aa70_4a68_80cf_69673d57f4f8.slice - libcontainer container kubepods-besteffort-pod3c8a1815_aa70_4a68_80cf_69673d57f4f8.slice. Jan 13 21:11:48.125262 containerd[2031]: time="2025-01-13T21:11:48.125175132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n26kq,Uid:8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:48.170745 kubelet[3497]: I0113 21:11:48.167712 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c8a1815-aa70-4a68-80cf-69673d57f4f8-cilium-config-path\") pod \"cilium-operator-5d85765b45-k9c7q\" (UID: \"3c8a1815-aa70-4a68-80cf-69673d57f4f8\") " pod="kube-system/cilium-operator-5d85765b45-k9c7q" Jan 13 21:11:48.170745 kubelet[3497]: I0113 21:11:48.167817 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv668\" (UniqueName: \"kubernetes.io/projected/3c8a1815-aa70-4a68-80cf-69673d57f4f8-kube-api-access-wv668\") pod \"cilium-operator-5d85765b45-k9c7q\" (UID: \"3c8a1815-aa70-4a68-80cf-69673d57f4f8\") " pod="kube-system/cilium-operator-5d85765b45-k9c7q" Jan 13 21:11:48.199099 containerd[2031]: time="2025-01-13T21:11:48.197529180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w6l96,Uid:9c79dac2-e4dd-47f8-91f8-e420bf891e1f,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:48.214507 containerd[2031]: time="2025-01-13T21:11:48.214329948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:48.214985 containerd[2031]: time="2025-01-13T21:11:48.214463508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:48.214985 containerd[2031]: time="2025-01-13T21:11:48.214507872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:48.215497 containerd[2031]: time="2025-01-13T21:11:48.215194728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:48.287548 systemd[1]: Started cri-containerd-f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc.scope - libcontainer container f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc. Jan 13 21:11:48.314926 containerd[2031]: time="2025-01-13T21:11:48.313854060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:48.314926 containerd[2031]: time="2025-01-13T21:11:48.313961220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:48.314926 containerd[2031]: time="2025-01-13T21:11:48.314051424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:48.317018 containerd[2031]: time="2025-01-13T21:11:48.316735956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:48.382461 systemd[1]: Started cri-containerd-a7847059e44250d6197ed91af17ba99107f329b5134727e8c384f13b93af64c8.scope - libcontainer container a7847059e44250d6197ed91af17ba99107f329b5134727e8c384f13b93af64c8. Jan 13 21:11:48.387734 containerd[2031]: time="2025-01-13T21:11:48.387660529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n26kq,Uid:8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\"" Jan 13 21:11:48.394602 containerd[2031]: time="2025-01-13T21:11:48.394518037Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:11:48.432024 containerd[2031]: time="2025-01-13T21:11:48.430856497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-k9c7q,Uid:3c8a1815-aa70-4a68-80cf-69673d57f4f8,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:48.447438 containerd[2031]: time="2025-01-13T21:11:48.447361669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w6l96,Uid:9c79dac2-e4dd-47f8-91f8-e420bf891e1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7847059e44250d6197ed91af17ba99107f329b5134727e8c384f13b93af64c8\"" Jan 13 21:11:48.456969 containerd[2031]: time="2025-01-13T21:11:48.456502537Z" level=info msg="CreateContainer within sandbox \"a7847059e44250d6197ed91af17ba99107f329b5134727e8c384f13b93af64c8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:11:48.488436 containerd[2031]: time="2025-01-13T21:11:48.488212273Z" level=info msg="CreateContainer within sandbox \"a7847059e44250d6197ed91af17ba99107f329b5134727e8c384f13b93af64c8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a8a32aa4fa03c1e5b47ac4abe6ce1d866f1a6588147242622d30f2238a0b2cb\"" Jan 13 21:11:48.490024 containerd[2031]: time="2025-01-13T21:11:48.489791293Z" level=info msg="StartContainer for 
\"3a8a32aa4fa03c1e5b47ac4abe6ce1d866f1a6588147242622d30f2238a0b2cb\"" Jan 13 21:11:48.495339 containerd[2031]: time="2025-01-13T21:11:48.494766757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:48.495339 containerd[2031]: time="2025-01-13T21:11:48.494904793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:48.495339 containerd[2031]: time="2025-01-13T21:11:48.494966473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:48.497221 containerd[2031]: time="2025-01-13T21:11:48.497038201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:48.547718 systemd[1]: Started cri-containerd-06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3.scope - libcontainer container 06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3. Jan 13 21:11:48.569396 systemd[1]: Started cri-containerd-3a8a32aa4fa03c1e5b47ac4abe6ce1d866f1a6588147242622d30f2238a0b2cb.scope - libcontainer container 3a8a32aa4fa03c1e5b47ac4abe6ce1d866f1a6588147242622d30f2238a0b2cb. Jan 13 21:11:48.666328 containerd[2031]: time="2025-01-13T21:11:48.665822726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-k9c7q,Uid:3c8a1815-aa70-4a68-80cf-69673d57f4f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\"" Jan 13 21:11:48.666792 containerd[2031]: time="2025-01-13T21:11:48.666526838Z" level=info msg="StartContainer for \"3a8a32aa4fa03c1e5b47ac4abe6ce1d866f1a6588147242622d30f2238a0b2cb\" returns successfully" Jan 13 21:11:53.132501 kubelet[3497]: I0113 21:11:53.132396 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w6l96" podStartSLOduration=6.132371008 podStartE2EDuration="6.132371008s" podCreationTimestamp="2025-01-13 21:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:11:49.235715017 +0000 UTC m=+6.431592645" watchObservedRunningTime="2025-01-13 21:11:53.132371008 +0000 UTC m=+10.328248624" Jan 13 21:11:53.963302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167339940.mount: Deactivated successfully. 
Jan 13 21:11:58.214808 containerd[2031]: time="2025-01-13T21:11:58.214722106Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:58.217359 containerd[2031]: time="2025-01-13T21:11:58.217291714Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650946" Jan 13 21:11:58.218284 containerd[2031]: time="2025-01-13T21:11:58.218210710Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:58.222036 containerd[2031]: time="2025-01-13T21:11:58.221718982Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.827127865s" Jan 13 21:11:58.222036 containerd[2031]: time="2025-01-13T21:11:58.221783782Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 21:11:58.227052 containerd[2031]: time="2025-01-13T21:11:58.225293746Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 21:11:58.227587 containerd[2031]: time="2025-01-13T21:11:58.227493550Z" level=info msg="CreateContainer within sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:11:58.249608 containerd[2031]: time="2025-01-13T21:11:58.249537910Z" level=info msg="CreateContainer within sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30\"" Jan 13 21:11:58.250553 containerd[2031]: time="2025-01-13T21:11:58.250280614Z" level=info msg="StartContainer for \"ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30\"" Jan 13 21:11:58.306286 systemd[1]: run-containerd-runc-k8s.io-ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30-runc.6W5xO0.mount: Deactivated successfully. Jan 13 21:11:58.315365 systemd[1]: Started cri-containerd-ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30.scope - libcontainer container ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30. Jan 13 21:11:58.371331 containerd[2031]: time="2025-01-13T21:11:58.371269414Z" level=info msg="StartContainer for \"ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30\" returns successfully" Jan 13 21:11:58.389475 systemd[1]: cri-containerd-ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30.scope: Deactivated successfully. Jan 13 21:11:59.243512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30-rootfs.mount: Deactivated successfully. 
Jan 13 21:11:59.726487 containerd[2031]: time="2025-01-13T21:11:59.725932789Z" level=info msg="shim disconnected" id=ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30 namespace=k8s.io Jan 13 21:11:59.726487 containerd[2031]: time="2025-01-13T21:11:59.726037237Z" level=warning msg="cleaning up after shim disconnected" id=ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30 namespace=k8s.io Jan 13 21:11:59.726487 containerd[2031]: time="2025-01-13T21:11:59.726059257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:00.264104 containerd[2031]: time="2025-01-13T21:12:00.262738740Z" level=info msg="CreateContainer within sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:12:00.290329 containerd[2031]: time="2025-01-13T21:12:00.290203716Z" level=info msg="CreateContainer within sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34\"" Jan 13 21:12:00.296943 containerd[2031]: time="2025-01-13T21:12:00.294559164Z" level=info msg="StartContainer for \"ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34\"" Jan 13 21:12:00.403312 systemd[1]: Started cri-containerd-ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34.scope - libcontainer container ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34. Jan 13 21:12:00.490929 containerd[2031]: time="2025-01-13T21:12:00.490862461Z" level=info msg="StartContainer for \"ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34\" returns successfully" Jan 13 21:12:00.525223 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:12:00.525805 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:12:00.525949 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:12:00.537720 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:12:00.540561 systemd[1]: cri-containerd-ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34.scope: Deactivated successfully. Jan 13 21:12:00.595824 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:12:00.607858 containerd[2031]: time="2025-01-13T21:12:00.607478234Z" level=info msg="shim disconnected" id=ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34 namespace=k8s.io Jan 13 21:12:00.607858 containerd[2031]: time="2025-01-13T21:12:00.607583606Z" level=warning msg="cleaning up after shim disconnected" id=ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34 namespace=k8s.io Jan 13 21:12:00.607858 containerd[2031]: time="2025-01-13T21:12:00.607609190Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:00.609985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34-rootfs.mount: Deactivated successfully. 
Jan 13 21:12:01.261589 containerd[2031]: time="2025-01-13T21:12:01.261500341Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:01.266598 containerd[2031]: time="2025-01-13T21:12:01.265735429Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138286" Jan 13 21:12:01.274784 containerd[2031]: time="2025-01-13T21:12:01.274660093Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:01.284065 containerd[2031]: time="2025-01-13T21:12:01.281859109Z" level=info msg="CreateContainer within sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:12:01.285284 containerd[2031]: time="2025-01-13T21:12:01.285027181Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.059593743s" Jan 13 21:12:01.285284 containerd[2031]: time="2025-01-13T21:12:01.285116089Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 21:12:01.288479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971994651.mount: Deactivated successfully. Jan 13 21:12:01.296210 containerd[2031]: time="2025-01-13T21:12:01.292897189Z" level=info msg="CreateContainer within sandbox \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 21:12:01.331617 containerd[2031]: time="2025-01-13T21:12:01.331433485Z" level=info msg="CreateContainer within sandbox \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\"" Jan 13 21:12:01.334262 containerd[2031]: time="2025-01-13T21:12:01.332333593Z" level=info msg="StartContainer for \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\"" Jan 13 21:12:01.341649 containerd[2031]: time="2025-01-13T21:12:01.341580337Z" level=info msg="CreateContainer within sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f\"" Jan 13 21:12:01.344061 containerd[2031]: time="2025-01-13T21:12:01.343241641Z" level=info msg="StartContainer for \"2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f\"" Jan 13 21:12:01.421560 systemd[1]: Started cri-containerd-2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f.scope - libcontainer container 2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f. 
Jan 13 21:12:01.425416 systemd[1]: Started cri-containerd-5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7.scope - libcontainer container 5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7. Jan 13 21:12:01.501394 containerd[2031]: time="2025-01-13T21:12:01.501319406Z" level=info msg="StartContainer for \"2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f\" returns successfully" Jan 13 21:12:01.510338 systemd[1]: cri-containerd-2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f.scope: Deactivated successfully. Jan 13 21:12:01.525968 containerd[2031]: time="2025-01-13T21:12:01.523770950Z" level=info msg="StartContainer for \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\" returns successfully" Jan 13 21:12:01.676231 containerd[2031]: time="2025-01-13T21:12:01.675963015Z" level=info msg="shim disconnected" id=2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f namespace=k8s.io Jan 13 21:12:01.676231 containerd[2031]: time="2025-01-13T21:12:01.676087095Z" level=warning msg="cleaning up after shim disconnected" id=2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f namespace=k8s.io Jan 13 21:12:01.676231 containerd[2031]: time="2025-01-13T21:12:01.676133547Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:02.297603 containerd[2031]: time="2025-01-13T21:12:02.297136394Z" level=info msg="CreateContainer within sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:12:02.327439 containerd[2031]: time="2025-01-13T21:12:02.327364898Z" level=info msg="CreateContainer within sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6\"" Jan 13 21:12:02.327687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747084951.mount: Deactivated successfully. Jan 13 21:12:02.329697 containerd[2031]: time="2025-01-13T21:12:02.329634458Z" level=info msg="StartContainer for \"e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6\"" Jan 13 21:12:02.443176 systemd[1]: Started cri-containerd-e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6.scope - libcontainer container e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6. Jan 13 21:12:02.556590 kubelet[3497]: I0113 21:12:02.556232 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-k9c7q" podStartSLOduration=2.940192144 podStartE2EDuration="15.556208583s" podCreationTimestamp="2025-01-13 21:11:47 +0000 UTC" firstStartedPulling="2025-01-13 21:11:48.671372426 +0000 UTC m=+5.867250030" lastFinishedPulling="2025-01-13 21:12:01.287388877 +0000 UTC m=+18.483266469" observedRunningTime="2025-01-13 21:12:02.421912407 +0000 UTC m=+19.617790035" watchObservedRunningTime="2025-01-13 21:12:02.556208583 +0000 UTC m=+19.752086187" Jan 13 21:12:02.585766 containerd[2031]: time="2025-01-13T21:12:02.585608295Z" level=info msg="StartContainer for \"e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6\" returns successfully" Jan 13 21:12:02.586691 systemd[1]: cri-containerd-e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6.scope: Deactivated successfully. 
Jan 13 21:12:02.651339 containerd[2031]: time="2025-01-13T21:12:02.651116176Z" level=info msg="shim disconnected" id=e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6 namespace=k8s.io Jan 13 21:12:02.652449 containerd[2031]: time="2025-01-13T21:12:02.652373200Z" level=warning msg="cleaning up after shim disconnected" id=e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6 namespace=k8s.io Jan 13 21:12:02.652449 containerd[2031]: time="2025-01-13T21:12:02.652431976Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:03.288352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6-rootfs.mount: Deactivated successfully. Jan 13 21:12:03.305415 containerd[2031]: time="2025-01-13T21:12:03.305330379Z" level=info msg="CreateContainer within sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:12:03.367945 containerd[2031]: time="2025-01-13T21:12:03.367731255Z" level=info msg="CreateContainer within sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\"" Jan 13 21:12:03.370712 containerd[2031]: time="2025-01-13T21:12:03.370629603Z" level=info msg="StartContainer for \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\"" Jan 13 21:12:03.474388 systemd[1]: Started cri-containerd-7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d.scope - libcontainer container 7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d. Jan 13 21:12:03.525963 containerd[2031]: time="2025-01-13T21:12:03.525872008Z" level=info msg="StartContainer for \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\" returns successfully" Jan 13 21:12:03.675048 kubelet[3497]: I0113 21:12:03.674368 3497 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 21:12:03.754850 kubelet[3497]: W0113 21:12:03.754588 3497 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object Jan 13 21:12:03.754850 kubelet[3497]: E0113 21:12:03.754666 3497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-25-188\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-25-188' and this object" logger="UnhandledError" Jan 13 21:12:03.764736 systemd[1]: Created slice kubepods-burstable-pod66dc399e_addb_4e1d_ba61_484a84bce32f.slice - libcontainer container kubepods-burstable-pod66dc399e_addb_4e1d_ba61_484a84bce32f.slice. Jan 13 21:12:03.781202 systemd[1]: Created slice kubepods-burstable-pod0439a085_f194_4662_8e1c_9024115b788b.slice - libcontainer container kubepods-burstable-pod0439a085_f194_4662_8e1c_9024115b788b.slice. 
Jan 13 21:12:03.801834 kubelet[3497]: I0113 21:12:03.801657 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm7rm\" (UniqueName: \"kubernetes.io/projected/66dc399e-addb-4e1d-ba61-484a84bce32f-kube-api-access-nm7rm\") pod \"coredns-6f6b679f8f-gvlsz\" (UID: \"66dc399e-addb-4e1d-ba61-484a84bce32f\") " pod="kube-system/coredns-6f6b679f8f-gvlsz" Jan 13 21:12:03.807023 kubelet[3497]: I0113 21:12:03.806731 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0439a085-f194-4662-8e1c-9024115b788b-config-volume\") pod \"coredns-6f6b679f8f-zt9c8\" (UID: \"0439a085-f194-4662-8e1c-9024115b788b\") " pod="kube-system/coredns-6f6b679f8f-zt9c8" Jan 13 21:12:03.807023 kubelet[3497]: I0113 21:12:03.806900 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkfz5\" (UniqueName: \"kubernetes.io/projected/0439a085-f194-4662-8e1c-9024115b788b-kube-api-access-hkfz5\") pod \"coredns-6f6b679f8f-zt9c8\" (UID: \"0439a085-f194-4662-8e1c-9024115b788b\") " pod="kube-system/coredns-6f6b679f8f-zt9c8" Jan 13 21:12:03.807650 kubelet[3497]: I0113 21:12:03.807307 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66dc399e-addb-4e1d-ba61-484a84bce32f-config-volume\") pod \"coredns-6f6b679f8f-gvlsz\" (UID: \"66dc399e-addb-4e1d-ba61-484a84bce32f\") " pod="kube-system/coredns-6f6b679f8f-gvlsz" Jan 13 21:12:04.909276 kubelet[3497]: E0113 21:12:04.909208 3497 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:12:04.909276 kubelet[3497]: E0113 21:12:04.909208 3497 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:12:04.909276 kubelet[3497]: E0113 21:12:04.909348 3497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0439a085-f194-4662-8e1c-9024115b788b-config-volume podName:0439a085-f194-4662-8e1c-9024115b788b nodeName:}" failed. No retries permitted until 2025-01-13 21:12:05.409310579 +0000 UTC m=+22.605188171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0439a085-f194-4662-8e1c-9024115b788b-config-volume") pod "coredns-6f6b679f8f-zt9c8" (UID: "0439a085-f194-4662-8e1c-9024115b788b") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:12:04.909276 kubelet[3497]: E0113 21:12:04.909380 3497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66dc399e-addb-4e1d-ba61-484a84bce32f-config-volume podName:66dc399e-addb-4e1d-ba61-484a84bce32f nodeName:}" failed. No retries permitted until 2025-01-13 21:12:05.409364243 +0000 UTC m=+22.605241847 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/66dc399e-addb-4e1d-ba61-484a84bce32f-config-volume") pod "coredns-6f6b679f8f-gvlsz" (UID: "66dc399e-addb-4e1d-ba61-484a84bce32f") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:12:05.575871 containerd[2031]: time="2025-01-13T21:12:05.575628186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvlsz,Uid:66dc399e-addb-4e1d-ba61-484a84bce32f,Namespace:kube-system,Attempt:0,}" Jan 13 21:12:05.621053 containerd[2031]: time="2025-01-13T21:12:05.620544606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zt9c8,Uid:0439a085-f194-4662-8e1c-9024115b788b,Namespace:kube-system,Attempt:0,}" Jan 13 21:12:06.605869 systemd-networkd[1935]: cilium_host: Link UP Jan 13 21:12:06.606458 systemd-networkd[1935]: cilium_net: Link UP Jan 13 21:12:06.606869 systemd-networkd[1935]: cilium_net: Gained carrier Jan 13 21:12:06.606923 (udev-worker)[4334]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:12:06.608244 systemd-networkd[1935]: cilium_host: Gained carrier Jan 13 21:12:06.609546 (udev-worker)[4271]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:12:06.794439 (udev-worker)[4344]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:12:06.804462 systemd-networkd[1935]: cilium_vxlan: Link UP Jan 13 21:12:06.804483 systemd-networkd[1935]: cilium_vxlan: Gained carrier Jan 13 21:12:07.328647 kernel: NET: Registered PF_ALG protocol family Jan 13 21:12:07.367219 systemd-networkd[1935]: cilium_host: Gained IPv6LL Jan 13 21:12:07.559272 systemd-networkd[1935]: cilium_net: Gained IPv6LL Jan 13 21:12:08.071228 systemd-networkd[1935]: cilium_vxlan: Gained IPv6LL Jan 13 21:12:08.717929 systemd-networkd[1935]: lxc_health: Link UP Jan 13 21:12:08.721956 systemd-networkd[1935]: lxc_health: Gained carrier Jan 13 21:12:09.138894 systemd-networkd[1935]: lxc97a7ead0cd7b: Link UP Jan 13 21:12:09.146104 kernel: eth0: renamed from tmpeea5a Jan 13 21:12:09.153089 systemd-networkd[1935]: lxc97a7ead0cd7b: Gained carrier Jan 13 21:12:09.213809 systemd-networkd[1935]: lxcd2f24dfec946: Link UP Jan 13 21:12:09.221232 kernel: eth0: renamed from tmp0e683 Jan 13 21:12:09.229362 systemd-networkd[1935]: lxcd2f24dfec946: Gained carrier Jan 13 21:12:09.991391 systemd-networkd[1935]: lxc_health: Gained IPv6LL Jan 13 21:12:10.167968 kubelet[3497]: I0113 21:12:10.167836 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n26kq" podStartSLOduration=13.333831468 podStartE2EDuration="23.167812293s" podCreationTimestamp="2025-01-13 21:11:47 +0000 UTC" firstStartedPulling="2025-01-13 21:11:48.390698989 +0000 UTC m=+5.586576593" lastFinishedPulling="2025-01-13 21:11:58.224679802 +0000 UTC m=+15.420557418" observedRunningTime="2025-01-13 21:12:04.356249836 +0000 UTC m=+21.552127464" watchObservedRunningTime="2025-01-13 21:12:10.167812293 +0000 UTC m=+27.363689897" Jan 13 21:12:10.439436 systemd-networkd[1935]: lxc97a7ead0cd7b: Gained IPv6LL Jan 13 21:12:11.271249 systemd-networkd[1935]: lxcd2f24dfec946: Gained IPv6LL
Jan 13 21:12:14.193755 ntpd[1989]: Listen normally on 8 cilium_host 192.168.0.226:123 Jan 13 21:12:14.193905 ntpd[1989]: Listen normally on 9 cilium_net [fe80::70dc:65ff:fe5d:6b7f%4]:123 Jan 13 21:12:14.194669 ntpd[1989]: 13 Jan 21:12:14 ntpd[1989]: Listen normally on 8 cilium_host 192.168.0.226:123 Jan 13 21:12:14.194669 ntpd[1989]: 13 Jan 21:12:14 ntpd[1989]: Listen normally on 9 cilium_net [fe80::70dc:65ff:fe5d:6b7f%4]:123 Jan 13 21:12:14.194936 ntpd[1989]: Listen normally on 10 cilium_host [fe80::d848:35ff:fee2:fd0b%5]:123 Jan 13 21:12:14.195132 ntpd[1989]: Listen normally on 11 cilium_vxlan [fe80::d479:68ff:fec9:1a9c%6]:123 Jan 13 21:12:14.195288 ntpd[1989]: Listen normally on 12 lxc_health [fe80::b836:a5ff:feda:5450%8]:123 Jan 13 21:12:14.195449 ntpd[1989]: Listen normally on 13 lxc97a7ead0cd7b [fe80::2066:adff:fe6a:67d1%10]:123 Jan 13 21:12:14.195803 ntpd[1989]: 13 Jan 21:12:14 ntpd[1989]: Listen normally on 10 cilium_host [fe80::d848:35ff:fee2:fd0b%5]:123 Jan 13 21:12:14.195803 ntpd[1989]: 13 Jan 21:12:14 ntpd[1989]: Listen normally on 11 cilium_vxlan [fe80::d479:68ff:fec9:1a9c%6]:123 Jan 13 21:12:14.195803 ntpd[1989]: 13 Jan 21:12:14 ntpd[1989]: Listen normally on 12 lxc_health [fe80::b836:a5ff:feda:5450%8]:123 Jan 13 21:12:14.195803 ntpd[1989]: 13 Jan 21:12:14 ntpd[1989]: Listen normally on 13 lxc97a7ead0cd7b [fe80::2066:adff:fe6a:67d1%10]:123 Jan 13 21:12:14.195982 ntpd[1989]: Listen normally on 14 lxcd2f24dfec946 [fe80::6cec:76ff:fe48:bc13%12]:123 Jan 13 21:12:14.196985 ntpd[1989]: 13 Jan 21:12:14 ntpd[1989]: Listen normally on 14 lxcd2f24dfec946 [fe80::6cec:76ff:fe48:bc13%12]:123
Jan 13 21:12:14.816142 kubelet[3497]: I0113 21:12:14.815123 3497 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 21:12:18.558551 containerd[2031]: time="2025-01-13T21:12:18.558266503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:12:18.558551 containerd[2031]: time="2025-01-13T21:12:18.558392323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:12:18.558551 containerd[2031]: time="2025-01-13T21:12:18.558430507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:18.560566 containerd[2031]: time="2025-01-13T21:12:18.558584335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:18.621381 containerd[2031]: time="2025-01-13T21:12:18.620324767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:12:18.621381 containerd[2031]: time="2025-01-13T21:12:18.620444203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:12:18.621381 containerd[2031]: time="2025-01-13T21:12:18.620568115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:18.621381 containerd[2031]: time="2025-01-13T21:12:18.621200875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:18.664219 systemd[1]: Started cri-containerd-eea5acc213edf6593515c12acad86a6460399319de68300f882040bef8a80364.scope - libcontainer container eea5acc213edf6593515c12acad86a6460399319de68300f882040bef8a80364. Jan 13 21:12:18.689967 systemd[1]: Started cri-containerd-0e6836448050445279ea8f6be6b43bdbc1b9dc8cbef0b2db6370ea40c14e75f1.scope - libcontainer container 0e6836448050445279ea8f6be6b43bdbc1b9dc8cbef0b2db6370ea40c14e75f1.
Jan 13 21:12:18.783781 containerd[2031]: time="2025-01-13T21:12:18.783695528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zt9c8,Uid:0439a085-f194-4662-8e1c-9024115b788b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e6836448050445279ea8f6be6b43bdbc1b9dc8cbef0b2db6370ea40c14e75f1\"" Jan 13 21:12:18.790055 containerd[2031]: time="2025-01-13T21:12:18.789947876Z" level=info msg="CreateContainer within sandbox \"0e6836448050445279ea8f6be6b43bdbc1b9dc8cbef0b2db6370ea40c14e75f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:12:18.850161 containerd[2031]: time="2025-01-13T21:12:18.847983332Z" level=info msg="CreateContainer within sandbox \"0e6836448050445279ea8f6be6b43bdbc1b9dc8cbef0b2db6370ea40c14e75f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8be8a6fe421468fd8c9b5e2b6114ae8c9711321fa313b276c483a0d888e76ec\"" Jan 13 21:12:18.857690 containerd[2031]: time="2025-01-13T21:12:18.857259440Z" level=info msg="StartContainer for \"f8be8a6fe421468fd8c9b5e2b6114ae8c9711321fa313b276c483a0d888e76ec\"" Jan 13 21:12:18.863934 containerd[2031]: time="2025-01-13T21:12:18.862152332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gvlsz,Uid:66dc399e-addb-4e1d-ba61-484a84bce32f,Namespace:kube-system,Attempt:0,} returns sandbox id \"eea5acc213edf6593515c12acad86a6460399319de68300f882040bef8a80364\"" Jan 13 21:12:18.874707 containerd[2031]: time="2025-01-13T21:12:18.874631492Z" level=info msg="CreateContainer within sandbox \"eea5acc213edf6593515c12acad86a6460399319de68300f882040bef8a80364\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:12:18.915490 containerd[2031]: time="2025-01-13T21:12:18.915404672Z" level=info msg="CreateContainer within sandbox \"eea5acc213edf6593515c12acad86a6460399319de68300f882040bef8a80364\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1da6364b9314fd14052adc9534193036aefad91e66fe7f07cf909dc07769885b\"" Jan 13 21:12:18.917585 containerd[2031]: time="2025-01-13T21:12:18.917434244Z" level=info msg="StartContainer for \"1da6364b9314fd14052adc9534193036aefad91e66fe7f07cf909dc07769885b\"" Jan 13 21:12:18.951352 systemd[1]: Started cri-containerd-f8be8a6fe421468fd8c9b5e2b6114ae8c9711321fa313b276c483a0d888e76ec.scope - libcontainer container f8be8a6fe421468fd8c9b5e2b6114ae8c9711321fa313b276c483a0d888e76ec. Jan 13 21:12:19.011497 systemd[1]: Started cri-containerd-1da6364b9314fd14052adc9534193036aefad91e66fe7f07cf909dc07769885b.scope - libcontainer container 1da6364b9314fd14052adc9534193036aefad91e66fe7f07cf909dc07769885b. 
Jan 13 21:12:19.089527 containerd[2031]: time="2025-01-13T21:12:19.089443817Z" level=info msg="StartContainer for \"f8be8a6fe421468fd8c9b5e2b6114ae8c9711321fa313b276c483a0d888e76ec\" returns successfully" Jan 13 21:12:19.141777 containerd[2031]: time="2025-01-13T21:12:19.139516386Z" level=info msg="StartContainer for \"1da6364b9314fd14052adc9534193036aefad91e66fe7f07cf909dc07769885b\" returns successfully" Jan 13 21:12:19.401248 kubelet[3497]: I0113 21:12:19.400875 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-zt9c8" podStartSLOduration=32.400851319 podStartE2EDuration="32.400851319s" podCreationTimestamp="2025-01-13 21:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:12:19.399248179 +0000 UTC m=+36.595125795" watchObservedRunningTime="2025-01-13 21:12:19.400851319 +0000 UTC m=+36.596728923" Jan 13 21:12:19.456957 kubelet[3497]: I0113 21:12:19.456850 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-gvlsz" podStartSLOduration=32.456824551 podStartE2EDuration="32.456824551s" podCreationTimestamp="2025-01-13 21:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:12:19.427364047 +0000 UTC m=+36.623241675" watchObservedRunningTime="2025-01-13 21:12:19.456824551 +0000 UTC m=+36.652702167" Jan 13 21:12:19.577403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3728554145.mount: Deactivated successfully. Jan 13 21:12:20.400520 systemd[1]: Started sshd@9-172.31.25.188:22-139.178.89.65:42336.service - OpenSSH per-connection server daemon (139.178.89.65:42336). Jan 13 21:12:20.583730 sshd[4869]: Accepted publickey for core from 139.178.89.65 port 42336 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:20.586792 sshd[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:20.594540 systemd-logind[1997]: New session 10 of user core. Jan 13 21:12:20.603270 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:12:20.866402 sshd[4869]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:20.873273 systemd[1]: sshd@9-172.31.25.188:22-139.178.89.65:42336.service: Deactivated successfully. Jan 13 21:12:20.879634 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:12:20.880929 systemd-logind[1997]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:12:20.882856 systemd-logind[1997]: Removed session 10. Jan 13 21:12:25.906534 systemd[1]: Started sshd@10-172.31.25.188:22-139.178.89.65:60796.service - OpenSSH per-connection server daemon (139.178.89.65:60796). Jan 13 21:12:26.084908 sshd[4887]: Accepted publickey for core from 139.178.89.65 port 60796 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:26.087930 sshd[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:26.097736 systemd-logind[1997]: New session 11 of user core. Jan 13 21:12:26.103316 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:12:26.362707 sshd[4887]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:26.369186 systemd[1]: sshd@10-172.31.25.188:22-139.178.89.65:60796.service: Deactivated successfully. 
Jan 13 21:12:26.375440 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:12:26.377356 systemd-logind[1997]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:12:26.379924 systemd-logind[1997]: Removed session 11. Jan 13 21:12:31.403562 systemd[1]: Started sshd@11-172.31.25.188:22-139.178.89.65:52600.service - OpenSSH per-connection server daemon (139.178.89.65:52600). Jan 13 21:12:31.589732 sshd[4901]: Accepted publickey for core from 139.178.89.65 port 52600 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:31.593014 sshd[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:31.604538 systemd-logind[1997]: New session 12 of user core. Jan 13 21:12:31.610407 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:12:31.864373 sshd[4901]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:31.871048 systemd-logind[1997]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:12:31.872182 systemd[1]: sshd@11-172.31.25.188:22-139.178.89.65:52600.service: Deactivated successfully. Jan 13 21:12:31.881788 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:12:31.886477 systemd-logind[1997]: Removed session 12. Jan 13 21:12:36.904869 systemd[1]: Started sshd@12-172.31.25.188:22-139.178.89.65:52608.service - OpenSSH per-connection server daemon (139.178.89.65:52608). Jan 13 21:12:37.089298 sshd[4915]: Accepted publickey for core from 139.178.89.65 port 52608 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:37.092219 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:37.102057 systemd-logind[1997]: New session 13 of user core. Jan 13 21:12:37.111407 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:12:37.369972 sshd[4915]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:37.375603 systemd[1]: sshd@12-172.31.25.188:22-139.178.89.65:52608.service: Deactivated successfully. Jan 13 21:12:37.379838 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:12:37.385366 systemd-logind[1997]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:12:37.387173 systemd-logind[1997]: Removed session 13. Jan 13 21:12:37.413525 systemd[1]: Started sshd@13-172.31.25.188:22-139.178.89.65:52610.service - OpenSSH per-connection server daemon (139.178.89.65:52610). Jan 13 21:12:37.598087 sshd[4928]: Accepted publickey for core from 139.178.89.65 port 52610 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:37.600883 sshd[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:37.608408 systemd-logind[1997]: New session 14 of user core. Jan 13 21:12:37.620276 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:12:37.997582 sshd[4928]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:38.005854 systemd[1]: sshd@13-172.31.25.188:22-139.178.89.65:52610.service: Deactivated successfully. Jan 13 21:12:38.012985 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:12:38.019329 systemd-logind[1997]: Session 14 logged out. Waiting for processes to exit. Jan 13 21:12:38.046508 systemd[1]: Started sshd@14-172.31.25.188:22-139.178.89.65:52624.service - OpenSSH per-connection server daemon (139.178.89.65:52624). Jan 13 21:12:38.049643 systemd-logind[1997]: Removed session 14. 
Jan 13 21:12:38.229853 sshd[4939]: Accepted publickey for core from 139.178.89.65 port 52624 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:38.232605 sshd[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:38.242472 systemd-logind[1997]: New session 15 of user core. Jan 13 21:12:38.250287 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:12:38.501361 sshd[4939]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:38.508429 systemd[1]: sshd@14-172.31.25.188:22-139.178.89.65:52624.service: Deactivated successfully. Jan 13 21:12:38.513078 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:12:38.516251 systemd-logind[1997]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:12:38.518529 systemd-logind[1997]: Removed session 15. Jan 13 21:12:43.543597 systemd[1]: Started sshd@15-172.31.25.188:22-139.178.89.65:58276.service - OpenSSH per-connection server daemon (139.178.89.65:58276). Jan 13 21:12:43.728185 sshd[4953]: Accepted publickey for core from 139.178.89.65 port 58276 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:43.731330 sshd[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:43.741880 systemd-logind[1997]: New session 16 of user core. Jan 13 21:12:43.753618 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:12:44.013430 sshd[4953]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:44.020951 systemd[1]: sshd@15-172.31.25.188:22-139.178.89.65:58276.service: Deactivated successfully. Jan 13 21:12:44.021834 systemd-logind[1997]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:12:44.028131 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 21:12:44.034388 systemd-logind[1997]: Removed session 16. Jan 13 21:12:49.052621 systemd[1]: Started sshd@16-172.31.25.188:22-139.178.89.65:58282.service - OpenSSH per-connection server daemon (139.178.89.65:58282). Jan 13 21:12:49.239409 sshd[4969]: Accepted publickey for core from 139.178.89.65 port 58282 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:49.242743 sshd[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:49.252331 systemd-logind[1997]: New session 17 of user core. Jan 13 21:12:49.263407 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:12:49.528439 sshd[4969]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:49.536210 systemd[1]: sshd@16-172.31.25.188:22-139.178.89.65:58282.service: Deactivated successfully. Jan 13 21:12:49.540755 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:12:49.543497 systemd-logind[1997]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:12:49.546086 systemd-logind[1997]: Removed session 17. Jan 13 21:12:54.569515 systemd[1]: Started sshd@17-172.31.25.188:22-139.178.89.65:36730.service - OpenSSH per-connection server daemon (139.178.89.65:36730). Jan 13 21:12:54.742473 sshd[4981]: Accepted publickey for core from 139.178.89.65 port 36730 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:54.745189 sshd[4981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:54.754248 systemd-logind[1997]: New session 18 of user core. Jan 13 21:12:54.762297 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 13 21:12:55.010451 sshd[4981]: pam_unix(sshd:session): session closed for user core
Jan 13 21:12:55.015849 systemd[1]: sshd@17-172.31.25.188:22-139.178.89.65:36730.service: Deactivated successfully.
Jan 13 21:12:55.020528 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:12:55.027980 systemd-logind[1997]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:12:55.030068 systemd-logind[1997]: Removed session 18.
Jan 13 21:13:00.051529 systemd[1]: Started sshd@18-172.31.25.188:22-139.178.89.65:36746.service - OpenSSH per-connection server daemon (139.178.89.65:36746).
Jan 13 21:13:00.230428 sshd[4994]: Accepted publickey for core from 139.178.89.65 port 36746 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:00.233192 sshd[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:00.242256 systemd-logind[1997]: New session 19 of user core.
Jan 13 21:13:00.252277 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:13:00.493358 sshd[4994]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:00.499169 systemd-logind[1997]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:13:00.499706 systemd[1]: sshd@18-172.31.25.188:22-139.178.89.65:36746.service: Deactivated successfully.
Jan 13 21:13:00.506646 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:13:00.511706 systemd-logind[1997]: Removed session 19.
Jan 13 21:13:00.536463 systemd[1]: Started sshd@19-172.31.25.188:22-139.178.89.65:36758.service - OpenSSH per-connection server daemon (139.178.89.65:36758).
Jan 13 21:13:00.705248 sshd[5007]: Accepted publickey for core from 139.178.89.65 port 36758 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:00.708370 sshd[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:00.718199 systemd-logind[1997]: New session 20 of user core.
Jan 13 21:13:00.729321 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 21:13:01.030881 sshd[5007]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:01.037618 systemd[1]: sshd@19-172.31.25.188:22-139.178.89.65:36758.service: Deactivated successfully.
Jan 13 21:13:01.041472 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 21:13:01.043935 systemd-logind[1997]: Session 20 logged out. Waiting for processes to exit.
Jan 13 21:13:01.045951 systemd-logind[1997]: Removed session 20.
Jan 13 21:13:01.081564 systemd[1]: Started sshd@20-172.31.25.188:22-139.178.89.65:60860.service - OpenSSH per-connection server daemon (139.178.89.65:60860).
Jan 13 21:13:01.255774 sshd[5017]: Accepted publickey for core from 139.178.89.65 port 60860 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:01.258744 sshd[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:01.268163 systemd-logind[1997]: New session 21 of user core.
Jan 13 21:13:01.273316 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 21:13:03.996335 sshd[5017]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:04.009768 systemd-logind[1997]: Session 21 logged out. Waiting for processes to exit.
Jan 13 21:13:04.012777 systemd[1]: sshd@20-172.31.25.188:22-139.178.89.65:60860.service: Deactivated successfully.
Jan 13 21:13:04.020959 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 21:13:04.045565 systemd[1]: Started sshd@21-172.31.25.188:22-139.178.89.65:60866.service - OpenSSH per-connection server daemon (139.178.89.65:60866).
Jan 13 21:13:04.050925 systemd-logind[1997]: Removed session 21.
Jan 13 21:13:04.235621 sshd[5036]: Accepted publickey for core from 139.178.89.65 port 60866 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:04.238448 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:04.248505 systemd-logind[1997]: New session 22 of user core.
Jan 13 21:13:04.258352 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 21:13:04.792337 sshd[5036]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:04.798701 systemd[1]: sshd@21-172.31.25.188:22-139.178.89.65:60866.service: Deactivated successfully.
Jan 13 21:13:04.804272 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 21:13:04.808064 systemd-logind[1997]: Session 22 logged out. Waiting for processes to exit.
Jan 13 21:13:04.811614 systemd-logind[1997]: Removed session 22.
Jan 13 21:13:04.835614 systemd[1]: Started sshd@22-172.31.25.188:22-139.178.89.65:60882.service - OpenSSH per-connection server daemon (139.178.89.65:60882).
Jan 13 21:13:05.027117 sshd[5047]: Accepted publickey for core from 139.178.89.65 port 60882 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:05.029965 sshd[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:05.038248 systemd-logind[1997]: New session 23 of user core.
Jan 13 21:13:05.049328 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 21:13:05.286881 sshd[5047]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:05.293167 systemd-logind[1997]: Session 23 logged out. Waiting for processes to exit.
Jan 13 21:13:05.293921 systemd[1]: sshd@22-172.31.25.188:22-139.178.89.65:60882.service: Deactivated successfully.
Jan 13 21:13:05.297907 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 21:13:05.303409 systemd-logind[1997]: Removed session 23.
Jan 13 21:13:10.328564 systemd[1]: Started sshd@23-172.31.25.188:22-139.178.89.65:60898.service - OpenSSH per-connection server daemon (139.178.89.65:60898).
Jan 13 21:13:10.510821 sshd[5059]: Accepted publickey for core from 139.178.89.65 port 60898 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:10.513854 sshd[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:10.523455 systemd-logind[1997]: New session 24 of user core.
Jan 13 21:13:10.532329 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 21:13:10.775752 sshd[5059]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:10.783062 systemd[1]: sshd@23-172.31.25.188:22-139.178.89.65:60898.service: Deactivated successfully.
Jan 13 21:13:10.786573 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 21:13:10.788513 systemd-logind[1997]: Session 24 logged out. Waiting for processes to exit.
Jan 13 21:13:10.791960 systemd-logind[1997]: Removed session 24.
Jan 13 21:13:15.816106 systemd[1]: Started sshd@24-172.31.25.188:22-139.178.89.65:53454.service - OpenSSH per-connection server daemon (139.178.89.65:53454).
Jan 13 21:13:15.998985 sshd[5074]: Accepted publickey for core from 139.178.89.65 port 53454 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:16.002244 sshd[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:16.012399 systemd-logind[1997]: New session 25 of user core.
Jan 13 21:13:16.023328 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 21:13:16.275649 sshd[5074]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:16.282827 systemd[1]: sshd@24-172.31.25.188:22-139.178.89.65:53454.service: Deactivated successfully.
Jan 13 21:13:16.286876 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 21:13:16.288306 systemd-logind[1997]: Session 25 logged out. Waiting for processes to exit.
Jan 13 21:13:16.292250 systemd-logind[1997]: Removed session 25.
Jan 13 21:13:21.316634 systemd[1]: Started sshd@25-172.31.25.188:22-139.178.89.65:56864.service - OpenSSH per-connection server daemon (139.178.89.65:56864).
Jan 13 21:13:21.509196 sshd[5088]: Accepted publickey for core from 139.178.89.65 port 56864 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:21.512030 sshd[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:21.521416 systemd-logind[1997]: New session 26 of user core.
Jan 13 21:13:21.528487 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 21:13:21.773351 sshd[5088]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:21.779115 systemd-logind[1997]: Session 26 logged out. Waiting for processes to exit.
Jan 13 21:13:21.780309 systemd[1]: sshd@25-172.31.25.188:22-139.178.89.65:56864.service: Deactivated successfully.
Jan 13 21:13:21.784236 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 21:13:21.789859 systemd-logind[1997]: Removed session 26.
Jan 13 21:13:26.814520 systemd[1]: Started sshd@26-172.31.25.188:22-139.178.89.65:56872.service - OpenSSH per-connection server daemon (139.178.89.65:56872).
Jan 13 21:13:26.996823 sshd[5101]: Accepted publickey for core from 139.178.89.65 port 56872 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:26.999782 sshd[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:27.008177 systemd-logind[1997]: New session 27 of user core.
Jan 13 21:13:27.019313 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 13 21:13:27.255811 sshd[5101]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:27.262160 systemd[1]: sshd@26-172.31.25.188:22-139.178.89.65:56872.service: Deactivated successfully.
Jan 13 21:13:27.265585 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 21:13:27.267220 systemd-logind[1997]: Session 27 logged out. Waiting for processes to exit.
Jan 13 21:13:27.269299 systemd-logind[1997]: Removed session 27.
Jan 13 21:13:27.304469 systemd[1]: Started sshd@27-172.31.25.188:22-139.178.89.65:56884.service - OpenSSH per-connection server daemon (139.178.89.65:56884).
Jan 13 21:13:27.471835 sshd[5114]: Accepted publickey for core from 139.178.89.65 port 56884 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:27.474602 sshd[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:27.482190 systemd-logind[1997]: New session 28 of user core.
Jan 13 21:13:27.493328 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 13 21:13:30.523469 containerd[2031]: time="2025-01-13T21:13:30.523379788Z" level=info msg="StopContainer for \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\" with timeout 30 (s)"
Jan 13 21:13:30.533422 containerd[2031]: time="2025-01-13T21:13:30.532905268Z" level=info msg="Stop container \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\" with signal terminated"
Jan 13 21:13:30.570314 systemd[1]: cri-containerd-5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7.scope: Deactivated successfully.
Jan 13 21:13:30.574578 containerd[2031]: time="2025-01-13T21:13:30.574061236Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:13:30.595593 containerd[2031]: time="2025-01-13T21:13:30.595524052Z" level=info msg="StopContainer for \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\" with timeout 2 (s)"
Jan 13 21:13:30.596169 containerd[2031]: time="2025-01-13T21:13:30.596122456Z" level=info msg="Stop container \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\" with signal terminated"
Jan 13 21:13:30.612717 systemd-networkd[1935]: lxc_health: Link DOWN
Jan 13 21:13:30.612733 systemd-networkd[1935]: lxc_health: Lost carrier
Jan 13 21:13:30.655565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7-rootfs.mount: Deactivated successfully.
Jan 13 21:13:30.658460 systemd[1]: cri-containerd-7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d.scope: Deactivated successfully.
Jan 13 21:13:30.659943 systemd[1]: cri-containerd-7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d.scope: Consumed 15.597s CPU time.
Jan 13 21:13:30.674297 containerd[2031]: time="2025-01-13T21:13:30.674223041Z" level=info msg="shim disconnected" id=5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7 namespace=k8s.io
Jan 13 21:13:30.674622 containerd[2031]: time="2025-01-13T21:13:30.674592245Z" level=warning msg="cleaning up after shim disconnected" id=5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7 namespace=k8s.io
Jan 13 21:13:30.674752 containerd[2031]: time="2025-01-13T21:13:30.674725433Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:30.715670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d-rootfs.mount: Deactivated successfully.
Jan 13 21:13:30.725046 containerd[2031]: time="2025-01-13T21:13:30.724944569Z" level=info msg="shim disconnected" id=7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d namespace=k8s.io
Jan 13 21:13:30.725046 containerd[2031]: time="2025-01-13T21:13:30.725037341Z" level=warning msg="cleaning up after shim disconnected" id=7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d namespace=k8s.io
Jan 13 21:13:30.725590 containerd[2031]: time="2025-01-13T21:13:30.725059445Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:30.726497 containerd[2031]: time="2025-01-13T21:13:30.725949473Z" level=info msg="StopContainer for \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\" returns successfully"
Jan 13 21:13:30.727021 containerd[2031]: time="2025-01-13T21:13:30.726892781Z" level=info msg="StopPodSandbox for \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\""
Jan 13 21:13:30.727021 containerd[2031]: time="2025-01-13T21:13:30.726965909Z" level=info msg="Container to stop \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:13:30.732075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3-shm.mount: Deactivated successfully.
Jan 13 21:13:30.745567 systemd[1]: cri-containerd-06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3.scope: Deactivated successfully.
Jan 13 21:13:30.766465 containerd[2031]: time="2025-01-13T21:13:30.766208009Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:13:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:13:30.773764 containerd[2031]: time="2025-01-13T21:13:30.773486981Z" level=info msg="StopContainer for \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\" returns successfully"
Jan 13 21:13:30.775785 containerd[2031]: time="2025-01-13T21:13:30.775636409Z" level=info msg="StopPodSandbox for \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\""
Jan 13 21:13:30.775785 containerd[2031]: time="2025-01-13T21:13:30.775825025Z" level=info msg="Container to stop \"2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:13:30.775785 containerd[2031]: time="2025-01-13T21:13:30.775886717Z" level=info msg="Container to stop \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:13:30.775785 containerd[2031]: time="2025-01-13T21:13:30.775913021Z" level=info msg="Container to stop \"ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:13:30.775785 containerd[2031]: time="2025-01-13T21:13:30.775936217Z" level=info msg="Container to stop \"ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:13:30.775785 containerd[2031]: time="2025-01-13T21:13:30.775959161Z" level=info msg="Container to stop \"e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 21:13:30.791779 systemd[1]: cri-containerd-f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc.scope: Deactivated successfully.
Jan 13 21:13:30.813450 containerd[2031]: time="2025-01-13T21:13:30.811348026Z" level=info msg="shim disconnected" id=06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3 namespace=k8s.io
Jan 13 21:13:30.813450 containerd[2031]: time="2025-01-13T21:13:30.813217674Z" level=warning msg="cleaning up after shim disconnected" id=06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3 namespace=k8s.io
Jan 13 21:13:30.813450 containerd[2031]: time="2025-01-13T21:13:30.813240618Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:30.848560 containerd[2031]: time="2025-01-13T21:13:30.848156982Z" level=info msg="TearDown network for sandbox \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\" successfully"
Jan 13 21:13:30.848560 containerd[2031]: time="2025-01-13T21:13:30.848217690Z" level=info msg="StopPodSandbox for \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\" returns successfully"
Jan 13 21:13:30.853380 containerd[2031]: time="2025-01-13T21:13:30.853136730Z" level=info msg="shim disconnected" id=f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc namespace=k8s.io
Jan 13 21:13:30.853540 containerd[2031]: time="2025-01-13T21:13:30.853353462Z" level=warning msg="cleaning up after shim disconnected" id=f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc namespace=k8s.io
Jan 13 21:13:30.853540 containerd[2031]: time="2025-01-13T21:13:30.853447014Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:30.892366 containerd[2031]: time="2025-01-13T21:13:30.892298370Z" level=info msg="TearDown network for sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" successfully"
Jan 13 21:13:30.892366 containerd[2031]: time="2025-01-13T21:13:30.892352070Z" level=info msg="StopPodSandbox for \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" returns successfully"
Jan 13 21:13:30.920119 kubelet[3497]: I0113 21:13:30.918510 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c8a1815-aa70-4a68-80cf-69673d57f4f8-cilium-config-path\") pod \"3c8a1815-aa70-4a68-80cf-69673d57f4f8\" (UID: \"3c8a1815-aa70-4a68-80cf-69673d57f4f8\") "
Jan 13 21:13:30.920119 kubelet[3497]: I0113 21:13:30.918613 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv668\" (UniqueName: \"kubernetes.io/projected/3c8a1815-aa70-4a68-80cf-69673d57f4f8-kube-api-access-wv668\") pod \"3c8a1815-aa70-4a68-80cf-69673d57f4f8\" (UID: \"3c8a1815-aa70-4a68-80cf-69673d57f4f8\") "
Jan 13 21:13:30.927657 kubelet[3497]: I0113 21:13:30.926910 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8a1815-aa70-4a68-80cf-69673d57f4f8-kube-api-access-wv668" (OuterVolumeSpecName: "kube-api-access-wv668") pod "3c8a1815-aa70-4a68-80cf-69673d57f4f8" (UID: "3c8a1815-aa70-4a68-80cf-69673d57f4f8"). InnerVolumeSpecName "kube-api-access-wv668". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:13:30.928099 kubelet[3497]: I0113 21:13:30.927982 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c8a1815-aa70-4a68-80cf-69673d57f4f8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3c8a1815-aa70-4a68-80cf-69673d57f4f8" (UID: "3c8a1815-aa70-4a68-80cf-69673d57f4f8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:13:31.019819 kubelet[3497]: I0113 21:13:31.019747 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cni-path\") pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.019819 kubelet[3497]: I0113 21:13:31.019818 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-host-proc-sys-kernel\") pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.020278 kubelet[3497]: I0113 21:13:31.019861 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-hostproc\") pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.020278 kubelet[3497]: I0113 21:13:31.019897 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-host-proc-sys-net\") pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.020278 kubelet[3497]: I0113 21:13:31.019968 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-hubble-tls\") pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.020278 kubelet[3497]: I0113 21:13:31.020026 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-xtables-lock\") pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.020278 kubelet[3497]: I0113 21:13:31.020066 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cilium-cgroup\") pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.020278 kubelet[3497]: I0113 21:13:31.020099 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-bpf-maps\") pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.020821 kubelet[3497]: I0113 21:13:31.020138 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnrx4\" (UniqueName: \"kubernetes.io/projected/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-kube-api-access-pnrx4\") 
pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.020821 kubelet[3497]: I0113 21:13:31.020174 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-lib-modules\") pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.020821 kubelet[3497]: I0113 21:13:31.020207 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cilium-run\") pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.020821 kubelet[3497]: I0113 21:13:31.020246 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-clustermesh-secrets\") pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.020821 kubelet[3497]: I0113 21:13:31.020293 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cilium-config-path\") pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.020821 kubelet[3497]: I0113 21:13:31.020326 3497 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-etc-cni-netd\") pod \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\" (UID: \"8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6\") " Jan 13 21:13:31.021741 kubelet[3497]: I0113 21:13:31.020404 3497 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c8a1815-aa70-4a68-80cf-69673d57f4f8-cilium-config-path\") on node \"ip-172-31-25-188\" DevicePath \"\"" Jan 13 21:13:31.021741 kubelet[3497]: I0113 21:13:31.020434 3497 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wv668\" (UniqueName: \"kubernetes.io/projected/3c8a1815-aa70-4a68-80cf-69673d57f4f8-kube-api-access-wv668\") on node \"ip-172-31-25-188\" DevicePath \"\"" Jan 13 21:13:31.021741 kubelet[3497]: I0113 21:13:31.020512 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:31.021741 kubelet[3497]: I0113 21:13:31.020588 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cni-path" (OuterVolumeSpecName: "cni-path") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:31.021741 kubelet[3497]: I0113 21:13:31.020638 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:31.022284 kubelet[3497]: I0113 21:13:31.020679 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-hostproc" (OuterVolumeSpecName: "hostproc") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:31.022284 kubelet[3497]: I0113 21:13:31.020714 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:31.022742 kubelet[3497]: I0113 21:13:31.022687 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:31.023043 kubelet[3497]: I0113 21:13:31.022879 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:31.023043 kubelet[3497]: I0113 21:13:31.022954 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:31.023764 kubelet[3497]: I0113 21:13:31.023244 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:31.023764 kubelet[3497]: I0113 21:13:31.023316 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:13:31.030667 kubelet[3497]: I0113 21:13:31.028451 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-kube-api-access-pnrx4" (OuterVolumeSpecName: "kube-api-access-pnrx4") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "kube-api-access-pnrx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:13:31.034131 kubelet[3497]: I0113 21:13:31.034061 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:13:31.034651 kubelet[3497]: I0113 21:13:31.034594 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:13:31.035039 kubelet[3497]: I0113 21:13:31.034954 3497 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" (UID: "8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:13:31.120045 systemd[1]: Removed slice kubepods-burstable-pod8e82f6f9_ecfd_4a2a_82fb_f2fdea61c7e6.slice - libcontainer container kubepods-burstable-pod8e82f6f9_ecfd_4a2a_82fb_f2fdea61c7e6.slice. Jan 13 21:13:31.121189 kubelet[3497]: I0113 21:13:31.120853 3497 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-hostproc\") on node \"ip-172-31-25-188\" DevicePath \"\"" Jan 13 21:13:31.121189 kubelet[3497]: I0113 21:13:31.120891 3497 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-host-proc-sys-net\") on node \"ip-172-31-25-188\" DevicePath \"\"" Jan 13 21:13:31.121189 kubelet[3497]: I0113 21:13:31.120914 3497 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-hubble-tls\") on node \"ip-172-31-25-188\" DevicePath \"\"" Jan 13 21:13:31.121189 kubelet[3497]: I0113 21:13:31.120935 3497 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-xtables-lock\") on node \"ip-172-31-25-188\" DevicePath \"\"" Jan 13 21:13:31.121189 kubelet[3497]: I0113 21:13:31.120960 3497 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cilium-cgroup\") on node \"ip-172-31-25-188\" DevicePath \"\"" Jan 13 21:13:31.120340 systemd[1]: kubepods-burstable-pod8e82f6f9_ecfd_4a2a_82fb_f2fdea61c7e6.slice: Consumed 15.761s CPU time. 
Jan 13 21:13:31.121594 kubelet[3497]: I0113 21:13:31.120981 3497 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-bpf-maps\") on node \"ip-172-31-25-188\" DevicePath \"\""
Jan 13 21:13:31.121594 kubelet[3497]: I0113 21:13:31.121328 3497 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pnrx4\" (UniqueName: \"kubernetes.io/projected/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-kube-api-access-pnrx4\") on node \"ip-172-31-25-188\" DevicePath \"\""
Jan 13 21:13:31.121594 kubelet[3497]: I0113 21:13:31.121351 3497 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-lib-modules\") on node \"ip-172-31-25-188\" DevicePath \"\""
Jan 13 21:13:31.121594 kubelet[3497]: I0113 21:13:31.121373 3497 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cilium-run\") on node \"ip-172-31-25-188\" DevicePath \"\""
Jan 13 21:13:31.121594 kubelet[3497]: I0113 21:13:31.121394 3497 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-clustermesh-secrets\") on node \"ip-172-31-25-188\" DevicePath \"\""
Jan 13 21:13:31.121594 kubelet[3497]: I0113 21:13:31.121415 3497 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cilium-config-path\") on node \"ip-172-31-25-188\" DevicePath \"\""
Jan 13 21:13:31.121594 kubelet[3497]: I0113 21:13:31.121434 3497 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-etc-cni-netd\") on node \"ip-172-31-25-188\" DevicePath \"\""
Jan 13 21:13:31.121594 kubelet[3497]: I0113 21:13:31.121453 3497 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-cni-path\") on node \"ip-172-31-25-188\" DevicePath \"\""
Jan 13 21:13:31.122047 kubelet[3497]: I0113 21:13:31.121471 3497 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6-host-proc-sys-kernel\") on node \"ip-172-31-25-188\" DevicePath \"\""
Jan 13 21:13:31.124596 systemd[1]: Removed slice kubepods-besteffort-pod3c8a1815_aa70_4a68_80cf_69673d57f4f8.slice - libcontainer container kubepods-besteffort-pod3c8a1815_aa70_4a68_80cf_69673d57f4f8.slice.
Jan 13 21:13:31.528774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3-rootfs.mount: Deactivated successfully.
Jan 13 21:13:31.528950 systemd[1]: var-lib-kubelet-pods-3c8a1815\x2daa70\x2d4a68\x2d80cf\x2d69673d57f4f8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwv668.mount: Deactivated successfully.
Jan 13 21:13:31.529135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc-rootfs.mount: Deactivated successfully.
Jan 13 21:13:31.529277 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc-shm.mount: Deactivated successfully.
Jan 13 21:13:31.529412 systemd[1]: var-lib-kubelet-pods-8e82f6f9\x2decfd\x2d4a2a\x2d82fb\x2df2fdea61c7e6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpnrx4.mount: Deactivated successfully.
Jan 13 21:13:31.529552 systemd[1]: var-lib-kubelet-pods-8e82f6f9\x2decfd\x2d4a2a\x2d82fb\x2df2fdea61c7e6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 21:13:31.529681 systemd[1]: var-lib-kubelet-pods-8e82f6f9\x2decfd\x2d4a2a\x2d82fb\x2df2fdea61c7e6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 21:13:31.587604 kubelet[3497]: I0113 21:13:31.587448 3497 scope.go:117] "RemoveContainer" containerID="5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7"
Jan 13 21:13:31.591434 containerd[2031]: time="2025-01-13T21:13:31.591148277Z" level=info msg="RemoveContainer for \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\""
Jan 13 21:13:31.606541 containerd[2031]: time="2025-01-13T21:13:31.606297174Z" level=info msg="RemoveContainer for \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\" returns successfully"
Jan 13 21:13:31.611100 kubelet[3497]: I0113 21:13:31.610201 3497 scope.go:117] "RemoveContainer" containerID="5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7"
Jan 13 21:13:31.614562 containerd[2031]: time="2025-01-13T21:13:31.614396454Z" level=error msg="ContainerStatus for \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\": not found"
Jan 13 21:13:31.627313 kubelet[3497]: E0113 21:13:31.627258 3497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\": not found" containerID="5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7"
Jan 13 21:13:31.627475 kubelet[3497]: I0113 21:13:31.627332 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7"} err="failed to get container status \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e4e3451ec167ab7cd2606fc0b0df3a939364c0244c268cb66bb9a519d6c30b7\": not found"
Jan 13 21:13:31.627732 kubelet[3497]: I0113 21:13:31.627473 3497 scope.go:117] "RemoveContainer" containerID="7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d"
Jan 13 21:13:31.635899 containerd[2031]: time="2025-01-13T21:13:31.635848794Z" level=info msg="RemoveContainer for \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\""
Jan 13 21:13:31.645583 containerd[2031]: time="2025-01-13T21:13:31.644820522Z" level=info msg="RemoveContainer for \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\" returns successfully"
Jan 13 21:13:31.645764 kubelet[3497]: I0113 21:13:31.645388 3497 scope.go:117] "RemoveContainer" containerID="e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6"
Jan 13 21:13:31.650795 containerd[2031]: time="2025-01-13T21:13:31.650721426Z" level=info msg="RemoveContainer for \"e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6\""
Jan 13 21:13:31.658380 containerd[2031]: time="2025-01-13T21:13:31.658178142Z" level=info msg="RemoveContainer for \"e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6\" returns successfully"
time="2025-01-13T21:13:31.658178142Z" level=info msg="RemoveContainer for \"e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6\" returns successfully" Jan 13 21:13:31.659773 kubelet[3497]: I0113 21:13:31.659720 3497 scope.go:117] "RemoveContainer" containerID="2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f" Jan 13 21:13:31.666637 containerd[2031]: time="2025-01-13T21:13:31.666261954Z" level=info msg="RemoveContainer for \"2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f\"" Jan 13 21:13:31.672777 containerd[2031]: time="2025-01-13T21:13:31.672702858Z" level=info msg="RemoveContainer for \"2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f\" returns successfully" Jan 13 21:13:31.673306 kubelet[3497]: I0113 21:13:31.673094 3497 scope.go:117] "RemoveContainer" containerID="ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34" Jan 13 21:13:31.675350 containerd[2031]: time="2025-01-13T21:13:31.675266514Z" level=info msg="RemoveContainer for \"ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34\"" Jan 13 21:13:31.681157 containerd[2031]: time="2025-01-13T21:13:31.681082986Z" level=info msg="RemoveContainer for \"ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34\" returns successfully" Jan 13 21:13:31.681658 kubelet[3497]: I0113 21:13:31.681520 3497 scope.go:117] "RemoveContainer" containerID="ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30" Jan 13 21:13:31.685164 containerd[2031]: time="2025-01-13T21:13:31.684225798Z" level=info msg="RemoveContainer for \"ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30\"" Jan 13 21:13:31.695367 containerd[2031]: time="2025-01-13T21:13:31.695266278Z" level=info msg="RemoveContainer for \"ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30\" returns successfully" Jan 13 21:13:31.695908 kubelet[3497]: I0113 21:13:31.695871 3497 scope.go:117] "RemoveContainer" containerID="7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d" Jan 13 21:13:31.696549 containerd[2031]: time="2025-01-13T21:13:31.696426834Z" level=error msg="ContainerStatus for \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\": not found" Jan 13 21:13:31.696803 kubelet[3497]: E0113 21:13:31.696772 3497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\": not found" containerID="7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d" Jan 13 21:13:31.696911 kubelet[3497]: I0113 21:13:31.696818 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d"} err="failed to get container status \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f02aca3ffe85beb4e59c44782c84ae08ec79c6fcf09c71d2374368199fa596d\": not found" Jan 13 21:13:31.696911 kubelet[3497]: I0113 21:13:31.696856 3497 scope.go:117] "RemoveContainer" containerID="e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6" Jan 13 21:13:31.697386 containerd[2031]: time="2025-01-13T21:13:31.697311054Z" level=error msg="ContainerStatus 
for \"e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6\": not found" Jan 13 21:13:31.697725 kubelet[3497]: E0113 21:13:31.697642 3497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6\": not found" containerID="e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6" Jan 13 21:13:31.697809 kubelet[3497]: I0113 21:13:31.697726 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6"} err="failed to get container status \"e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4d11fa3f17a511ceabefc1f4c382576b0cd504952cfbed6d4f6b47eef6f07b6\": not found" Jan 13 21:13:31.697809 kubelet[3497]: I0113 21:13:31.697791 3497 scope.go:117] "RemoveContainer" containerID="2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f" Jan 13 21:13:31.698420 containerd[2031]: time="2025-01-13T21:13:31.698222046Z" level=error msg="ContainerStatus for \"2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f\": not found" Jan 13 21:13:31.698854 kubelet[3497]: E0113 21:13:31.698657 3497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f\": not found" containerID="2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f" Jan 13 21:13:31.698854 kubelet[3497]: I0113 21:13:31.698709 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f"} err="failed to get container status \"2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f\": rpc error: code = NotFound desc = an error occurred when try to find container \"2b74dfd308f43c9200080d2c9202dd75effc0dbb74db2dca0f8d72c6bef6806f\": not found" Jan 13 21:13:31.698854 kubelet[3497]: I0113 21:13:31.698745 3497 scope.go:117] "RemoveContainer" containerID="ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34" Jan 13 21:13:31.699800 containerd[2031]: time="2025-01-13T21:13:31.699480690Z" level=error msg="ContainerStatus for \"ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34\": not found" Jan 13 21:13:31.700293 kubelet[3497]: E0113 21:13:31.700123 3497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34\": not found" containerID="ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34" Jan 13 21:13:31.700293 kubelet[3497]: I0113 21:13:31.700172 3497 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34"} err="failed to get container status \"ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad84c100c9db15d8dfae97871d40c81fb1dc637def34c22d9a2ee3328e553d34\": not found" Jan 13 21:13:31.700293 kubelet[3497]: I0113 21:13:31.700233 3497 scope.go:117] "RemoveContainer" containerID="ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30" Jan 13 21:13:31.701107 containerd[2031]: time="2025-01-13T21:13:31.700972926Z" level=error msg="ContainerStatus for \"ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30\": not found" Jan 13 21:13:31.701393 kubelet[3497]: E0113 21:13:31.701338 3497 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30\": not found" containerID="ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30" Jan 13 21:13:31.701514 kubelet[3497]: I0113 21:13:31.701435 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30"} err="failed to get container status \"ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee24e17cd3ee8e8e45342eb7738c58ba6a609214b94cd25d6896fc7e9c527e30\": not found" Jan 13 21:13:32.448334 sshd[5114]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:32.456539 systemd[1]: sshd@27-172.31.25.188:22-139.178.89.65:56884.service: Deactivated successfully. Jan 13 21:13:32.460723 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 21:13:32.461462 systemd[1]: session-28.scope: Consumed 2.278s CPU time. Jan 13 21:13:32.462676 systemd-logind[1997]: Session 28 logged out. Waiting for processes to exit. Jan 13 21:13:32.466264 systemd-logind[1997]: Removed session 28. Jan 13 21:13:32.488579 systemd[1]: Started sshd@28-172.31.25.188:22-139.178.89.65:59984.service - OpenSSH per-connection server daemon (139.178.89.65:59984). Jan 13 21:13:32.661082 sshd[5274]: Accepted publickey for core from 139.178.89.65 port 59984 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:32.663818 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:32.671468 systemd-logind[1997]: New session 29 of user core. Jan 13 21:13:32.683297 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 13 21:13:33.110500 kubelet[3497]: I0113 21:13:33.110431 3497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8a1815-aa70-4a68-80cf-69673d57f4f8" path="/var/lib/kubelet/pods/3c8a1815-aa70-4a68-80cf-69673d57f4f8/volumes"
Jan 13 21:13:33.113440 kubelet[3497]: I0113 21:13:33.111885 3497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" path="/var/lib/kubelet/pods/8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6/volumes"
Jan 13 21:13:33.193544 ntpd[1989]: Deleting interface #12 lxc_health, fe80::b836:a5ff:feda:5450%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs
Jan 13 21:13:33.194236 ntpd[1989]: 13 Jan 21:13:33 ntpd[1989]: Deleting interface #12 lxc_health, fe80::b836:a5ff:feda:5450%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs
Jan 13 21:13:33.292290 kubelet[3497]: E0113 21:13:33.292210 3497 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:13:34.040461 sshd[5274]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:34.050710 systemd[1]: sshd@28-172.31.25.188:22-139.178.89.65:59984.service: Deactivated successfully.
Jan 13 21:13:34.057634 systemd[1]: session-29.scope: Deactivated successfully.
Jan 13 21:13:34.060403 systemd[1]: session-29.scope: Consumed 1.137s CPU time.
Jan 13 21:13:34.064163 systemd-logind[1997]: Session 29 logged out. Waiting for processes to exit.
Jan 13 21:13:34.074399 kubelet[3497]: E0113 21:13:34.073125 3497 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" containerName="mount-cgroup"
Jan 13 21:13:34.074399 kubelet[3497]: E0113 21:13:34.073184 3497 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" containerName="apply-sysctl-overwrites"
Jan 13 21:13:34.074399 kubelet[3497]: E0113 21:13:34.073203 3497 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3c8a1815-aa70-4a68-80cf-69673d57f4f8" containerName="cilium-operator"
Jan 13 21:13:34.074399 kubelet[3497]: E0113 21:13:34.073219 3497 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" containerName="mount-bpf-fs"
Jan 13 21:13:34.074399 kubelet[3497]: E0113 21:13:34.073234 3497 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" containerName="clean-cilium-state"
Jan 13 21:13:34.074399 kubelet[3497]: E0113 21:13:34.073250 3497 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" containerName="cilium-agent"
Jan 13 21:13:34.074399 kubelet[3497]: I0113 21:13:34.073304 3497 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e82f6f9-ecfd-4a2a-82fb-f2fdea61c7e6" containerName="cilium-agent"
Jan 13 21:13:34.074399 kubelet[3497]: I0113 21:13:34.073345 3497 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8a1815-aa70-4a68-80cf-69673d57f4f8" containerName="cilium-operator"
Jan 13 21:13:34.086490 kubelet[3497]: W0113 21:13:34.086406 3497 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
Jan 13 21:13:34.086649 kubelet[3497]: E0113 21:13:34.086495 3497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-25-188\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-25-188' and this object" logger="UnhandledError"
Jan 13 21:13:34.090388 systemd-logind[1997]: Removed session 29.
Jan 13 21:13:34.107065 kubelet[3497]: W0113 21:13:34.104277 3497 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
Jan 13 21:13:34.107065 kubelet[3497]: E0113 21:13:34.104342 3497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-25-188\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-25-188' and this object" logger="UnhandledError"
Jan 13 21:13:34.104578 systemd[1]: Started sshd@29-172.31.25.188:22-139.178.89.65:59998.service - OpenSSH per-connection server daemon (139.178.89.65:59998).
Jan 13 21:13:34.108892 kubelet[3497]: W0113 21:13:34.108791 3497 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
Jan 13 21:13:34.108892 kubelet[3497]: E0113 21:13:34.108883 3497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-25-188\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-25-188' and this object" logger="UnhandledError"
Jan 13 21:13:34.110973 kubelet[3497]: E0113 21:13:34.110334 3497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-gvlsz" podUID="66dc399e-addb-4e1d-ba61-484a84bce32f"
Jan 13 21:13:34.111592 kubelet[3497]: W0113 21:13:34.110968 3497 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
Jan 13 21:13:34.114607 kubelet[3497]: E0113 21:13:34.111163 3497 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-25-188\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-25-188' and this object" logger="UnhandledError"
relationship found between node 'ip-172-31-25-188' and this object" logger="UnhandledError" Jan 13 21:13:34.134059 systemd[1]: Created slice kubepods-burstable-pod30049fb3_9356_4878_b2f7_e5ce861d721c.slice - libcontainer container kubepods-burstable-pod30049fb3_9356_4878_b2f7_e5ce861d721c.slice. Jan 13 21:13:34.148816 kubelet[3497]: I0113 21:13:34.148748 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30049fb3-9356-4878-b2f7-e5ce861d721c-etc-cni-netd\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.148816 kubelet[3497]: I0113 21:13:34.148823 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/30049fb3-9356-4878-b2f7-e5ce861d721c-bpf-maps\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.149110 kubelet[3497]: I0113 21:13:34.148867 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/30049fb3-9356-4878-b2f7-e5ce861d721c-host-proc-sys-kernel\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.149110 kubelet[3497]: I0113 21:13:34.148914 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/30049fb3-9356-4878-b2f7-e5ce861d721c-cni-path\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.149110 kubelet[3497]: I0113 21:13:34.148948 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30049fb3-9356-4878-b2f7-e5ce861d721c-xtables-lock\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.149110 kubelet[3497]: I0113 21:13:34.148984 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/30049fb3-9356-4878-b2f7-e5ce861d721c-cilium-run\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.149316 kubelet[3497]: I0113 21:13:34.149114 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30049fb3-9356-4878-b2f7-e5ce861d721c-lib-modules\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.149316 kubelet[3497]: I0113 21:13:34.149159 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30049fb3-9356-4878-b2f7-e5ce861d721c-cilium-config-path\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.149316 kubelet[3497]: I0113 21:13:34.149201 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/30049fb3-9356-4878-b2f7-e5ce861d721c-cilium-cgroup\") pod 
\"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.149316 kubelet[3497]: I0113 21:13:34.149246 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/30049fb3-9356-4878-b2f7-e5ce861d721c-hubble-tls\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.149316 kubelet[3497]: I0113 21:13:34.149280 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/30049fb3-9356-4878-b2f7-e5ce861d721c-clustermesh-secrets\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.149316 kubelet[3497]: I0113 21:13:34.149314 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/30049fb3-9356-4878-b2f7-e5ce861d721c-hostproc\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.149827 kubelet[3497]: I0113 21:13:34.149354 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z59gg\" (UniqueName: \"kubernetes.io/projected/30049fb3-9356-4878-b2f7-e5ce861d721c-kube-api-access-z59gg\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.149827 kubelet[3497]: I0113 21:13:34.149393 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/30049fb3-9356-4878-b2f7-e5ce861d721c-host-proc-sys-net\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.149827 kubelet[3497]: I0113 21:13:34.149430 3497 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/30049fb3-9356-4878-b2f7-e5ce861d721c-cilium-ipsec-secrets\") pod \"cilium-q9rzr\" (UID: \"30049fb3-9356-4878-b2f7-e5ce861d721c\") " pod="kube-system/cilium-q9rzr" Jan 13 21:13:34.311158 sshd[5286]: Accepted publickey for core from 139.178.89.65 port 59998 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:13:34.315389 sshd[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:13:34.325339 systemd-logind[1997]: New session 30 of user core. Jan 13 21:13:34.332512 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 13 21:13:34.459249 sshd[5286]: pam_unix(sshd:session): session closed for user core Jan 13 21:13:34.466289 systemd[1]: sshd@29-172.31.25.188:22-139.178.89.65:59998.service: Deactivated successfully. Jan 13 21:13:34.469863 systemd[1]: session-30.scope: Deactivated successfully. Jan 13 21:13:34.472093 systemd-logind[1997]: Session 30 logged out. Waiting for processes to exit. Jan 13 21:13:34.474597 systemd-logind[1997]: Removed session 30. Jan 13 21:13:34.503674 systemd[1]: Started sshd@30-172.31.25.188:22-139.178.89.65:60010.service - OpenSSH per-connection server daemon (139.178.89.65:60010). 
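[Annotation] The "no relationship found between node ... and this object" denials above come from the Kubernetes node authorizer: a kubelet may only read Secrets and ConfigMaps referenced by pods already bound to its node, so these list/watch attempts fail until the scheduler's binding of cilium-q9rzr propagates into the authorizer's graph. They stop on their own once the pod is admitted (the kubepods-burstable slice above); no RBAC change is needed. A minimal sketch for pulling the denied user, verb, and object out of such journal lines; the helper and the idea of post-processing the journal this way are illustrative, not part of any tool shown here:

    import re

    # Matches the kubelet reflector warnings above (the W-prefixed lines with
    # unescaped quotes), e.g.: secrets "cilium-ipsec-keys" is forbidden: ...
    FORBIDDEN = re.compile(
        r'\w+ "(?P<name>[^"]+)" is forbidden: '
        r'User "(?P<user>[^"]+)" cannot (?P<verb>\w+) resource "(?P<resource>[^"]+)" '
        r'in API group "[^"]*" in the namespace "(?P<ns>[^"]+)"'
    )

    def denied_objects(journal_lines):
        """Yield (user, verb, resource, namespace/name) for each denial seen."""
        for line in journal_lines:
            m = FORBIDDEN.search(line)
            if m:
                yield m["user"], m["verb"], m["resource"], f'{m["ns"]}/{m["name"]}'

Run over the warnings above, this yields cilium-ipsec-keys, hubble-server-certs, and cilium-config, all denied to system:node:ip-172-31-25-188.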
Jan 13 21:13:34.676540 sshd[5295]: Accepted publickey for core from 139.178.89.65 port 60010 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:34.679214 sshd[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:34.687341 systemd-logind[1997]: New session 31 of user core.
Jan 13 21:13:34.697246 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 13 21:13:35.251356 kubelet[3497]: E0113 21:13:35.251282 3497 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:35.251356 kubelet[3497]: E0113 21:13:35.251328 3497 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:35.252049 kubelet[3497]: E0113 21:13:35.251439 3497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30049fb3-9356-4878-b2f7-e5ce861d721c-clustermesh-secrets podName:30049fb3-9356-4878-b2f7-e5ce861d721c nodeName:}" failed. No retries permitted until 2025-01-13 21:13:35.751410484 +0000 UTC m=+112.947288088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/30049fb3-9356-4878-b2f7-e5ce861d721c-clustermesh-secrets") pod "cilium-q9rzr" (UID: "30049fb3-9356-4878-b2f7-e5ce861d721c") : failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:35.252049 kubelet[3497]: E0113 21:13:35.251330 3497 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-q9rzr: failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:35.252049 kubelet[3497]: E0113 21:13:35.251969 3497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/30049fb3-9356-4878-b2f7-e5ce861d721c-hubble-tls podName:30049fb3-9356-4878-b2f7-e5ce861d721c nodeName:}" failed. No retries permitted until 2025-01-13 21:13:35.751939864 +0000 UTC m=+112.947817456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/30049fb3-9356-4878-b2f7-e5ce861d721c-hubble-tls") pod "cilium-q9rzr" (UID: "30049fb3-9356-4878-b2f7-e5ce861d721c") : failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:35.252292 kubelet[3497]: E0113 21:13:35.251307 3497 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Jan 13 21:13:35.252376 kubelet[3497]: E0113 21:13:35.252297 3497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/30049fb3-9356-4878-b2f7-e5ce861d721c-cilium-config-path podName:30049fb3-9356-4878-b2f7-e5ce861d721c nodeName:}" failed. No retries permitted until 2025-01-13 21:13:35.752277004 +0000 UTC m=+112.948154608 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/30049fb3-9356-4878-b2f7-e5ce861d721c-cilium-config-path") pod "cilium-q9rzr" (UID: "30049fb3-9356-4878-b2f7-e5ce861d721c") : failed to sync configmap cache: timed out waiting for the condition
Jan 13 21:13:35.252376 kubelet[3497]: E0113 21:13:35.251279 3497 secret.go:188] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:35.252376 kubelet[3497]: E0113 21:13:35.252359 3497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30049fb3-9356-4878-b2f7-e5ce861d721c-cilium-ipsec-secrets podName:30049fb3-9356-4878-b2f7-e5ce861d721c nodeName:}" failed. No retries permitted until 2025-01-13 21:13:35.752344456 +0000 UTC m=+112.948222072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/30049fb3-9356-4878-b2f7-e5ce861d721c-cilium-ipsec-secrets") pod "cilium-q9rzr" (UID: "30049fb3-9356-4878-b2f7-e5ce861d721c") : failed to sync secret cache: timed out waiting for the condition
Jan 13 21:13:35.628039 kubelet[3497]: I0113 21:13:35.627413 3497 setters.go:600] "Node became not ready" node="ip-172-31-25-188" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:13:35Z","lastTransitionTime":"2025-01-13T21:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 21:13:35.946614 containerd[2031]: time="2025-01-13T21:13:35.946508927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q9rzr,Uid:30049fb3-9356-4878-b2f7-e5ce861d721c,Namespace:kube-system,Attempt:0,}"
Jan 13 21:13:35.990422 containerd[2031]: time="2025-01-13T21:13:35.990241103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:13:35.990422 containerd[2031]: time="2025-01-13T21:13:35.990357515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:13:35.991166 containerd[2031]: time="2025-01-13T21:13:35.991023371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:13:35.992641 containerd[2031]: time="2025-01-13T21:13:35.992490671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:13:36.030326 systemd[1]: Started cri-containerd-739753c716307776d40bb9f606988fec39eeefa6d59eaa71f9a554754db7e10c.scope - libcontainer container 739753c716307776d40bb9f606988fec39eeefa6d59eaa71f9a554754db7e10c.
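[Annotation] Each MountVolume.SetUp failure above is rescheduled rather than retried in a tight loop: the kubelet stamps it with "No retries permitted until" equal to the failure time plus durationBeforeRetry (500ms here; the delay grows on repeated failures of the same operation), and the m=+112.947288088 offset is the same instant expressed relative to kubelet start on the monotonic clock. The arithmetic can be checked directly from the log; the timestamp below is truncated to microseconds, which Python's datetime requires:

    from datetime import datetime, timedelta, timezone

    # clustermesh-secrets failed at 21:13:35.251410484 with durationBeforeRetry 500ms.
    failed_at = datetime(2025, 1, 13, 21, 13, 35, 251410, tzinfo=timezone.utc)
    retry_at = failed_at + timedelta(milliseconds=500)
    print(retry_at.time())  # 21:13:35.751410, matching the "No retries permitted until" instant

The retries succeed within one 500ms round here: the secret and configmap caches finish syncing as soon as the node-authorizer denials above clear.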
Jan 13 21:13:36.071440 containerd[2031]: time="2025-01-13T21:13:36.071272004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q9rzr,Uid:30049fb3-9356-4878-b2f7-e5ce861d721c,Namespace:kube-system,Attempt:0,} returns sandbox id \"739753c716307776d40bb9f606988fec39eeefa6d59eaa71f9a554754db7e10c\""
Jan 13 21:13:36.079559 containerd[2031]: time="2025-01-13T21:13:36.079501064Z" level=info msg="CreateContainer within sandbox \"739753c716307776d40bb9f606988fec39eeefa6d59eaa71f9a554754db7e10c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:13:36.105484 containerd[2031]: time="2025-01-13T21:13:36.105282524Z" level=info msg="CreateContainer within sandbox \"739753c716307776d40bb9f606988fec39eeefa6d59eaa71f9a554754db7e10c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4e75577fdf1b413d4812ce41aaaae9807c101e2e665e34133ffcb6877c7a010d\""
Jan 13 21:13:36.106371 kubelet[3497]: E0113 21:13:36.106296 3497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-gvlsz" podUID="66dc399e-addb-4e1d-ba61-484a84bce32f"
Jan 13 21:13:36.106854 containerd[2031]: time="2025-01-13T21:13:36.106587584Z" level=info msg="StartContainer for \"4e75577fdf1b413d4812ce41aaaae9807c101e2e665e34133ffcb6877c7a010d\""
Jan 13 21:13:36.160662 systemd[1]: Started cri-containerd-4e75577fdf1b413d4812ce41aaaae9807c101e2e665e34133ffcb6877c7a010d.scope - libcontainer container 4e75577fdf1b413d4812ce41aaaae9807c101e2e665e34133ffcb6877c7a010d.
Jan 13 21:13:36.210890 containerd[2031]: time="2025-01-13T21:13:36.209610836Z" level=info msg="StartContainer for \"4e75577fdf1b413d4812ce41aaaae9807c101e2e665e34133ffcb6877c7a010d\" returns successfully"
Jan 13 21:13:36.228598 systemd[1]: cri-containerd-4e75577fdf1b413d4812ce41aaaae9807c101e2e665e34133ffcb6877c7a010d.scope: Deactivated successfully.
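[Annotation] The CRI flow is now visible end to end: RunPodSandbox returns the sandbox id, each CreateContainer call targets that sandbox, and every started container appears as a transient systemd scope. For an init step like mount-cgroup, the scope deactivating moments after a successful start is expected: the container runs to completion. A small illustrative sketch (stdlib only) that pairs the Started/Deactivated scope lines to recover each container's lifetime from a journal:

    import re

    STARTED = re.compile(r'Started cri-containerd-(?P<id>[0-9a-f]{64})\.scope')
    STOPPED = re.compile(r'cri-containerd-(?P<id>[0-9a-f]{64})\.scope: Deactivated successfully')

    def scope_lifetimes(journal_lines):
        """Map container id -> [line that started it, line that stopped it or None]."""
        seen = {}
        for line in journal_lines:
            if (m := STARTED.search(line)):
                seen[m["id"]] = [line, None]
            elif (m := STOPPED.search(line)) and m["id"] in seen:
                seen[m["id"]][1] = line
        return seen

On the lines above, the mount-cgroup container 4e75577f... lives for roughly 70ms between its scope start and stop.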
Jan 13 21:13:36.286054 containerd[2031]: time="2025-01-13T21:13:36.285746493Z" level=info msg="shim disconnected" id=4e75577fdf1b413d4812ce41aaaae9807c101e2e665e34133ffcb6877c7a010d namespace=k8s.io
Jan 13 21:13:36.286054 containerd[2031]: time="2025-01-13T21:13:36.285821037Z" level=warning msg="cleaning up after shim disconnected" id=4e75577fdf1b413d4812ce41aaaae9807c101e2e665e34133ffcb6877c7a010d namespace=k8s.io
Jan 13 21:13:36.286054 containerd[2031]: time="2025-01-13T21:13:36.285845505Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:36.306777 containerd[2031]: time="2025-01-13T21:13:36.306682413Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:13:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:13:36.636576 containerd[2031]: time="2025-01-13T21:13:36.635550358Z" level=info msg="CreateContainer within sandbox \"739753c716307776d40bb9f606988fec39eeefa6d59eaa71f9a554754db7e10c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:13:36.662420 containerd[2031]: time="2025-01-13T21:13:36.662338187Z" level=info msg="CreateContainer within sandbox \"739753c716307776d40bb9f606988fec39eeefa6d59eaa71f9a554754db7e10c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"50478ea03e5d05c81b8c23538f7c91a41bb890d5fc2b7a52707a756b0abba472\""
Jan 13 21:13:36.665364 containerd[2031]: time="2025-01-13T21:13:36.665290187Z" level=info msg="StartContainer for \"50478ea03e5d05c81b8c23538f7c91a41bb890d5fc2b7a52707a756b0abba472\""
Jan 13 21:13:36.713298 systemd[1]: Started cri-containerd-50478ea03e5d05c81b8c23538f7c91a41bb890d5fc2b7a52707a756b0abba472.scope - libcontainer container 50478ea03e5d05c81b8c23538f7c91a41bb890d5fc2b7a52707a756b0abba472.
Jan 13 21:13:36.764943 containerd[2031]: time="2025-01-13T21:13:36.764769563Z" level=info msg="StartContainer for \"50478ea03e5d05c81b8c23538f7c91a41bb890d5fc2b7a52707a756b0abba472\" returns successfully"
Jan 13 21:13:36.795924 systemd[1]: cri-containerd-50478ea03e5d05c81b8c23538f7c91a41bb890d5fc2b7a52707a756b0abba472.scope: Deactivated successfully.
Jan 13 21:13:36.836213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50478ea03e5d05c81b8c23538f7c91a41bb890d5fc2b7a52707a756b0abba472-rootfs.mount: Deactivated successfully.
Jan 13 21:13:36.842977 containerd[2031]: time="2025-01-13T21:13:36.842852976Z" level=info msg="shim disconnected" id=50478ea03e5d05c81b8c23538f7c91a41bb890d5fc2b7a52707a756b0abba472 namespace=k8s.io
Jan 13 21:13:36.843208 containerd[2031]: time="2025-01-13T21:13:36.842962524Z" level=warning msg="cleaning up after shim disconnected" id=50478ea03e5d05c81b8c23538f7c91a41bb890d5fc2b7a52707a756b0abba472 namespace=k8s.io
Jan 13 21:13:36.843208 containerd[2031]: time="2025-01-13T21:13:36.843044028Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:37.642451 containerd[2031]: time="2025-01-13T21:13:37.641725979Z" level=info msg="CreateContainer within sandbox \"739753c716307776d40bb9f606988fec39eeefa6d59eaa71f9a554754db7e10c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:13:37.676249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123750429.mount: Deactivated successfully.
Jan 13 21:13:37.694235 containerd[2031]: time="2025-01-13T21:13:37.694160784Z" level=info msg="CreateContainer within sandbox \"739753c716307776d40bb9f606988fec39eeefa6d59eaa71f9a554754db7e10c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eedc9f78109a26918b0c22c784aa32854abe619a48e53d9b295a829ab097b3a0\""
Jan 13 21:13:37.697128 containerd[2031]: time="2025-01-13T21:13:37.695301708Z" level=info msg="StartContainer for \"eedc9f78109a26918b0c22c784aa32854abe619a48e53d9b295a829ab097b3a0\""
Jan 13 21:13:37.751407 systemd[1]: Started cri-containerd-eedc9f78109a26918b0c22c784aa32854abe619a48e53d9b295a829ab097b3a0.scope - libcontainer container eedc9f78109a26918b0c22c784aa32854abe619a48e53d9b295a829ab097b3a0.
Jan 13 21:13:37.810667 containerd[2031]: time="2025-01-13T21:13:37.810599712Z" level=info msg="StartContainer for \"eedc9f78109a26918b0c22c784aa32854abe619a48e53d9b295a829ab097b3a0\" returns successfully"
Jan 13 21:13:37.815300 systemd[1]: cri-containerd-eedc9f78109a26918b0c22c784aa32854abe619a48e53d9b295a829ab097b3a0.scope: Deactivated successfully.
Jan 13 21:13:37.857350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eedc9f78109a26918b0c22c784aa32854abe619a48e53d9b295a829ab097b3a0-rootfs.mount: Deactivated successfully.
Jan 13 21:13:37.865554 containerd[2031]: time="2025-01-13T21:13:37.865339621Z" level=info msg="shim disconnected" id=eedc9f78109a26918b0c22c784aa32854abe619a48e53d9b295a829ab097b3a0 namespace=k8s.io
Jan 13 21:13:37.865554 containerd[2031]: time="2025-01-13T21:13:37.865436797Z" level=warning msg="cleaning up after shim disconnected" id=eedc9f78109a26918b0c22c784aa32854abe619a48e53d9b295a829ab097b3a0 namespace=k8s.io
Jan 13 21:13:37.865554 containerd[2031]: time="2025-01-13T21:13:37.865481149Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:38.104920 kubelet[3497]: E0113 21:13:38.104838 3497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-gvlsz" podUID="66dc399e-addb-4e1d-ba61-484a84bce32f"
Jan 13 21:13:38.293877 kubelet[3497]: E0113 21:13:38.293793 3497 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:13:38.648085 containerd[2031]: time="2025-01-13T21:13:38.647982636Z" level=info msg="CreateContainer within sandbox \"739753c716307776d40bb9f606988fec39eeefa6d59eaa71f9a554754db7e10c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:13:38.684353 containerd[2031]: time="2025-01-13T21:13:38.684235021Z" level=info msg="CreateContainer within sandbox \"739753c716307776d40bb9f606988fec39eeefa6d59eaa71f9a554754db7e10c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f0cbfa50f89e328053a02c66f788b94b40a8cef9147926c95193e4ea472a28ac\""
Jan 13 21:13:38.687222 containerd[2031]: time="2025-01-13T21:13:38.687091405Z" level=info msg="StartContainer for \"f0cbfa50f89e328053a02c66f788b94b40a8cef9147926c95193e4ea472a28ac\""
Jan 13 21:13:38.748371 systemd[1]: Started cri-containerd-f0cbfa50f89e328053a02c66f788b94b40a8cef9147926c95193e4ea472a28ac.scope - libcontainer container f0cbfa50f89e328053a02c66f788b94b40a8cef9147926c95193e4ea472a28ac.
Jan 13 21:13:38.809365 systemd[1]: cri-containerd-f0cbfa50f89e328053a02c66f788b94b40a8cef9147926c95193e4ea472a28ac.scope: Deactivated successfully.
Jan 13 21:13:38.816718 containerd[2031]: time="2025-01-13T21:13:38.816246649Z" level=info msg="StartContainer for \"f0cbfa50f89e328053a02c66f788b94b40a8cef9147926c95193e4ea472a28ac\" returns successfully"
Jan 13 21:13:38.869058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0cbfa50f89e328053a02c66f788b94b40a8cef9147926c95193e4ea472a28ac-rootfs.mount: Deactivated successfully.
Jan 13 21:13:38.882295 containerd[2031]: time="2025-01-13T21:13:38.882045458Z" level=info msg="shim disconnected" id=f0cbfa50f89e328053a02c66f788b94b40a8cef9147926c95193e4ea472a28ac namespace=k8s.io
Jan 13 21:13:38.882295 containerd[2031]: time="2025-01-13T21:13:38.882247370Z" level=warning msg="cleaning up after shim disconnected" id=f0cbfa50f89e328053a02c66f788b94b40a8cef9147926c95193e4ea472a28ac namespace=k8s.io
Jan 13 21:13:38.882938 containerd[2031]: time="2025-01-13T21:13:38.882268334Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:39.660249 containerd[2031]: time="2025-01-13T21:13:39.660163454Z" level=info msg="CreateContainer within sandbox \"739753c716307776d40bb9f606988fec39eeefa6d59eaa71f9a554754db7e10c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:13:39.698331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76903951.mount: Deactivated successfully.
Jan 13 21:13:39.701542 containerd[2031]: time="2025-01-13T21:13:39.701458622Z" level=info msg="CreateContainer within sandbox \"739753c716307776d40bb9f606988fec39eeefa6d59eaa71f9a554754db7e10c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f6103ffdf036cae52df1da14f35ae17d225321bc1fee15be7cf8f5e875b4ef05\""
Jan 13 21:13:39.705640 containerd[2031]: time="2025-01-13T21:13:39.705277478Z" level=info msg="StartContainer for \"f6103ffdf036cae52df1da14f35ae17d225321bc1fee15be7cf8f5e875b4ef05\""
Jan 13 21:13:39.759335 systemd[1]: Started cri-containerd-f6103ffdf036cae52df1da14f35ae17d225321bc1fee15be7cf8f5e875b4ef05.scope - libcontainer container f6103ffdf036cae52df1da14f35ae17d225321bc1fee15be7cf8f5e875b4ef05.
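[Annotation] That last CreateContainer call completes Cilium's strictly sequential init chain inside the one sandbox: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally the long-running cilium-agent; each step's scope deactivates before the next container is created. The order can be recovered mechanically from the request lines, a sketch over the log format seen above:

    import re

    CREATE = re.compile(r'CreateContainer within sandbox .* for container '
                        r'&ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:\d+,\}')

    def creation_order(journal_lines):
        """Container names in the order kubelet asked containerd to create them
        (matches only the request lines, not the 'returns container id' replies)."""
        return [m["name"] for line in journal_lines if (m := CREATE.search(line))]

    # For this journal: ['mount-cgroup', 'apply-sysctl-overwrites',
    #                    'mount-bpf-fs', 'clean-cilium-state', 'cilium-agent']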
Jan 13 21:13:39.821653 containerd[2031]: time="2025-01-13T21:13:39.821382806Z" level=info msg="StartContainer for \"f6103ffdf036cae52df1da14f35ae17d225321bc1fee15be7cf8f5e875b4ef05\" returns successfully"
Jan 13 21:13:40.105706 kubelet[3497]: E0113 21:13:40.105139 3497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-gvlsz" podUID="66dc399e-addb-4e1d-ba61-484a84bce32f"
Jan 13 21:13:40.611220 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 21:13:40.700610 kubelet[3497]: I0113 21:13:40.698867 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q9rzr" podStartSLOduration=6.698843835 podStartE2EDuration="6.698843835s" podCreationTimestamp="2025-01-13 21:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:13:40.697170267 +0000 UTC m=+117.893047895" watchObservedRunningTime="2025-01-13 21:13:40.698843835 +0000 UTC m=+117.894721439"
Jan 13 21:13:42.104947 kubelet[3497]: E0113 21:13:42.104816 3497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-gvlsz" podUID="66dc399e-addb-4e1d-ba61-484a84bce32f"
Jan 13 21:13:42.104947 kubelet[3497]: E0113 21:13:42.104888 3497 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-zt9c8" podUID="0439a085-f194-4662-8e1c-9024115b788b"
Jan 13 21:13:43.057039 containerd[2031]: time="2025-01-13T21:13:43.055743338Z" level=info msg="StopPodSandbox for \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\""
Jan 13 21:13:43.057039 containerd[2031]: time="2025-01-13T21:13:43.055904582Z" level=info msg="TearDown network for sandbox \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\" successfully"
Jan 13 21:13:43.057039 containerd[2031]: time="2025-01-13T21:13:43.055931546Z" level=info msg="StopPodSandbox for \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\" returns successfully"
Jan 13 21:13:43.057039 containerd[2031]: time="2025-01-13T21:13:43.056692046Z" level=info msg="RemovePodSandbox for \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\""
Jan 13 21:13:43.057039 containerd[2031]: time="2025-01-13T21:13:43.056759234Z" level=info msg="Forcibly stopping sandbox \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\""
Jan 13 21:13:43.057039 containerd[2031]: time="2025-01-13T21:13:43.056878418Z" level=info msg="TearDown network for sandbox \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\" successfully"
Jan 13 21:13:43.064476 containerd[2031]: time="2025-01-13T21:13:43.064320326Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
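[Annotation] The pod_startup_latency_tracker line is the payoff: cilium-q9rzr went from creation (21:13:34) to observed-running in podStartSLOduration=6.698843835s, and the zeroed firstStartedPulling/lastFinishedPulling fields indicate no image pull contributed (every image was already on disk). The figure is plain subtraction of podCreationTimestamp from watchObservedRunningTime, reproducible up to microsecond truncation:

    from datetime import datetime, timezone

    created = datetime(2025, 1, 13, 21, 13, 34, tzinfo=timezone.utc)           # podCreationTimestamp
    running = datetime(2025, 1, 13, 21, 13, 40, 698843, tzinfo=timezone.utc)   # watchObservedRunningTime
    print((running - created).total_seconds())  # 6.698843, i.e. podStartSLOduration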
Jan 13 21:13:43.064658 containerd[2031]: time="2025-01-13T21:13:43.064504562Z" level=info msg="RemovePodSandbox \"06e05040c8a6f8219741b14e3db181f92844532f3e3de18dca58f20c3b551ce3\" returns successfully"
Jan 13 21:13:43.066026 containerd[2031]: time="2025-01-13T21:13:43.065351750Z" level=info msg="StopPodSandbox for \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\""
Jan 13 21:13:43.066026 containerd[2031]: time="2025-01-13T21:13:43.065515154Z" level=info msg="TearDown network for sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" successfully"
Jan 13 21:13:43.066026 containerd[2031]: time="2025-01-13T21:13:43.065545430Z" level=info msg="StopPodSandbox for \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" returns successfully"
Jan 13 21:13:43.066372 containerd[2031]: time="2025-01-13T21:13:43.066297506Z" level=info msg="RemovePodSandbox for \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\""
Jan 13 21:13:43.066470 containerd[2031]: time="2025-01-13T21:13:43.066373850Z" level=info msg="Forcibly stopping sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\""
Jan 13 21:13:43.066541 containerd[2031]: time="2025-01-13T21:13:43.066483542Z" level=info msg="TearDown network for sandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" successfully"
Jan 13 21:13:43.075043 containerd[2031]: time="2025-01-13T21:13:43.073493474Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:13:43.075043 containerd[2031]: time="2025-01-13T21:13:43.073652846Z" level=info msg="RemovePodSandbox \"f9af345d89baa5590dfde5667f7d280168947243a48145d8271be5d8840ffffc\" returns successfully"
Jan 13 21:13:43.369894 systemd[1]: run-containerd-runc-k8s.io-f6103ffdf036cae52df1da14f35ae17d225321bc1fee15be7cf8f5e875b4ef05-runc.tpSLPu.mount: Deactivated successfully.
Jan 13 21:13:44.981321 systemd-networkd[1935]: lxc_health: Link UP
Jan 13 21:13:44.992165 (udev-worker)[6132]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:13:45.002160 systemd-networkd[1935]: lxc_health: Gained carrier
Jan 13 21:13:46.697472 systemd-networkd[1935]: lxc_health: Gained IPv6LL
Jan 13 21:13:49.193600 ntpd[1989]: Listen normally on 15 lxc_health [fe80::c808:e3ff:fe7a:9d84%14]:123
Jan 13 21:13:49.194286 ntpd[1989]: 13 Jan 21:13:49 ntpd[1989]: Listen normally on 15 lxc_health [fe80::c808:e3ff:fe7a:9d84%14]:123
Jan 13 21:13:50.286912 systemd[1]: run-containerd-runc-k8s.io-f6103ffdf036cae52df1da14f35ae17d225321bc1fee15be7cf8f5e875b4ef05-runc.kj9YbD.mount: Deactivated successfully.
Jan 13 21:13:50.436022 sshd[5295]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:50.444294 systemd[1]: sshd@30-172.31.25.188:22-139.178.89.65:60010.service: Deactivated successfully.
Jan 13 21:13:50.451829 systemd[1]: session-31.scope: Deactivated successfully.
Jan 13 21:13:50.455762 systemd-logind[1997]: Session 31 logged out. Waiting for processes to exit.
Jan 13 21:13:50.458903 systemd-logind[1997]: Removed session 31.
Jan 13 21:14:04.828958 systemd[1]: cri-containerd-712ce23223f79ee934081242dafe10bb233d34a926d29ebd8d41d71c1c6ee8c6.scope: Deactivated successfully.
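[Annotation] The lxc_health link appearing right after the agent starts is Cilium's own doing: the agent creates a veth pair for its health-check endpoint, systemd-networkd brings it up, and once it gains an IPv6 link-local address ntpd binds to the new interface as well. The kernel's view of such a link can be read straight from sysfs on the node; the helper below is illustrative and assumes the standard Linux sysfs layout:

    from pathlib import Path

    def operstate(ifname="lxc_health"):
        """Kernel-reported link state for an interface, e.g. 'up', 'down' or 'unknown'."""
        return Path(f"/sys/class/net/{ifname}/operstate").read_text().strip()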
Jan 13 21:14:04.830283 systemd[1]: cri-containerd-712ce23223f79ee934081242dafe10bb233d34a926d29ebd8d41d71c1c6ee8c6.scope: Consumed 4.716s CPU time, 18.1M memory peak, 0B memory swap peak.
Jan 13 21:14:04.868789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-712ce23223f79ee934081242dafe10bb233d34a926d29ebd8d41d71c1c6ee8c6-rootfs.mount: Deactivated successfully.
Jan 13 21:14:04.878334 containerd[2031]: time="2025-01-13T21:14:04.878257755Z" level=info msg="shim disconnected" id=712ce23223f79ee934081242dafe10bb233d34a926d29ebd8d41d71c1c6ee8c6 namespace=k8s.io
Jan 13 21:14:04.879115 containerd[2031]: time="2025-01-13T21:14:04.878897547Z" level=warning msg="cleaning up after shim disconnected" id=712ce23223f79ee934081242dafe10bb233d34a926d29ebd8d41d71c1c6ee8c6 namespace=k8s.io
Jan 13 21:14:04.879115 containerd[2031]: time="2025-01-13T21:14:04.878958051Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:14:04.900034 containerd[2031]: time="2025-01-13T21:14:04.899911695Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:14:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:14:05.732249 kubelet[3497]: I0113 21:14:05.731874 3497 scope.go:117] "RemoveContainer" containerID="712ce23223f79ee934081242dafe10bb233d34a926d29ebd8d41d71c1c6ee8c6"
Jan 13 21:14:05.736048 containerd[2031]: time="2025-01-13T21:14:05.735929811Z" level=info msg="CreateContainer within sandbox \"4629ebb085c4c8221bb993e5814d7ca535d813c3cead3c32902033acc2266ed2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 21:14:05.761095 containerd[2031]: time="2025-01-13T21:14:05.761017971Z" level=info msg="CreateContainer within sandbox \"4629ebb085c4c8221bb993e5814d7ca535d813c3cead3c32902033acc2266ed2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f0062a039d7d6bacfcdc325f76e4d0d62e4b2ac4363d9d285bbae6bed869495f\""
Jan 13 21:14:05.761791 containerd[2031]: time="2025-01-13T21:14:05.761712939Z" level=info msg="StartContainer for \"f0062a039d7d6bacfcdc325f76e4d0d62e4b2ac4363d9d285bbae6bed869495f\""
Jan 13 21:14:05.813325 systemd[1]: Started cri-containerd-f0062a039d7d6bacfcdc325f76e4d0d62e4b2ac4363d9d285bbae6bed869495f.scope - libcontainer container f0062a039d7d6bacfcdc325f76e4d0d62e4b2ac4363d9d285bbae6bed869495f.
Jan 13 21:14:05.882814 containerd[2031]: time="2025-01-13T21:14:05.882742972Z" level=info msg="StartContainer for \"f0062a039d7d6bacfcdc325f76e4d0d62e4b2ac4363d9d285bbae6bed869495f\" returns successfully"
Jan 13 21:14:06.195008 kubelet[3497]: E0113 21:14:06.194892 3497 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-188?timeout=10s\": context deadline exceeded"
Jan 13 21:14:08.651416 systemd[1]: cri-containerd-68aa60eee57ec3576d4cf28207db2886478e4c2f94bb9e2f362abdcc8550decf.scope: Deactivated successfully.
Jan 13 21:14:08.651950 systemd[1]: cri-containerd-68aa60eee57ec3576d4cf28207db2886478e4c2f94bb9e2f362abdcc8550decf.scope: Consumed 2.195s CPU time, 16.1M memory peak, 0B memory swap peak.
Jan 13 21:14:08.705168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68aa60eee57ec3576d4cf28207db2886478e4c2f94bb9e2f362abdcc8550decf-rootfs.mount: Deactivated successfully.
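[Annotation] The "Consumed 4.716s CPU time, 18.1M memory peak" line is systemd's cgroup accounting for the kube-controller-manager scope, reported as the scope is torn down; the kubelet then removes the dead container and recreates it in the same static-pod sandbox with Attempt:1, and a few seconds later the same exit-and-recreate sequence repeats for kube-scheduler. For a still-running scope the same CPU figure can be read from cgroup v2; the sketch below assumes the unified hierarchy mounted at /sys/fs/cgroup (as on Flatcar), and the cgroup disappears once the scope is gone:

    from pathlib import Path

    def scope_cpu_seconds(scope_unit, cgroup_root="/sys/fs/cgroup"):
        """CPU usage in seconds for a live systemd scope, taken from cgroup v2
        cpu.stat's usage_usec, the number systemd later reports as 'Consumed ... CPU time'."""
        for stat in Path(cgroup_root).rglob(f"{scope_unit}/cpu.stat"):
            for line in stat.read_text().splitlines():
                if line.startswith("usage_usec"):
                    return int(line.split()[1]) / 1e6
        return None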
Jan 13 21:14:08.718310 containerd[2031]: time="2025-01-13T21:14:08.718200330Z" level=info msg="shim disconnected" id=68aa60eee57ec3576d4cf28207db2886478e4c2f94bb9e2f362abdcc8550decf namespace=k8s.io
Jan 13 21:14:08.718310 containerd[2031]: time="2025-01-13T21:14:08.718300206Z" level=warning msg="cleaning up after shim disconnected" id=68aa60eee57ec3576d4cf28207db2886478e4c2f94bb9e2f362abdcc8550decf namespace=k8s.io
Jan 13 21:14:08.719112 containerd[2031]: time="2025-01-13T21:14:08.718323030Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:14:08.756024 kubelet[3497]: I0113 21:14:08.754387 3497 scope.go:117] "RemoveContainer" containerID="68aa60eee57ec3576d4cf28207db2886478e4c2f94bb9e2f362abdcc8550decf"
Jan 13 21:14:08.760049 containerd[2031]: time="2025-01-13T21:14:08.758573478Z" level=info msg="CreateContainer within sandbox \"2de552ec1c57a45c882bdf7c08c0319067419c089d1958a7f4db0948b9bdfe01\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 21:14:08.796123 containerd[2031]: time="2025-01-13T21:14:08.796054206Z" level=info msg="CreateContainer within sandbox \"2de552ec1c57a45c882bdf7c08c0319067419c089d1958a7f4db0948b9bdfe01\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b4933a70f91c5a4e73dd50aa42c26ea2b345beca53605a0bec2e83dce9627d51\""
Jan 13 21:14:08.797481 containerd[2031]: time="2025-01-13T21:14:08.797375574Z" level=info msg="StartContainer for \"b4933a70f91c5a4e73dd50aa42c26ea2b345beca53605a0bec2e83dce9627d51\""
Jan 13 21:14:08.871331 systemd[1]: Started cri-containerd-b4933a70f91c5a4e73dd50aa42c26ea2b345beca53605a0bec2e83dce9627d51.scope - libcontainer container b4933a70f91c5a4e73dd50aa42c26ea2b345beca53605a0bec2e83dce9627d51.
Jan 13 21:14:08.941298 containerd[2031]: time="2025-01-13T21:14:08.941227891Z" level=info msg="StartContainer for \"b4933a70f91c5a4e73dd50aa42c26ea2b345beca53605a0bec2e83dce9627d51\" returns successfully"
Jan 13 21:14:16.195535 kubelet[3497]: E0113 21:14:16.195409 3497 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-188?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
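[Annotation] Both lease failures are single 10-second client timeouts (the ?timeout=10s in the PUT URL), which together with the kube-controller-manager and kube-scheduler restarts above points at a briefly unresponsive local apiserver. The kubelet renews its Lease in kube-node-lease roughly every 10 seconds, and assuming the default 40-second lease duration, isolated misses like these two are harmless; only a sustained outage lets the lease lapse and the node-lifecycle controller react. The margin, under those assumed defaults:

    from datetime import timedelta

    RENEW_INTERVAL = timedelta(seconds=10)   # kubelet renew cadence (assumed default)
    LEASE_DURATION = timedelta(seconds=40)   # nodeLeaseDurationSeconds (assumed default)

    def lease_expired(last_renew, now):
        """True once the Lease is stale enough to count as a missed node heartbeat."""
        return now - last_renew > LEASE_DURATION

    # The two failures above sit about 10s apart (21:14:06 and 21:14:16), so the
    # lease was never more than ~20s stale, well inside the 40s window.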