Jan 13 20:08:13.176008 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 13 20:08:13.176056 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:56:28 -00 2025
Jan 13 20:08:13.176081 kernel: KASLR disabled due to lack of seed
Jan 13 20:08:13.176097 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:08:13.176113 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Jan 13 20:08:13.176128 kernel: secureboot: Secure boot disabled
Jan 13 20:08:13.176145 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:08:13.176160 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 13 20:08:13.176176 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 13 20:08:13.176191 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 20:08:13.176210 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 13 20:08:13.176226 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 20:08:13.176241 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 13 20:08:13.176257 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 13 20:08:13.176275 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 13 20:08:13.176295 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 20:08:13.176312 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 13 20:08:13.176328 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 13 20:08:13.176344 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 13 20:08:13.176360 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 13 20:08:13.176376 kernel: printk: bootconsole [uart0] enabled
Jan 13 20:08:13.176392 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:08:13.176408 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 20:08:13.176424 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 13 20:08:13.176439 kernel: Zone ranges:
Jan 13 20:08:13.176455 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 20:08:13.176475 kernel: DMA32 empty
Jan 13 20:08:13.176492 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 13 20:08:13.176507 kernel: Movable zone start for each node
Jan 13 20:08:13.176524 kernel: Early memory node ranges
Jan 13 20:08:13.176540 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 13 20:08:13.176556 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 13 20:08:13.176572 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 13 20:08:13.176614 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 13 20:08:13.176652 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 13 20:08:13.176670 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 13 20:08:13.176687 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 13 20:08:13.176703 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 13 20:08:13.176726 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 20:08:13.176766 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 13 20:08:13.176794 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:08:13.176812 kernel: psci: PSCIv1.0 detected in firmware.
Jan 13 20:08:13.176829 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:08:13.176851 kernel: psci: Trusted OS migration not required
Jan 13 20:08:13.176868 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:08:13.176885 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:08:13.176902 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:08:13.176920 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 20:08:13.176937 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:08:13.176954 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:08:13.176971 kernel: CPU features: detected: Spectre-v2
Jan 13 20:08:13.176988 kernel: CPU features: detected: Spectre-v3a
Jan 13 20:08:13.177005 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:08:13.177022 kernel: CPU features: detected: ARM erratum 1742098
Jan 13 20:08:13.177039 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 13 20:08:13.177060 kernel: alternatives: applying boot alternatives
Jan 13 20:08:13.177079 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 13 20:08:13.177098 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:08:13.177115 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:08:13.177133 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:08:13.177159 kernel: Fallback order for Node 0: 0
Jan 13 20:08:13.177176 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 13 20:08:13.177193 kernel: Policy zone: Normal
Jan 13 20:08:13.177210 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:08:13.177226 kernel: software IO TLB: area num 2.
Jan 13 20:08:13.177248 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 13 20:08:13.177266 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved)
Jan 13 20:08:13.177283 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:08:13.177300 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:08:13.177319 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:08:13.177336 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:08:13.177354 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:08:13.177371 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:08:13.177388 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:08:13.177405 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:08:13.177422 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:08:13.177444 kernel: GICv3: 96 SPIs implemented
Jan 13 20:08:13.177461 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:08:13.177478 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:08:13.177494 kernel: GICv3: GICv3 features: 16 PPIs
Jan 13 20:08:13.177511 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 13 20:08:13.177528 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 13 20:08:13.177545 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:08:13.177562 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:08:13.177580 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 13 20:08:13.177596 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 13 20:08:13.177613 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 13 20:08:13.177630 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:08:13.177652 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 13 20:08:13.177670 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 13 20:08:13.177687 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 13 20:08:13.177704 kernel: Console: colour dummy device 80x25
Jan 13 20:08:13.177722 kernel: printk: console [tty1] enabled
Jan 13 20:08:13.178994 kernel: ACPI: Core revision 20230628
Jan 13 20:08:13.179030 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 13 20:08:13.179049 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:08:13.179067 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:08:13.179084 kernel: landlock: Up and running.
Jan 13 20:08:13.179111 kernel: SELinux: Initializing.
Jan 13 20:08:13.179129 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:08:13.179146 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:08:13.179164 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:08:13.179182 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:08:13.179199 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:08:13.179217 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:08:13.179234 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 13 20:08:13.179256 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 13 20:08:13.179274 kernel: Remapping and enabling EFI services.
Jan 13 20:08:13.179291 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:08:13.179308 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:08:13.179325 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 13 20:08:13.179343 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 13 20:08:13.179360 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 13 20:08:13.179378 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:08:13.179395 kernel: SMP: Total of 2 processors activated.
Jan 13 20:08:13.179412 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:08:13.179434 kernel: CPU features: detected: 32-bit EL1 Support
Jan 13 20:08:13.179451 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:08:13.179480 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:08:13.179503 kernel: alternatives: applying system-wide alternatives
Jan 13 20:08:13.179520 kernel: devtmpfs: initialized
Jan 13 20:08:13.179539 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:08:13.179557 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:08:13.179575 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:08:13.179593 kernel: SMBIOS 3.0.0 present.
Jan 13 20:08:13.179615 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 13 20:08:13.179633 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:08:13.179652 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:08:13.179670 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:08:13.179689 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:08:13.179729 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:08:13.179802 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1
Jan 13 20:08:13.179830 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:08:13.179849 kernel: cpuidle: using governor menu
Jan 13 20:08:13.179867 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:08:13.179885 kernel: ASID allocator initialised with 65536 entries
Jan 13 20:08:13.179904 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:08:13.179922 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:08:13.179940 kernel: Modules: 17360 pages in range for non-PLT usage
Jan 13 20:08:13.179958 kernel: Modules: 508880 pages in range for PLT usage
Jan 13 20:08:13.179976 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:08:13.179999 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:08:13.180017 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:08:13.180035 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:08:13.180053 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:08:13.180071 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:08:13.180089 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:08:13.180107 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:08:13.180125 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:08:13.180142 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:08:13.180164 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:08:13.180182 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:08:13.180200 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:08:13.180218 kernel: ACPI: Interpreter enabled
Jan 13 20:08:13.180236 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:08:13.180254 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:08:13.180272 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 13 20:08:13.180567 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:08:13.180862 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:08:13.181063 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:08:13.181254 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 13 20:08:13.181453 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 13 20:08:13.181478 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 13 20:08:13.181497 kernel: acpiphp: Slot [1] registered
Jan 13 20:08:13.181515 kernel: acpiphp: Slot [2] registered
Jan 13 20:08:13.181533 kernel: acpiphp: Slot [3] registered
Jan 13 20:08:13.181557 kernel: acpiphp: Slot [4] registered
Jan 13 20:08:13.181576 kernel: acpiphp: Slot [5] registered
Jan 13 20:08:13.181594 kernel: acpiphp: Slot [6] registered
Jan 13 20:08:13.181612 kernel: acpiphp: Slot [7] registered
Jan 13 20:08:13.181630 kernel: acpiphp: Slot [8] registered
Jan 13 20:08:13.181648 kernel: acpiphp: Slot [9] registered
Jan 13 20:08:13.181665 kernel: acpiphp: Slot [10] registered
Jan 13 20:08:13.181683 kernel: acpiphp: Slot [11] registered
Jan 13 20:08:13.181701 kernel: acpiphp: Slot [12] registered
Jan 13 20:08:13.181719 kernel: acpiphp: Slot [13] registered
Jan 13 20:08:13.181766 kernel: acpiphp: Slot [14] registered
Jan 13 20:08:13.181814 kernel: acpiphp: Slot [15] registered
Jan 13 20:08:13.181833 kernel: acpiphp: Slot [16] registered
Jan 13 20:08:13.181851 kernel: acpiphp: Slot [17] registered
Jan 13 20:08:13.181869 kernel: acpiphp: Slot [18] registered
Jan 13 20:08:13.181887 kernel: acpiphp: Slot [19] registered
Jan 13 20:08:13.181905 kernel: acpiphp: Slot [20] registered
Jan 13 20:08:13.181923 kernel: acpiphp: Slot [21] registered
Jan 13 20:08:13.181941 kernel: acpiphp: Slot [22] registered
Jan 13 20:08:13.181966 kernel: acpiphp: Slot [23] registered
Jan 13 20:08:13.181985 kernel: acpiphp: Slot [24] registered
Jan 13 20:08:13.182003 kernel: acpiphp: Slot [25] registered
Jan 13 20:08:13.182021 kernel: acpiphp: Slot [26] registered
Jan 13 20:08:13.182039 kernel: acpiphp: Slot [27] registered
Jan 13 20:08:13.182056 kernel: acpiphp: Slot [28] registered
Jan 13 20:08:13.182074 kernel: acpiphp: Slot [29] registered
Jan 13 20:08:13.182092 kernel: acpiphp: Slot [30] registered
Jan 13 20:08:13.182110 kernel: acpiphp: Slot [31] registered
Jan 13 20:08:13.182128 kernel: PCI host bridge to bus 0000:00
Jan 13 20:08:13.182351 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 13 20:08:13.182535 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:08:13.182715 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 13 20:08:13.182966 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 13 20:08:13.183187 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 13 20:08:13.183407 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 13 20:08:13.183612 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 13 20:08:13.183898 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 20:08:13.184105 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 13 20:08:13.184305 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 20:08:13.184519 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 20:08:13.184719 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 13 20:08:13.184983 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 13 20:08:13.185191 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 13 20:08:13.185389 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 20:08:13.185592 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 13 20:08:13.185819 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 13 20:08:13.186026 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 13 20:08:13.186226 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 13 20:08:13.186431 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 13 20:08:13.186624 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 13 20:08:13.189310 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:08:13.191880 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 13 20:08:13.191928 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:08:13.191948 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:08:13.191967 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:08:13.191986 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:08:13.192005 kernel: iommu: Default domain type: Translated
Jan 13 20:08:13.192033 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:08:13.192052 kernel: efivars: Registered efivars operations
Jan 13 20:08:13.192071 kernel: vgaarb: loaded
Jan 13 20:08:13.192090 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:08:13.192108 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:08:13.192127 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:08:13.192145 kernel: pnp: PnP ACPI init
Jan 13 20:08:13.192383 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 13 20:08:13.192421 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:08:13.192440 kernel: NET: Registered PF_INET protocol family
Jan 13 20:08:13.192458 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:08:13.192477 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:08:13.192496 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:08:13.192515 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:08:13.192533 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:08:13.192552 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:08:13.192570 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:08:13.192594 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:08:13.192613 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:08:13.192632 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:08:13.192650 kernel: kvm [1]: HYP mode not available
Jan 13 20:08:13.192668 kernel: Initialise system trusted keyrings
Jan 13 20:08:13.192687 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:08:13.192706 kernel: Key type asymmetric registered
Jan 13 20:08:13.192724 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:08:13.193895 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:08:13.193942 kernel: io scheduler mq-deadline registered
Jan 13 20:08:13.193962 kernel: io scheduler kyber registered
Jan 13 20:08:13.193981 kernel: io scheduler bfq registered
Jan 13 20:08:13.194250 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 13 20:08:13.194279 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:08:13.194299 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:08:13.194317 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 13 20:08:13.194336 kernel: ACPI: button: Sleep Button [SLPB]
Jan 13 20:08:13.194360 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:08:13.194379 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 20:08:13.194587 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 13 20:08:13.194613 kernel: printk: console [ttyS0] disabled
Jan 13 20:08:13.194633 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 13 20:08:13.194651 kernel: printk: console [ttyS0] enabled
Jan 13 20:08:13.194670 kernel: printk: bootconsole [uart0] disabled
Jan 13 20:08:13.194688 kernel: thunder_xcv, ver 1.0
Jan 13 20:08:13.194707 kernel: thunder_bgx, ver 1.0
Jan 13 20:08:13.194725 kernel: nicpf, ver 1.0
Jan 13 20:08:13.194773 kernel: nicvf, ver 1.0
Jan 13 20:08:13.194990 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:08:13.195181 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:08:12 UTC (1736798892)
Jan 13 20:08:13.195207 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:08:13.195226 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 13 20:08:13.195244 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:08:13.195263 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:08:13.195288 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:08:13.195307 kernel: Segment Routing with IPv6
Jan 13 20:08:13.195325 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:08:13.195343 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:08:13.195361 kernel: Key type dns_resolver registered
Jan 13 20:08:13.195379 kernel: registered taskstats version 1
Jan 13 20:08:13.195397 kernel: Loading compiled-in X.509 certificates
Jan 13 20:08:13.195416 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 46cb4d1b22f3a5974766fe7d7b651e2f296d4fe0'
Jan 13 20:08:13.195434 kernel: Key type .fscrypt registered
Jan 13 20:08:13.195452 kernel: Key type fscrypt-provisioning registered
Jan 13 20:08:13.195476 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:08:13.195494 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:08:13.195512 kernel: ima: No architecture policies found
Jan 13 20:08:13.195531 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:08:13.195549 kernel: clk: Disabling unused clocks
Jan 13 20:08:13.195568 kernel: Freeing unused kernel memory: 39936K
Jan 13 20:08:13.195586 kernel: Run /init as init process
Jan 13 20:08:13.195604 kernel: with arguments:
Jan 13 20:08:13.195622 kernel: /init
Jan 13 20:08:13.195644 kernel: with environment:
Jan 13 20:08:13.195662 kernel: HOME=/
Jan 13 20:08:13.195680 kernel: TERM=linux
Jan 13 20:08:13.195717 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:08:13.197814 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:08:13.197858 systemd[1]: Detected virtualization amazon.
Jan 13 20:08:13.197878 systemd[1]: Detected architecture arm64.
Jan 13 20:08:13.197907 systemd[1]: Running in initrd.
Jan 13 20:08:13.197928 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:08:13.197948 systemd[1]: Hostname set to <localhost>.
Jan 13 20:08:13.197968 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:08:13.197987 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:08:13.198007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:08:13.198027 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:08:13.198048 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:08:13.198073 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:08:13.198094 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:08:13.198114 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:08:13.198137 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:08:13.198157 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:08:13.198177 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:08:13.198196 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:08:13.198220 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:08:13.198240 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:08:13.198260 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:08:13.198280 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:08:13.198299 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:08:13.198319 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:08:13.198339 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:08:13.198358 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:08:13.198378 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:08:13.198402 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:08:13.198422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:08:13.198441 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:08:13.198461 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:08:13.198481 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:08:13.198501 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:08:13.198520 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:08:13.198540 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:08:13.198564 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:08:13.198584 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:13.198603 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:08:13.198623 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:08:13.198643 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:08:13.198716 systemd-journald[251]: Collecting audit messages is disabled.
Jan 13 20:08:13.199832 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:08:13.199856 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:13.199877 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:08:13.199906 systemd-journald[251]: Journal started
Jan 13 20:08:13.199951 systemd-journald[251]: Runtime Journal (/run/log/journal/ec20205600ad29d6543a71cb21726119) is 8.0M, max 75.3M, 67.3M free.
Jan 13 20:08:13.175059 systemd-modules-load[252]: Inserted module 'overlay'
Jan 13 20:08:13.211787 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:08:13.215788 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:08:13.218465 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:08:13.223880 kernel: Bridge firewalling registered
Jan 13 20:08:13.220781 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 13 20:08:13.222251 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:08:13.231021 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:08:13.235122 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:08:13.244019 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:08:13.277344 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:08:13.293091 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:08:13.299995 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:08:13.310078 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:08:13.328960 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:13.350003 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:08:13.378420 dracut-cmdline[291]: dracut-dracut-053
Jan 13 20:08:13.384585 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc
Jan 13 20:08:13.411965 systemd-resolved[285]: Positive Trust Anchors:
Jan 13 20:08:13.411998 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:08:13.412062 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:08:13.554777 kernel: SCSI subsystem initialized
Jan 13 20:08:13.562775 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:08:13.574779 kernel: iscsi: registered transport (tcp)
Jan 13 20:08:13.596780 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:08:13.596873 kernel: QLogic iSCSI HBA Driver
Jan 13 20:08:13.668772 kernel: random: crng init done
Jan 13 20:08:13.668960 systemd-resolved[285]: Defaulting to hostname 'linux'.
Jan 13 20:08:13.673394 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:08:13.679906 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:08:13.703248 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:08:13.714022 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:08:13.756819 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:08:13.756896 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:08:13.756923 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:08:13.822782 kernel: raid6: neonx8 gen() 6528 MB/s
Jan 13 20:08:13.839772 kernel: raid6: neonx4 gen() 6511 MB/s
Jan 13 20:08:13.856769 kernel: raid6: neonx2 gen() 5419 MB/s
Jan 13 20:08:13.873769 kernel: raid6: neonx1 gen() 3941 MB/s
Jan 13 20:08:13.890769 kernel: raid6: int64x8 gen() 3606 MB/s
Jan 13 20:08:13.907768 kernel: raid6: int64x4 gen() 3704 MB/s
Jan 13 20:08:13.924769 kernel: raid6: int64x2 gen() 3597 MB/s
Jan 13 20:08:13.942545 kernel: raid6: int64x1 gen() 2764 MB/s
Jan 13 20:08:13.942582 kernel: raid6: using algorithm neonx8 gen() 6528 MB/s
Jan 13 20:08:13.960573 kernel: raid6: .... xor() 4810 MB/s, rmw enabled
Jan 13 20:08:13.960610 kernel: raid6: using neon recovery algorithm
Jan 13 20:08:13.967771 kernel: xor: measuring software checksum speed
Jan 13 20:08:13.968771 kernel: 8regs : 11608 MB/sec
Jan 13 20:08:13.969768 kernel: 32regs : 11889 MB/sec
Jan 13 20:08:13.971769 kernel: arm64_neon : 8900 MB/sec
Jan 13 20:08:13.971803 kernel: xor: using function: 32regs (11889 MB/sec)
Jan 13 20:08:14.053789 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:08:14.071986 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:08:14.083049 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:08:14.127620 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jan 13 20:08:14.135515 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:08:14.148011 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:08:14.179362 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Jan 13 20:08:14.235102 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:08:14.244056 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:08:14.366281 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:08:14.378996 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:08:14.421241 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:08:14.426726 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:08:14.431616 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:08:14.436081 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:08:14.447045 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:08:14.490302 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:08:14.557787 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:08:14.557859 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 13 20:08:14.587934 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 20:08:14.588185 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 20:08:14.588413 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:66:69:12:a4:81
Jan 13 20:08:14.562023 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:08:14.562253 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:14.565127 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:08:14.567296 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:08:14.567553 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:14.570116 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:14.590538 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:08:14.590729 (udev-worker)[527]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:08:14.644879 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 13 20:08:14.644948 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 20:08:14.653792 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 20:08:14.656151 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:08:14.663119 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:08:14.673618 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:08:14.673661 kernel: GPT:9289727 != 16777215
Jan 13 20:08:14.673687 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:08:14.673713 kernel: GPT:9289727 != 16777215
Jan 13 20:08:14.673757 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:08:14.674770 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:08:14.706554 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:08:14.796778 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (516)
Jan 13 20:08:14.829836 kernel: BTRFS: device fsid 2be7cc1c-29d4-4496-b29b-8561323213d2 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (528)
Jan 13 20:08:14.835225 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 20:08:14.918786 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:08:14.937116 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 20:08:14.952687 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 20:08:14.958253 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 20:08:14.977078 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:08:14.989793 disk-uuid[662]: Primary Header is updated.
Jan 13 20:08:14.989793 disk-uuid[662]: Secondary Entries is updated.
Jan 13 20:08:14.989793 disk-uuid[662]: Secondary Header is updated.
Jan 13 20:08:14.998786 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:08:16.016103 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:08:16.016591 disk-uuid[663]: The operation has completed successfully.
Jan 13 20:08:16.207112 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:08:16.208995 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:08:16.248990 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:08:16.257027 sh[923]: Success
Jan 13 20:08:16.280784 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:08:16.410155 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:08:16.415993 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:08:16.419917 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:08:16.461008 kernel: BTRFS info (device dm-0): first mount of filesystem 2be7cc1c-29d4-4496-b29b-8561323213d2
Jan 13 20:08:16.461069 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:16.462793 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:08:16.464019 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:08:16.465078 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:08:16.550755 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:08:16.571957 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:08:16.575845 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:08:16.589976 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:08:16.598072 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:08:16.621110 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:08:16.621193 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:16.621233 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:08:16.629790 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:08:16.649579 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:08:16.651846 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:08:16.661285 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:08:16.673103 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:08:16.774055 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:08:16.788011 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:08:16.842854 systemd-networkd[1115]: lo: Link UP
Jan 13 20:08:16.842875 systemd-networkd[1115]: lo: Gained carrier
Jan 13 20:08:16.847873 systemd-networkd[1115]: Enumeration completed
Jan 13 20:08:16.849852 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:08:16.849918 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:08:16.849925 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:08:16.859367 systemd[1]: Reached target network.target - Network.
Jan 13 20:08:16.865688 systemd-networkd[1115]: eth0: Link UP
Jan 13 20:08:16.865696 systemd-networkd[1115]: eth0: Gained carrier
Jan 13 20:08:16.865714 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:08:16.879833 systemd-networkd[1115]: eth0: DHCPv4 address 172.31.22.29/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:08:17.063947 ignition[1028]: Ignition 2.20.0
Jan 13 20:08:17.064471 ignition[1028]: Stage: fetch-offline
Jan 13 20:08:17.064958 ignition[1028]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:17.064987 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:17.065467 ignition[1028]: Ignition finished successfully
Jan 13 20:08:17.074934 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:08:17.093097 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:08:17.115293 ignition[1124]: Ignition 2.20.0
Jan 13 20:08:17.115323 ignition[1124]: Stage: fetch
Jan 13 20:08:17.116945 ignition[1124]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:17.116971 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:17.117438 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:17.141757 ignition[1124]: PUT result: OK
Jan 13 20:08:17.145061 ignition[1124]: parsed url from cmdline: ""
Jan 13 20:08:17.145084 ignition[1124]: no config URL provided
Jan 13 20:08:17.145101 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:08:17.145128 ignition[1124]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:08:17.145160 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:17.148775 ignition[1124]: PUT result: OK
Jan 13 20:08:17.148867 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 20:08:17.156023 ignition[1124]: GET result: OK
Jan 13 20:08:17.156242 ignition[1124]: parsing config with SHA512: 835b2916aaf95728c439902fab24f4b046a230dfd4cca680e780dd879920d12df78b6e885c6608161d99da876521a42e1cca38baf9ce6f995c16fc4d89bdeb57
Jan 13 20:08:17.166779 unknown[1124]: fetched base config from "system"
Jan 13 20:08:17.167186 unknown[1124]: fetched base config from "system"
Jan 13 20:08:17.168240 ignition[1124]: fetch: fetch complete
Jan 13 20:08:17.167201 unknown[1124]: fetched user config from "aws"
Jan 13 20:08:17.168252 ignition[1124]: fetch: fetch passed
Jan 13 20:08:17.168986 ignition[1124]: Ignition finished successfully
Jan 13 20:08:17.178900 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:08:17.193038 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:08:17.219557 ignition[1130]: Ignition 2.20.0
Jan 13 20:08:17.220089 ignition[1130]: Stage: kargs
Jan 13 20:08:17.220697 ignition[1130]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:17.220723 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:17.220954 ignition[1130]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:17.225205 ignition[1130]: PUT result: OK
Jan 13 20:08:17.233715 ignition[1130]: kargs: kargs passed
Jan 13 20:08:17.233872 ignition[1130]: Ignition finished successfully
Jan 13 20:08:17.239119 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:08:17.253477 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:08:17.276094 ignition[1137]: Ignition 2.20.0
Jan 13 20:08:17.276592 ignition[1137]: Stage: disks
Jan 13 20:08:17.277244 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:17.277269 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:17.277457 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:17.279984 ignition[1137]: PUT result: OK
Jan 13 20:08:17.289595 ignition[1137]: disks: disks passed
Jan 13 20:08:17.289702 ignition[1137]: Ignition finished successfully
Jan 13 20:08:17.294016 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:08:17.298481 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:08:17.300848 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:08:17.307113 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:08:17.309001 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:08:17.310870 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:08:17.324042 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:08:17.383520 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:08:17.388848 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:08:17.402004 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:08:17.482768 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f9a95e53-2d63-4443-b523-cb2108fb48f6 r/w with ordered data mode. Quota mode: none.
Jan 13 20:08:17.483598 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:08:17.487226 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:08:17.509935 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:08:17.516924 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:08:17.520717 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:08:17.520883 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:08:17.535826 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:08:17.543796 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1165)
Jan 13 20:08:17.549263 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:08:17.549571 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:17.549601 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:08:17.546158 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:08:17.566782 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:08:17.567183 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:08:17.574443 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:08:17.968655 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:08:18.003185 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:08:18.011865 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:08:18.019773 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:08:18.349185 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:08:18.360940 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:08:18.370209 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:08:18.387054 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:08:18.389547 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:08:18.414191 systemd-networkd[1115]: eth0: Gained IPv6LL
Jan 13 20:08:18.428817 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:08:18.435574 ignition[1278]: INFO : Ignition 2.20.0
Jan 13 20:08:18.435574 ignition[1278]: INFO : Stage: mount
Jan 13 20:08:18.435574 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:18.435574 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:18.435574 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:18.446115 ignition[1278]: INFO : PUT result: OK
Jan 13 20:08:18.450484 ignition[1278]: INFO : mount: mount passed
Jan 13 20:08:18.450484 ignition[1278]: INFO : Ignition finished successfully
Jan 13 20:08:18.454803 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:08:18.471045 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:08:18.488110 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:08:18.522234 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1290)
Jan 13 20:08:18.522297 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779
Jan 13 20:08:18.522324 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:08:18.524933 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:08:18.529785 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:08:18.533109 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:08:18.583054 ignition[1307]: INFO : Ignition 2.20.0
Jan 13 20:08:18.583054 ignition[1307]: INFO : Stage: files
Jan 13 20:08:18.586306 ignition[1307]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:08:18.586306 ignition[1307]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:08:18.586306 ignition[1307]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:08:18.604617 ignition[1307]: INFO : PUT result: OK
Jan 13 20:08:18.609288 ignition[1307]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:08:18.621282 ignition[1307]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:08:18.621282 ignition[1307]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:08:18.651136 ignition[1307]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:08:18.653927 ignition[1307]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:08:18.656805 unknown[1307]: wrote ssh authorized keys file for user: core
Jan 13 20:08:18.659122 ignition[1307]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:08:18.672658 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:08:18.676477 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:08:18.786214 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:08:18.950121 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:08:18.950121 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:08:18.957048 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 13 20:08:19.295945 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:08:19.432417 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:08:19.435855 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:08:19.435855 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:08:19.435855 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:08:19.435855 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:08:19.435855 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:08:19.435855 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:08:19.435855 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:08:19.435855 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:08:19.435855 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:08:19.435855 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:08:19.435855 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:08:19.435855 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:08:19.435855 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:08:19.481643 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 13 20:08:19.729092 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 20:08:20.058501 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:08:20.058501 ignition[1307]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 20:08:20.065254 ignition[1307]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:08:20.065254 ignition[1307]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:08:20.065254 ignition[1307]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 20:08:20.065254 ignition[1307]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:08:20.065254 ignition[1307]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:08:20.065254 ignition[1307]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:08:20.065254 ignition[1307]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:08:20.065254 ignition[1307]: INFO : files: files passed
Jan 13 20:08:20.065254 ignition[1307]: INFO : Ignition finished successfully
Jan 13 20:08:20.090577 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:08:20.103034 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:08:20.109413 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:08:20.116388 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:08:20.118860 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:08:20.150017 initrd-setup-root-after-ignition[1336]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:08:20.150017 initrd-setup-root-after-ignition[1336]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:08:20.158104 initrd-setup-root-after-ignition[1340]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:08:20.163030 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:08:20.170950 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:08:20.191150 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:08:20.241733 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:08:20.242204 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:08:20.248865 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:08:20.250802 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:08:20.252819 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:08:20.259270 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:08:20.302927 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:08:20.316213 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:08:20.341446 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:08:20.344436 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:08:20.351430 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:08:20.354138 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:08:20.354366 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:08:20.354892 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:08:20.355174 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:08:20.355462 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:08:20.356107 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:08:20.376297 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:08:20.381125 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:08:20.384911 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:08:20.391806 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:08:20.394303 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:08:20.399676 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:08:20.401724 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:08:20.402800 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:08:20.405413 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:08:20.408070 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:08:20.411715 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 13 20:08:20.413445 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:08:20.416359 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:08:20.416924 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:08:20.431217 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:08:20.431459 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:08:20.434027 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:08:20.434223 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:08:20.452839 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:08:20.454749 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:08:20.455471 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:08:20.467904 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:08:20.469680 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:08:20.470063 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:08:20.473469 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:08:20.473703 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:08:20.507338 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:08:20.507578 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:08:20.521695 ignition[1360]: INFO : Ignition 2.20.0 Jan 13 20:08:20.521695 ignition[1360]: INFO : Stage: umount Jan 13 20:08:20.521695 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:08:20.521695 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:08:20.521695 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:08:20.521695 ignition[1360]: INFO : PUT result: OK Jan 13 20:08:20.533428 ignition[1360]: INFO : umount: umount passed Jan 13 20:08:20.533428 ignition[1360]: INFO : Ignition finished successfully Jan 13 20:08:20.538674 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:08:20.541350 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:08:20.546864 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:08:20.546972 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:08:20.550652 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:08:20.550772 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:08:20.558731 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:08:20.558859 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:08:20.560901 systemd[1]: Stopped target network.target - Network. Jan 13 20:08:20.562593 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:08:20.562679 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:08:20.566583 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:08:20.579340 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:08:20.582869 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 13 20:08:20.587167 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:08:20.593315 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:08:20.596625 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:08:20.596728 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:08:20.600058 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:08:20.600132 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:08:20.602020 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:08:20.602110 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:08:20.603969 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:08:20.604045 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:08:20.606472 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:08:20.610106 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:08:20.615979 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:08:20.616954 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:08:20.617158 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:08:20.617317 systemd-networkd[1115]: eth0: DHCPv6 lease lost Jan 13 20:08:20.623254 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:08:20.623455 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:08:20.647371 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:08:20.647512 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:08:20.650932 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:08:20.651044 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:08:20.675915 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:08:20.677871 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:08:20.681715 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:08:20.684567 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:08:20.690384 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:08:20.690601 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:08:20.708506 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:08:20.708843 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:08:20.722390 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:08:20.724532 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:08:20.726666 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:08:20.726786 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:08:20.737877 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:08:20.738570 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:08:20.747570 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:08:20.749813 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:08:20.753636 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 13 20:08:20.754151 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:08:20.757155 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:08:20.757231 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:08:20.760723 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:08:20.760950 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:08:20.772143 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:08:20.772238 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:08:20.774790 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:08:20.774875 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:08:20.801008 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:08:20.803416 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:08:20.803530 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:08:20.806349 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:08:20.806448 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:08:20.827470 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:08:20.828012 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:08:20.835686 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:08:20.851128 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:08:20.867655 systemd[1]: Switching root. Jan 13 20:08:20.908106 systemd-journald[251]: Journal stopped Jan 13 20:08:23.393606 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jan 13 20:08:23.393775 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:08:23.393829 kernel: SELinux: policy capability open_perms=1 Jan 13 20:08:23.393861 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:08:23.393892 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:08:23.393940 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:08:23.393981 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:08:23.394012 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:08:23.394044 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:08:23.394075 kernel: audit: type=1403 audit(1736798901.548:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:08:23.394113 systemd[1]: Successfully loaded SELinux policy in 85.591ms. Jan 13 20:08:23.394159 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.740ms. Jan 13 20:08:23.394192 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:08:23.394223 systemd[1]: Detected virtualization amazon. Jan 13 20:08:23.394254 systemd[1]: Detected architecture arm64. Jan 13 20:08:23.394282 systemd[1]: Detected first boot. Jan 13 20:08:23.394314 systemd[1]: Initializing machine ID from VM UUID. 
Jan 13 20:08:23.394348 zram_generator::config[1404]: No configuration found. Jan 13 20:08:23.394382 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:08:23.394416 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:08:23.394447 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:08:23.394479 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:08:23.394512 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:08:23.394543 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:08:23.394575 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:08:23.394604 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:08:23.394639 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:08:23.394675 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:08:23.394704 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:08:23.394839 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:08:23.394879 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:08:23.394909 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:08:23.394940 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:08:23.394969 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:08:23.394999 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:08:23.395031 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:08:23.395067 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:08:23.395099 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:08:23.395129 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:08:23.395158 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:08:23.395190 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:08:23.395219 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:08:23.395247 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:08:23.395280 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:08:23.395313 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:08:23.395344 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:08:23.395374 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:08:23.395403 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:08:23.395433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:08:23.395463 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:08:23.395495 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:08:23.395526 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 13 20:08:23.395556 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:08:23.395590 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:08:23.395619 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:08:23.395673 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:08:23.395702 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:08:23.395757 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:08:23.395794 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:08:23.395824 systemd[1]: Reached target machines.target - Containers. Jan 13 20:08:23.395857 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:08:23.395892 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:08:23.395924 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:08:23.395955 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:08:23.395984 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:08:23.396013 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:08:23.396041 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:08:23.396070 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:08:23.396110 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:08:23.396142 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:08:23.396177 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:08:23.396206 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:08:23.396236 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:08:23.396264 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:08:23.396296 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:08:23.396325 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:08:23.396354 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:08:23.396382 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:08:23.396413 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:08:23.396448 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:08:23.396478 systemd[1]: Stopped verity-setup.service. Jan 13 20:08:23.396510 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:08:23.396538 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:08:23.396567 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:08:23.396594 kernel: fuse: init (API version 7.39) Jan 13 20:08:23.396625 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:08:23.396654 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 13 20:08:23.396687 kernel: loop: module loaded Jan 13 20:08:23.396718 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:08:23.396787 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:08:23.396822 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:08:23.396850 kernel: ACPI: bus type drm_connector registered Jan 13 20:08:23.396879 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:08:23.396913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:08:23.396942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:08:23.396971 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:08:23.397000 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:08:23.397030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:08:23.397059 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:08:23.397089 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:08:23.397117 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:08:23.397159 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:08:23.397189 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:08:23.397222 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:08:23.397250 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:08:23.397279 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:08:23.397308 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:08:23.397341 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:08:23.397374 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:08:23.397406 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:08:23.397478 systemd-journald[1489]: Collecting audit messages is disabled. Jan 13 20:08:23.397532 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:08:23.397564 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:08:23.397593 systemd-journald[1489]: Journal started Jan 13 20:08:23.397652 systemd-journald[1489]: Runtime Journal (/run/log/journal/ec20205600ad29d6543a71cb21726119) is 8.0M, max 75.3M, 67.3M free. Jan 13 20:08:23.400764 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:08:22.743930 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:08:22.805948 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 20:08:22.806707 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:08:23.421274 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:08:23.421358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:08:23.434725 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 13 20:08:23.434832 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:08:23.453147 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:08:23.453236 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:08:23.477945 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:08:23.478065 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:08:23.485362 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:08:23.489225 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:08:23.492073 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:08:23.494637 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:08:23.498862 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:08:23.515897 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:08:23.564856 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:08:23.574254 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:08:23.583041 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:08:23.597071 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:08:23.610526 kernel: loop0: detected capacity change from 0 to 113552 Jan 13 20:08:23.614793 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:08:23.621076 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:08:23.624128 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:08:23.671583 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:08:23.677922 systemd-journald[1489]: Time spent on flushing to /var/log/journal/ec20205600ad29d6543a71cb21726119 is 52.821ms for 917 entries. Jan 13 20:08:23.677922 systemd-journald[1489]: System Journal (/var/log/journal/ec20205600ad29d6543a71cb21726119) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:08:23.739075 systemd-journald[1489]: Received client request to flush runtime journal. Jan 13 20:08:23.680530 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:08:23.716227 udevadm[1543]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:08:23.743302 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:08:23.760786 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:08:23.782810 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:08:23.797451 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:08:23.805855 kernel: loop1: detected capacity change from 0 to 116784 Jan 13 20:08:23.875279 systemd-tmpfiles[1552]: ACLs are not supported, ignoring. Jan 13 20:08:23.876299 systemd-tmpfiles[1552]: ACLs are not supported, ignoring. 
Jan 13 20:08:23.897068 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:08:23.946773 kernel: loop2: detected capacity change from 0 to 53784 Jan 13 20:08:24.085783 kernel: loop3: detected capacity change from 0 to 194512 Jan 13 20:08:24.144141 kernel: loop4: detected capacity change from 0 to 113552 Jan 13 20:08:24.168845 kernel: loop5: detected capacity change from 0 to 116784 Jan 13 20:08:24.186874 kernel: loop6: detected capacity change from 0 to 53784 Jan 13 20:08:24.205778 kernel: loop7: detected capacity change from 0 to 194512 Jan 13 20:08:24.241146 (sd-merge)[1558]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 20:08:24.242155 (sd-merge)[1558]: Merged extensions into '/usr'. Jan 13 20:08:24.251764 systemd[1]: Reloading requested from client PID 1515 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:08:24.251793 systemd[1]: Reloading... Jan 13 20:08:24.399654 zram_generator::config[1582]: No configuration found. Jan 13 20:08:24.765643 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:08:24.877038 systemd[1]: Reloading finished in 624 ms. Jan 13 20:08:24.917012 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:08:24.921371 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:08:24.938118 systemd[1]: Starting ensure-sysext.service... Jan 13 20:08:24.943079 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:08:24.951106 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:08:24.968973 systemd[1]: Reloading requested from client PID 1637 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:08:24.969003 systemd[1]: Reloading... Jan 13 20:08:25.035594 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:08:25.036228 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:08:25.044157 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:08:25.044945 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Jan 13 20:08:25.045098 systemd-tmpfiles[1638]: ACLs are not supported, ignoring. Jan 13 20:08:25.056264 systemd-tmpfiles[1638]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:08:25.056474 systemd-tmpfiles[1638]: Skipping /boot Jan 13 20:08:25.074463 systemd-udevd[1639]: Using default interface naming scheme 'v255'. Jan 13 20:08:25.130708 systemd-tmpfiles[1638]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:08:25.131362 systemd-tmpfiles[1638]: Skipping /boot Jan 13 20:08:25.145782 zram_generator::config[1665]: No configuration found. Jan 13 20:08:25.201814 ldconfig[1511]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:08:25.328644 (udev-worker)[1697]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:08:25.553450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:08:25.615776 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1698) Jan 13 20:08:25.740698 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:08:25.741405 systemd[1]: Reloading finished in 771 ms. Jan 13 20:08:25.774950 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:08:25.778517 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:08:25.792696 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:08:25.846456 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:08:25.885380 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 20:08:25.913199 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:08:25.920243 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:08:25.924902 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:08:25.935108 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:08:25.943268 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:08:25.951312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:08:25.957051 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:08:25.959375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:08:25.970194 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:08:25.978300 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:08:25.986817 lvm[1836]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:08:25.992194 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:08:26.003986 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:08:26.028243 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:08:26.033833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:08:26.051841 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:08:26.055656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:08:26.056927 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:08:26.067173 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:08:26.069960 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:08:26.078037 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:08:26.087040 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 13 20:08:26.090096 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:08:26.090822 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:08:26.091145 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:08:26.119502 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:08:26.147915 systemd[1]: Finished ensure-sysext.service. Jan 13 20:08:26.151529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:08:26.153832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:08:26.159803 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:08:26.188768 lvm[1861]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:08:26.191722 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:08:26.192250 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:08:26.195828 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:08:26.211003 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:08:26.213581 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:08:26.214069 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:08:26.214954 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:08:26.224209 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:08:26.271360 augenrules[1880]: No rules Jan 13 20:08:26.281578 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:08:26.282518 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:08:26.292926 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:08:26.306019 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:08:26.314146 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:08:26.341097 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:08:26.344243 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:08:26.372905 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:08:26.428980 systemd-networkd[1849]: lo: Link UP Jan 13 20:08:26.428996 systemd-networkd[1849]: lo: Gained carrier Jan 13 20:08:26.432301 systemd-networkd[1849]: Enumeration completed Jan 13 20:08:26.433309 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:08:26.433448 systemd-networkd[1849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:08:26.434099 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 13 20:08:26.437367 systemd-networkd[1849]: eth0: Link UP Jan 13 20:08:26.437651 systemd-networkd[1849]: eth0: Gained carrier Jan 13 20:08:26.437684 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:08:26.444083 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:08:26.447883 systemd-networkd[1849]: eth0: DHCPv4 address 172.31.22.29/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 20:08:26.456645 systemd-resolved[1850]: Positive Trust Anchors: Jan 13 20:08:26.456686 systemd-resolved[1850]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:08:26.456782 systemd-resolved[1850]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:08:26.473235 systemd-resolved[1850]: Defaulting to hostname 'linux'. Jan 13 20:08:26.476461 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:08:26.478791 systemd[1]: Reached target network.target - Network. Jan 13 20:08:26.480512 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:08:26.482804 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:08:26.484899 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:08:26.487190 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:08:26.489862 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:08:26.492115 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:08:26.494371 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:08:26.496641 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:08:26.496692 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:08:26.498363 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:08:26.501167 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:08:26.505626 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:08:26.517068 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:08:26.520067 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:08:26.522203 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:08:26.524011 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:08:26.525963 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:08:26.526014 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 13 20:08:26.532942 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:08:26.543123 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:08:26.550330 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:08:26.567346 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:08:26.574064 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:08:26.577955 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:08:26.585110 jq[1905]: false Jan 13 20:08:26.580339 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:08:26.592190 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 20:08:26.606129 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:08:26.611552 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 20:08:26.616996 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:08:26.625095 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:08:26.635209 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:08:26.638668 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:08:26.639930 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:08:26.645299 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:08:26.652967 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:08:26.669520 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:08:26.671179 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:08:26.714485 dbus-daemon[1904]: [system] SELinux support is enabled Jan 13 20:08:26.718325 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:08:26.726182 dbus-daemon[1904]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1849 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:08:26.731899 jq[1917]: true Jan 13 20:08:26.730454 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:08:26.730847 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:08:26.768474 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:08:26.768549 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:08:26.771396 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 20:08:26.773031 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 13 20:08:26.773075 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:08:26.789552 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 20:08:26.844844 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:08:26.845247 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:08:26.856285 ntpd[1908]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:48 UTC 2025 (1): Starting Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:48 UTC 2025 (1): Starting Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: ---------------------------------------------------- Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: corporation. Support and training for ntp-4 are Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: available at https://www.nwtime.org/support Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: ---------------------------------------------------- Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: proto: precision = 0.096 usec (-23) Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: basedate set to 2025-01-01 Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: gps base set to 2025-01-05 (week 2348) Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: Listen normally on 3 eth0 172.31.22.29:123 Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: Listen normally on 4 lo [::1]:123 Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: bind(21) AF_INET6 fe80::466:69ff:fe12:a481%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: unable to create socket on eth0 (5) for fe80::466:69ff:fe12:a481%2#123 Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: failed to init interface for address fe80::466:69ff:fe12:a481%2 Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: Listening on routing socket on fd #21 for interface updates Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:08:26.881958 ntpd[1908]: 13 Jan 20:08:26 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:08:26.871652 (ntainerd)[1931]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:08:26.883556 extend-filesystems[1906]: Found loop4 Jan 13 20:08:26.883556 extend-filesystems[1906]: Found loop5 Jan 13 20:08:26.883556 extend-filesystems[1906]: Found loop6 Jan 13 20:08:26.883556 extend-filesystems[1906]: Found loop7 Jan 13 20:08:26.883556 extend-filesystems[1906]: Found nvme0n1 Jan 13 20:08:26.883556 extend-filesystems[1906]: Found nvme0n1p1 Jan 13 20:08:26.883556 extend-filesystems[1906]: 
Found nvme0n1p2 Jan 13 20:08:26.883556 extend-filesystems[1906]: Found nvme0n1p3 Jan 13 20:08:26.883556 extend-filesystems[1906]: Found usr Jan 13 20:08:26.883556 extend-filesystems[1906]: Found nvme0n1p4 Jan 13 20:08:26.883556 extend-filesystems[1906]: Found nvme0n1p6 Jan 13 20:08:26.883556 extend-filesystems[1906]: Found nvme0n1p7 Jan 13 20:08:26.883556 extend-filesystems[1906]: Found nvme0n1p9 Jan 13 20:08:26.883556 extend-filesystems[1906]: Checking size of /dev/nvme0n1p9 Jan 13 20:08:26.856355 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:08:26.946369 jq[1927]: true Jan 13 20:08:26.890287 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 20:08:26.952959 update_engine[1916]: I20250113 20:08:26.888909 1916 main.cc:92] Flatcar Update Engine starting Jan 13 20:08:26.952959 update_engine[1916]: I20250113 20:08:26.903119 1916 update_check_scheduler.cc:74] Next update check in 5m44s Jan 13 20:08:26.856375 ntpd[1908]: ---------------------------------------------------- Jan 13 20:08:26.900378 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:08:26.856394 ntpd[1908]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:08:26.929998 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:08:26.856413 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:08:26.856431 ntpd[1908]: corporation. Support and training for ntp-4 are Jan 13 20:08:26.856449 ntpd[1908]: available at https://www.nwtime.org/support Jan 13 20:08:26.856468 ntpd[1908]: ---------------------------------------------------- Jan 13 20:08:26.858714 ntpd[1908]: proto: precision = 0.096 usec (-23) Jan 13 20:08:26.859690 ntpd[1908]: basedate set to 2025-01-01 Jan 13 20:08:26.859717 ntpd[1908]: gps base set to 2025-01-05 (week 2348) Jan 13 20:08:26.862280 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:08:26.862366 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:08:26.862659 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:08:26.862728 ntpd[1908]: Listen normally on 3 eth0 172.31.22.29:123 Jan 13 20:08:26.862826 ntpd[1908]: Listen normally on 4 lo [::1]:123 Jan 13 20:08:26.862899 ntpd[1908]: bind(21) AF_INET6 fe80::466:69ff:fe12:a481%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:08:26.862935 ntpd[1908]: unable to create socket on eth0 (5) for fe80::466:69ff:fe12:a481%2#123 Jan 13 20:08:26.862962 ntpd[1908]: failed to init interface for address fe80::466:69ff:fe12:a481%2 Jan 13 20:08:26.863014 ntpd[1908]: Listening on routing socket on fd #21 for interface updates Jan 13 20:08:26.866960 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:08:26.867013 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:08:26.976802 tar[1925]: linux-arm64/helm Jan 13 20:08:27.004895 extend-filesystems[1906]: Resized partition /dev/nvme0n1p9 Jan 13 20:08:27.001483 systemd-logind[1913]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:08:27.001520 systemd-logind[1913]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 20:08:27.004142 systemd-logind[1913]: New seat seat0. Jan 13 20:08:27.011385 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 13 20:08:27.029861 extend-filesystems[1965]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:08:27.039244 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 20:08:27.084036 coreos-metadata[1903]: Jan 13 20:08:27.083 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.085 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.086 INFO Fetch successful Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.086 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.087 INFO Fetch successful Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.087 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.089 INFO Fetch successful Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.089 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.093 INFO Fetch successful Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.093 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.094 INFO Fetch failed with 404: resource not found Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.094 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.095 INFO Fetch successful Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.095 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.097 INFO Fetch successful Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.097 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.098 INFO Fetch successful Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.098 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.100 INFO Fetch successful Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.100 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 20:08:27.171621 coreos-metadata[1903]: Jan 13 20:08:27.105 INFO Fetch successful Jan 13 20:08:27.209097 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 20:08:27.189526 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:08:27.216673 extend-filesystems[1965]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 20:08:27.216673 extend-filesystems[1965]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:08:27.216673 extend-filesystems[1965]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 20:08:27.227891 bash[1973]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:08:27.221629 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jan 13 20:08:27.228179 extend-filesystems[1906]: Resized filesystem in /dev/nvme0n1p9 Jan 13 20:08:27.226276 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:08:27.232133 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:08:27.239242 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:08:27.242252 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:08:27.246825 systemd[1]: Starting sshkeys.service... Jan 13 20:08:27.280771 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1698) Jan 13 20:08:27.365518 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:08:27.377692 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:08:27.424432 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:08:27.428067 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:08:27.435251 dbus-daemon[1904]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1941 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:08:27.444884 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 20:08:27.561691 polkitd[2035]: Started polkitd version 121 Jan 13 20:08:27.589846 containerd[1931]: time="2025-01-13T20:08:27.588332554Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:08:27.615963 locksmithd[1952]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:08:27.629516 polkitd[2035]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:08:27.629635 polkitd[2035]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:08:27.634798 polkitd[2035]: Finished loading, compiling and executing 2 rules Jan 13 20:08:27.636645 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 20:08:27.636403 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:08:27.643809 polkitd[2035]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:08:27.678025 coreos-metadata[2020]: Jan 13 20:08:27.677 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:08:27.679397 coreos-metadata[2020]: Jan 13 20:08:27.679 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 20:08:27.683790 coreos-metadata[2020]: Jan 13 20:08:27.681 INFO Fetch successful Jan 13 20:08:27.683790 coreos-metadata[2020]: Jan 13 20:08:27.681 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:08:27.686198 coreos-metadata[2020]: Jan 13 20:08:27.686 INFO Fetch successful Jan 13 20:08:27.690982 unknown[2020]: wrote ssh authorized keys file for user: core Jan 13 20:08:27.694065 systemd-networkd[1849]: eth0: Gained IPv6LL Jan 13 20:08:27.710052 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:08:27.714351 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:08:27.731784 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Jan 13 20:08:27.762448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:27.773568 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:08:27.789036 systemd-hostnamed[1941]: Hostname set to (transient) Jan 13 20:08:27.791469 systemd-resolved[1850]: System hostname changed to 'ip-172-31-22-29'. Jan 13 20:08:27.834846 containerd[1931]: time="2025-01-13T20:08:27.832838124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:27.849486 containerd[1931]: time="2025-01-13T20:08:27.844489308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:27.849486 containerd[1931]: time="2025-01-13T20:08:27.844554708Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:08:27.849486 containerd[1931]: time="2025-01-13T20:08:27.844592268Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:08:27.849486 containerd[1931]: time="2025-01-13T20:08:27.844918128Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:08:27.849486 containerd[1931]: time="2025-01-13T20:08:27.844955868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:27.849486 containerd[1931]: time="2025-01-13T20:08:27.845076324Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:27.849486 containerd[1931]: time="2025-01-13T20:08:27.845105952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:27.849486 containerd[1931]: time="2025-01-13T20:08:27.845450148Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:27.849486 containerd[1931]: time="2025-01-13T20:08:27.845481444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:27.849486 containerd[1931]: time="2025-01-13T20:08:27.845511756Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:27.849486 containerd[1931]: time="2025-01-13T20:08:27.845537400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:27.850062 containerd[1931]: time="2025-01-13T20:08:27.845706108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:08:27.863454 containerd[1931]: time="2025-01-13T20:08:27.862419360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:08:27.863454 containerd[1931]: time="2025-01-13T20:08:27.862677600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:08:27.863454 containerd[1931]: time="2025-01-13T20:08:27.862708476Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:08:27.863454 containerd[1931]: time="2025-01-13T20:08:27.862947420Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:08:27.863454 containerd[1931]: time="2025-01-13T20:08:27.863055600Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:08:27.879220 containerd[1931]: time="2025-01-13T20:08:27.878955804Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:08:27.879351 update-ssh-keys[2096]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.884928216Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.885006696Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.885070908Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.885130836Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.885433236Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.885970116Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.886196592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.886230120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.886265412Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.886298868Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.886328748Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.886359300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.886389540Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 13 20:08:27.893136 containerd[1931]: time="2025-01-13T20:08:27.886422024Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:08:27.887247 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.886452144Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.886480584Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.886510152Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.886550652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.886582704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.886611588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.886642380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.886670724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.886700772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.886728456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.887061624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.887098008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.887141460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.907045 containerd[1931]: time="2025-01-13T20:08:27.887174892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.899899 systemd[1]: Finished sshkeys.service. Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.887205156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.887234400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.887266440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.887314200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.887346684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.887468124Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.890076852Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.890128704Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.890169780Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.890201064Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.890225724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.890258448Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.890282676Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:08:27.918729 containerd[1931]: time="2025-01-13T20:08:27.890307540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:08:27.914348 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.901827000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.902008392Z" level=info msg="Connect containerd service" Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.902121540Z" level=info msg="using legacy CRI server" Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.902144172Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.902540652Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.912978528Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:08:27.920543 containerd[1931]: 
time="2025-01-13T20:08:27.913610280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.913698696Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.913781016Z" level=info msg="Start subscribing containerd event" Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.913849140Z" level=info msg="Start recovering state" Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.913977432Z" level=info msg="Start event monitor" Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.914010240Z" level=info msg="Start snapshots syncer" Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.914052996Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.914083752Z" level=info msg="Start streaming server" Jan 13 20:08:27.920543 containerd[1931]: time="2025-01-13T20:08:27.914216904Z" level=info msg="containerd successfully booted in 0.337160s" Jan 13 20:08:27.939211 amazon-ssm-agent[2090]: Initializing new seelog logger Jan 13 20:08:27.939211 amazon-ssm-agent[2090]: New Seelog Logger Creation Complete Jan 13 20:08:27.939211 amazon-ssm-agent[2090]: 2025/01/13 20:08:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:27.939211 amazon-ssm-agent[2090]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:27.939211 amazon-ssm-agent[2090]: 2025/01/13 20:08:27 processing appconfig overrides Jan 13 20:08:27.939211 amazon-ssm-agent[2090]: 2025/01/13 20:08:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:27.939211 amazon-ssm-agent[2090]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:27.939211 amazon-ssm-agent[2090]: 2025/01/13 20:08:27 processing appconfig overrides Jan 13 20:08:27.939211 amazon-ssm-agent[2090]: 2025/01/13 20:08:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:27.939211 amazon-ssm-agent[2090]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:27.939211 amazon-ssm-agent[2090]: 2025/01/13 20:08:27 processing appconfig overrides Jan 13 20:08:27.939211 amazon-ssm-agent[2090]: 2025-01-13 20:08:27 INFO Proxy environment variables: Jan 13 20:08:27.952466 amazon-ssm-agent[2090]: 2025/01/13 20:08:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:27.952466 amazon-ssm-agent[2090]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:08:27.952466 amazon-ssm-agent[2090]: 2025/01/13 20:08:27 processing appconfig overrides Jan 13 20:08:28.019124 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 13 20:08:28.033625 amazon-ssm-agent[2090]: 2025-01-13 20:08:27 INFO https_proxy: Jan 13 20:08:28.132675 amazon-ssm-agent[2090]: 2025-01-13 20:08:27 INFO http_proxy: Jan 13 20:08:28.232408 amazon-ssm-agent[2090]: 2025-01-13 20:08:27 INFO no_proxy: Jan 13 20:08:28.331296 amazon-ssm-agent[2090]: 2025-01-13 20:08:27 INFO Checking if agent identity type OnPrem can be assumed Jan 13 20:08:28.432856 amazon-ssm-agent[2090]: 2025-01-13 20:08:27 INFO Checking if agent identity type EC2 can be assumed Jan 13 20:08:28.532190 amazon-ssm-agent[2090]: 2025-01-13 20:08:28 INFO Agent will take identity from EC2 Jan 13 20:08:28.630868 amazon-ssm-agent[2090]: 2025-01-13 20:08:28 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:08:28.730686 amazon-ssm-agent[2090]: 2025-01-13 20:08:28 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:08:28.830305 amazon-ssm-agent[2090]: 2025-01-13 20:08:28 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:08:28.936649 amazon-ssm-agent[2090]: 2025-01-13 20:08:28 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 20:08:29.003382 tar[1925]: linux-arm64/LICENSE Jan 13 20:08:29.003382 tar[1925]: linux-arm64/README.md Jan 13 20:08:29.037228 amazon-ssm-agent[2090]: 2025-01-13 20:08:28 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 20:08:29.046320 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:08:29.137577 amazon-ssm-agent[2090]: 2025-01-13 20:08:28 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 20:08:29.234276 amazon-ssm-agent[2090]: 2025-01-13 20:08:28 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 20:08:29.234276 amazon-ssm-agent[2090]: 2025-01-13 20:08:28 INFO [Registrar] Starting registrar module Jan 13 20:08:29.234475 amazon-ssm-agent[2090]: 2025-01-13 20:08:28 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 20:08:29.234475 amazon-ssm-agent[2090]: 2025-01-13 20:08:29 INFO [EC2Identity] EC2 registration was successful. Jan 13 20:08:29.234475 amazon-ssm-agent[2090]: 2025-01-13 20:08:29 INFO [CredentialRefresher] credentialRefresher has started Jan 13 20:08:29.234475 amazon-ssm-agent[2090]: 2025-01-13 20:08:29 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 20:08:29.234475 amazon-ssm-agent[2090]: 2025-01-13 20:08:29 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 20:08:29.237713 amazon-ssm-agent[2090]: 2025-01-13 20:08:29 INFO [CredentialRefresher] Next credential rotation will be in 32.06665805356667 minutes Jan 13 20:08:29.418472 sshd_keygen[1940]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:08:29.468831 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:08:29.480911 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:08:29.487979 systemd[1]: Started sshd@0-172.31.22.29:22-147.75.109.163:35396.service - OpenSSH per-connection server daemon (147.75.109.163:35396). Jan 13 20:08:29.499452 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:08:29.501119 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:08:29.511940 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:08:29.553110 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:08:29.567272 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 13 20:08:29.581369 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:08:29.584562 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:08:29.659052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:29.663020 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:08:29.665451 systemd[1]: Startup finished in 1.071s (kernel) + 8.731s (initrd) + 8.200s (userspace) = 18.002s. Jan 13 20:08:29.674084 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:08:29.690415 agetty[2146]: failed to open credentials directory Jan 13 20:08:29.697074 agetty[2145]: failed to open credentials directory Jan 13 20:08:29.814952 sshd[2139]: Accepted publickey for core from 147.75.109.163 port 35396 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:29.820148 sshd-session[2139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:29.835520 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:08:29.845068 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:08:29.854073 systemd-logind[1913]: New session 1 of user core. Jan 13 20:08:29.857140 ntpd[1908]: Listen normally on 6 eth0 [fe80::466:69ff:fe12:a481%2]:123 Jan 13 20:08:29.857597 ntpd[1908]: 13 Jan 20:08:29 ntpd[1908]: Listen normally on 6 eth0 [fe80::466:69ff:fe12:a481%2]:123 Jan 13 20:08:29.870323 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:08:29.880289 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:08:29.903179 (systemd)[2160]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:08:30.135088 systemd[2160]: Queued start job for default target default.target. Jan 13 20:08:30.146814 systemd[2160]: Created slice app.slice - User Application Slice. Jan 13 20:08:30.146879 systemd[2160]: Reached target paths.target - Paths. Jan 13 20:08:30.146911 systemd[2160]: Reached target timers.target - Timers. Jan 13 20:08:30.158020 systemd[2160]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:08:30.173647 systemd[2160]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:08:30.173820 systemd[2160]: Reached target sockets.target - Sockets. Jan 13 20:08:30.173853 systemd[2160]: Reached target basic.target - Basic System. Jan 13 20:08:30.173942 systemd[2160]: Reached target default.target - Main User Target. Jan 13 20:08:30.174004 systemd[2160]: Startup finished in 258ms. Jan 13 20:08:30.174172 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:08:30.180046 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:08:30.264764 amazon-ssm-agent[2090]: 2025-01-13 20:08:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 20:08:30.355330 systemd[1]: Started sshd@1-172.31.22.29:22-147.75.109.163:56690.service - OpenSSH per-connection server daemon (147.75.109.163:56690). 
Jan 13 20:08:30.365647 amazon-ssm-agent[2090]: 2025-01-13 20:08:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2174) started Jan 13 20:08:30.463704 amazon-ssm-agent[2090]: 2025-01-13 20:08:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 20:08:30.598327 sshd[2180]: Accepted publickey for core from 147.75.109.163 port 56690 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:30.601690 sshd-session[2180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:30.610851 systemd-logind[1913]: New session 2 of user core. Jan 13 20:08:30.616158 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:08:30.746276 sshd[2190]: Connection closed by 147.75.109.163 port 56690 Jan 13 20:08:30.746561 sshd-session[2180]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:30.753365 systemd-logind[1913]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:08:30.753783 systemd[1]: sshd@1-172.31.22.29:22-147.75.109.163:56690.service: Deactivated successfully. Jan 13 20:08:30.757687 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:08:30.762256 systemd-logind[1913]: Removed session 2. Jan 13 20:08:30.784511 systemd[1]: Started sshd@2-172.31.22.29:22-147.75.109.163:56692.service - OpenSSH per-connection server daemon (147.75.109.163:56692). Jan 13 20:08:30.882896 kubelet[2153]: E0113 20:08:30.882785 2153 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:08:30.887843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:08:30.888176 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:08:30.888707 systemd[1]: kubelet.service: Consumed 1.327s CPU time. Jan 13 20:08:30.968896 sshd[2195]: Accepted publickey for core from 147.75.109.163 port 56692 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:30.971209 sshd-session[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:30.979882 systemd-logind[1913]: New session 3 of user core. Jan 13 20:08:30.987024 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:08:31.104996 sshd[2198]: Connection closed by 147.75.109.163 port 56692 Jan 13 20:08:31.106477 sshd-session[2195]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:31.112400 systemd[1]: sshd@2-172.31.22.29:22-147.75.109.163:56692.service: Deactivated successfully. Jan 13 20:08:31.116684 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:08:31.118219 systemd-logind[1913]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:08:31.120301 systemd-logind[1913]: Removed session 3. Jan 13 20:08:31.145273 systemd[1]: Started sshd@3-172.31.22.29:22-147.75.109.163:56696.service - OpenSSH per-connection server daemon (147.75.109.163:56696). 
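The kubelet failure above ("failed to load kubelet config file, path: /var/lib/kubelet/config.yaml ... no such file or directory") is expected at this stage: that file is only written once the node is bootstrapped (for example by kubeadm init/join), so systemd keeps restarting kubelet.service until then, as the rising restart counter later in the log shows. The following Python sketch merely polls for the file to make the dependency explicit; the helper name and timings are invented and this is not a fix.

    import os
    import time

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"   # path from the kubelet error above

    def wait_for_kubelet_config(path=KUBELET_CONFIG, interval=10, timeout=600):
        # The kubelet refuses to start until this file exists; node bootstrap
        # (e.g. kubeadm init/join) is what eventually writes it.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if os.path.exists(path):
                return True
            time.sleep(interval)
        return False

    if __name__ == "__main__":
        print("config present" if wait_for_kubelet_config(timeout=30) else "still missing")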
Jan 13 20:08:31.338005 sshd[2203]: Accepted publickey for core from 147.75.109.163 port 56696 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:31.340399 sshd-session[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:31.347545 systemd-logind[1913]: New session 4 of user core. Jan 13 20:08:31.360992 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:08:31.488266 sshd[2205]: Connection closed by 147.75.109.163 port 56696 Jan 13 20:08:31.489092 sshd-session[2203]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:31.493217 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:08:31.495481 systemd[1]: sshd@3-172.31.22.29:22-147.75.109.163:56696.service: Deactivated successfully. Jan 13 20:08:31.498956 systemd-logind[1913]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:08:31.500824 systemd-logind[1913]: Removed session 4. Jan 13 20:08:31.528276 systemd[1]: Started sshd@4-172.31.22.29:22-147.75.109.163:56708.service - OpenSSH per-connection server daemon (147.75.109.163:56708). Jan 13 20:08:31.703061 sshd[2210]: Accepted publickey for core from 147.75.109.163 port 56708 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:31.705886 sshd-session[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:31.714117 systemd-logind[1913]: New session 5 of user core. Jan 13 20:08:31.722990 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:08:31.862008 sudo[2213]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:08:31.863173 sudo[2213]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:08:31.877877 sudo[2213]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:31.901285 sshd[2212]: Connection closed by 147.75.109.163 port 56708 Jan 13 20:08:31.901098 sshd-session[2210]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:31.907111 systemd[1]: sshd@4-172.31.22.29:22-147.75.109.163:56708.service: Deactivated successfully. Jan 13 20:08:31.907485 systemd-logind[1913]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:08:31.911037 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:08:31.914516 systemd-logind[1913]: Removed session 5. Jan 13 20:08:31.934135 systemd[1]: Started sshd@5-172.31.22.29:22-147.75.109.163:56724.service - OpenSSH per-connection server daemon (147.75.109.163:56724). Jan 13 20:08:32.125388 sshd[2218]: Accepted publickey for core from 147.75.109.163 port 56724 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:32.129330 sshd-session[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:32.144078 systemd-logind[1913]: New session 6 of user core. Jan 13 20:08:32.153032 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 13 20:08:32.257017 sudo[2222]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:08:32.257632 sudo[2222]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:08:32.263521 sudo[2222]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:32.273387 sudo[2221]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:08:32.274012 sudo[2221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:08:32.297373 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:08:32.346570 augenrules[2244]: No rules Jan 13 20:08:32.348691 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:08:32.349958 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:08:32.352417 sudo[2221]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:32.375236 sshd[2220]: Connection closed by 147.75.109.163 port 56724 Jan 13 20:08:32.376123 sshd-session[2218]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:32.382332 systemd-logind[1913]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:08:32.382894 systemd[1]: sshd@5-172.31.22.29:22-147.75.109.163:56724.service: Deactivated successfully. Jan 13 20:08:32.386327 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:08:32.389368 systemd-logind[1913]: Removed session 6. Jan 13 20:08:32.418249 systemd[1]: Started sshd@6-172.31.22.29:22-147.75.109.163:56740.service - OpenSSH per-connection server daemon (147.75.109.163:56740). Jan 13 20:08:32.608232 sshd[2252]: Accepted publickey for core from 147.75.109.163 port 56740 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:08:32.611069 sshd-session[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:32.619332 systemd-logind[1913]: New session 7 of user core. Jan 13 20:08:32.624007 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:08:32.726986 sudo[2255]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:08:32.727669 sudo[2255]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:08:33.460605 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:08:33.474236 (dockerd)[2273]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:08:33.606905 systemd-resolved[1850]: Clock change detected. Flushing caches. Jan 13 20:08:33.687557 dockerd[2273]: time="2025-01-13T20:08:33.687460744Z" level=info msg="Starting up" Jan 13 20:08:33.983058 dockerd[2273]: time="2025-01-13T20:08:33.982482617Z" level=info msg="Loading containers: start." Jan 13 20:08:34.253462 kernel: Initializing XFRM netlink socket Jan 13 20:08:34.285991 (udev-worker)[2297]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:08:34.376035 systemd-networkd[1849]: docker0: Link UP Jan 13 20:08:34.416582 dockerd[2273]: time="2025-01-13T20:08:34.416511064Z" level=info msg="Loading containers: done." 
Jan 13 20:08:34.445578 dockerd[2273]: time="2025-01-13T20:08:34.445505020Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:08:34.445792 dockerd[2273]: time="2025-01-13T20:08:34.445641376Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 13 20:08:34.445901 dockerd[2273]: time="2025-01-13T20:08:34.445865632Z" level=info msg="Daemon has completed initialization" Jan 13 20:08:34.496677 dockerd[2273]: time="2025-01-13T20:08:34.495978772Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:08:34.496439 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:08:34.845606 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck540000497-merged.mount: Deactivated successfully. Jan 13 20:08:35.988662 containerd[1931]: time="2025-01-13T20:08:35.988107643Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:08:36.606312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398355714.mount: Deactivated successfully. Jan 13 20:08:38.098254 containerd[1931]: time="2025-01-13T20:08:38.097416870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:38.099674 containerd[1931]: time="2025-01-13T20:08:38.099591198Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Jan 13 20:08:38.101314 containerd[1931]: time="2025-01-13T20:08:38.101238534Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:38.106857 containerd[1931]: time="2025-01-13T20:08:38.106759902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:38.109653 containerd[1931]: time="2025-01-13T20:08:38.109373202Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.121184091s" Jan 13 20:08:38.109653 containerd[1931]: time="2025-01-13T20:08:38.109435542Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 13 20:08:38.149820 containerd[1931]: time="2025-01-13T20:08:38.149761458Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:08:39.824593 containerd[1931]: time="2025-01-13T20:08:39.824526694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:39.826619 containerd[1931]: time="2025-01-13T20:08:39.826553254Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Jan 13 20:08:39.827397 containerd[1931]: 
time="2025-01-13T20:08:39.827059030Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:39.832521 containerd[1931]: time="2025-01-13T20:08:39.832424050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:39.834978 containerd[1931]: time="2025-01-13T20:08:39.834784306Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.684965092s" Jan 13 20:08:39.834978 containerd[1931]: time="2025-01-13T20:08:39.834836710Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Jan 13 20:08:39.879522 containerd[1931]: time="2025-01-13T20:08:39.879458387Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:08:40.888163 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:08:40.896728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:41.152311 containerd[1931]: time="2025-01-13T20:08:41.152148189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:41.159981 containerd[1931]: time="2025-01-13T20:08:41.159871377Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Jan 13 20:08:41.172315 containerd[1931]: time="2025-01-13T20:08:41.171698253Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:41.187717 containerd[1931]: time="2025-01-13T20:08:41.187628769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:41.191284 containerd[1931]: time="2025-01-13T20:08:41.190146117Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.310624646s" Jan 13 20:08:41.191284 containerd[1931]: time="2025-01-13T20:08:41.190232121Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 13 20:08:41.242742 containerd[1931]: time="2025-01-13T20:08:41.242697249Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:08:41.318676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:08:41.331070 (kubelet)[2555]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:08:41.426379 kubelet[2555]: E0113 20:08:41.425695 2555 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:08:41.434249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:08:41.434669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:08:42.513489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4180133545.mount: Deactivated successfully. Jan 13 20:08:42.963466 containerd[1931]: time="2025-01-13T20:08:42.963275438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:42.964901 containerd[1931]: time="2025-01-13T20:08:42.964812590Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Jan 13 20:08:42.966024 containerd[1931]: time="2025-01-13T20:08:42.965955758Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:42.969634 containerd[1931]: time="2025-01-13T20:08:42.969538586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:42.971215 containerd[1931]: time="2025-01-13T20:08:42.971013506Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.728012993s" Jan 13 20:08:42.971215 containerd[1931]: time="2025-01-13T20:08:42.971060738Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 20:08:43.014904 containerd[1931]: time="2025-01-13T20:08:43.014834026Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:08:43.523695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2287646652.mount: Deactivated successfully. 
Jan 13 20:08:44.583609 containerd[1931]: time="2025-01-13T20:08:44.583534646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:44.588287 containerd[1931]: time="2025-01-13T20:08:44.588207518Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 13 20:08:44.592616 containerd[1931]: time="2025-01-13T20:08:44.592512674Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:44.599208 containerd[1931]: time="2025-01-13T20:08:44.599125010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:44.601645 containerd[1931]: time="2025-01-13T20:08:44.601592894Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.586691092s" Jan 13 20:08:44.601984 containerd[1931]: time="2025-01-13T20:08:44.601789562Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:08:44.668398 containerd[1931]: time="2025-01-13T20:08:44.668241854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:08:45.164687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2616837799.mount: Deactivated successfully. 
Jan 13 20:08:45.172059 containerd[1931]: time="2025-01-13T20:08:45.171732013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:45.173007 containerd[1931]: time="2025-01-13T20:08:45.172936429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 13 20:08:45.173940 containerd[1931]: time="2025-01-13T20:08:45.173840881Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:45.178904 containerd[1931]: time="2025-01-13T20:08:45.178825477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:45.180773 containerd[1931]: time="2025-01-13T20:08:45.180584425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 512.284311ms" Jan 13 20:08:45.180773 containerd[1931]: time="2025-01-13T20:08:45.180635857Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 20:08:45.219585 containerd[1931]: time="2025-01-13T20:08:45.219518785Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:08:45.770496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76849449.mount: Deactivated successfully. Jan 13 20:08:47.692782 containerd[1931]: time="2025-01-13T20:08:47.692718629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:47.697570 containerd[1931]: time="2025-01-13T20:08:47.697518341Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jan 13 20:08:47.706262 containerd[1931]: time="2025-01-13T20:08:47.706191462Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:47.717803 containerd[1931]: time="2025-01-13T20:08:47.717717114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:47.721613 containerd[1931]: time="2025-01-13T20:08:47.721557078Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.501976541s" Jan 13 20:08:47.721805 containerd[1931]: time="2025-01-13T20:08:47.721774794Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 13 20:08:51.685007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
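Each containerd "Pulled image ..." entry above records the image size in bytes and the wall-clock pull duration, so an approximate pull rate can be read straight off the log. The Python sketch below is purely a post-hoc helper, not part of the system being logged; it extracts those figures from a saved journal excerpt passed as a file argument or on stdin.

    import re
    import sys

    # Matches containerd completion entries like:
    #   Pulled image \"registry.k8s.io/etcd:3.5.10-0\" ... size \"65198393\" in 2.501976541s
    PULLED = re.compile(
        r'Pulled image \\?"([^"\\]+)\\?"'   # image reference
        r'.*?size \\?"(\d+)\\?"'            # size in bytes
        r' in ([\d.]+)(ms|s)'               # wall-clock pull duration
    )

    def pull_rates(journal_text):
        # Yield (image, MiB, seconds, MiB/s) for every completed pull found.
        for image, size, dur, unit in PULLED.findall(journal_text):
            seconds = float(dur) / 1000.0 if unit == "ms" else float(dur)
            mib = int(size) / (1024 * 1024)
            yield image, mib, seconds, (mib / seconds if seconds else 0.0)

    if __name__ == "__main__":
        text = open(sys.argv[1]).read() if len(sys.argv) > 1 else sys.stdin.read()
        for image, mib, secs, rate in pull_rates(text):
            print(f"{image}: {mib:.1f} MiB in {secs:.2f}s (~{rate:.1f} MiB/s)")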
Jan 13 20:08:51.694788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:52.008743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:52.019073 (kubelet)[2745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:08:52.105709 kubelet[2745]: E0113 20:08:52.105496 2745 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:08:52.110762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:08:52.111249 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:08:53.811726 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:53.820906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:53.862809 systemd[1]: Reloading requested from client PID 2760 ('systemctl') (unit session-7.scope)... Jan 13 20:08:53.862835 systemd[1]: Reloading... Jan 13 20:08:54.069398 zram_generator::config[2801]: No configuration found. Jan 13 20:08:54.329264 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:08:54.499731 systemd[1]: Reloading finished in 636 ms. Jan 13 20:08:54.583345 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:08:54.583811 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:08:54.584411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:54.591030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:08:54.870521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:08:54.886158 (kubelet)[2864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:08:54.968108 kubelet[2864]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:08:54.968747 kubelet[2864]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:08:54.968837 kubelet[2864]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:08:54.969097 kubelet[2864]: I0113 20:08:54.969039 2864 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:08:55.832726 kubelet[2864]: I0113 20:08:55.832623 2864 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:08:55.832726 kubelet[2864]: I0113 20:08:55.832667 2864 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:08:55.833049 kubelet[2864]: I0113 20:08:55.833005 2864 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:08:55.864471 kubelet[2864]: I0113 20:08:55.864131 2864 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:08:55.864731 kubelet[2864]: E0113 20:08:55.864706 2864 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.22.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:55.878857 kubelet[2864]: I0113 20:08:55.878808 2864 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:08:55.882414 kubelet[2864]: I0113 20:08:55.881713 2864 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:08:55.882414 kubelet[2864]: I0113 20:08:55.882309 2864 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:08:55.882765 kubelet[2864]: I0113 20:08:55.882729 2864 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:08:55.882890 kubelet[2864]: I0113 20:08:55.882868 2864 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:08:55.883191 kubelet[2864]: I0113 20:08:55.883166 2864 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:08:55.888248 kubelet[2864]: I0113 20:08:55.888185 2864 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:08:55.888248 
kubelet[2864]: I0113 20:08:55.888251 2864 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:08:55.888509 kubelet[2864]: I0113 20:08:55.888296 2864 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:08:55.888509 kubelet[2864]: I0113 20:08:55.888332 2864 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:08:55.892926 kubelet[2864]: W0113 20:08:55.892302 2864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.22.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:55.892926 kubelet[2864]: E0113 20:08:55.892422 2864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:55.892926 kubelet[2864]: W0113 20:08:55.892827 2864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.22.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-29&limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:55.892926 kubelet[2864]: E0113 20:08:55.892888 2864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-29&limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:55.894409 kubelet[2864]: I0113 20:08:55.893808 2864 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:08:55.894409 kubelet[2864]: I0113 20:08:55.894333 2864 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:08:55.894687 kubelet[2864]: W0113 20:08:55.894665 2864 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
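The repeated reflector warnings are the kubelet's informers trying to list Services and Nodes from the API server at 172.31.22.29:6443 before the static kube-apiserver pod has come up, so every attempt ends in "connection refused". A minimal client-go sketch of the same list call, assuming the kubelet's kubeconfig lives at /etc/kubernetes/kubelet.conf (an assumption; the exact path is not shown in the log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Same request shape as the reflector: list Node objects filtered to this node.
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
            FieldSelector: "metadata.name=ip-172-31-22-29",
            Limit:         500,
        })
        if err != nil {
            // While the static kube-apiserver pod is still starting, this fails
            // with "connect: connection refused", exactly as in the log.
            fmt.Println("list nodes:", err)
            return
        }
        fmt.Println("nodes:", len(nodes.Items))
    }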
Jan 13 20:08:55.895835 kubelet[2864]: I0113 20:08:55.895798 2864 server.go:1256] "Started kubelet" Jan 13 20:08:55.900490 kubelet[2864]: I0113 20:08:55.900439 2864 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:08:55.901665 kubelet[2864]: I0113 20:08:55.901626 2864 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:08:55.902403 kubelet[2864]: I0113 20:08:55.901878 2864 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:08:55.902403 kubelet[2864]: I0113 20:08:55.902270 2864 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:08:55.907689 kubelet[2864]: E0113 20:08:55.907640 2864 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.29:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-29.181a596e2457ee7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-29,UID:ip-172-31-22-29,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-29,},FirstTimestamp:2025-01-13 20:08:55.895764602 +0000 UTC m=+1.001672322,LastTimestamp:2025-01-13 20:08:55.895764602 +0000 UTC m=+1.001672322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-29,}" Jan 13 20:08:55.911152 kubelet[2864]: I0113 20:08:55.911113 2864 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:08:55.911526 kubelet[2864]: I0113 20:08:55.911490 2864 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:08:55.916409 kubelet[2864]: I0113 20:08:55.916040 2864 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:08:55.916409 kubelet[2864]: I0113 20:08:55.916150 2864 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:08:55.917088 kubelet[2864]: W0113 20:08:55.917019 2864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.22.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:55.917277 kubelet[2864]: E0113 20:08:55.917245 2864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:55.918504 kubelet[2864]: E0113 20:08:55.918473 2864 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-22-29\" not found" Jan 13 20:08:55.919767 kubelet[2864]: E0113 20:08:55.919123 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-29?timeout=10s\": dial tcp 172.31.22.29:6443: connect: connection refused" interval="200ms" Jan 13 20:08:55.920276 kubelet[2864]: E0113 20:08:55.920246 2864 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:08:55.921110 kubelet[2864]: I0113 20:08:55.920907 2864 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:08:55.921540 kubelet[2864]: I0113 20:08:55.921490 2864 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:08:55.924034 kubelet[2864]: I0113 20:08:55.923997 2864 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:08:55.954397 kubelet[2864]: I0113 20:08:55.954034 2864 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:08:55.956929 kubelet[2864]: I0113 20:08:55.956895 2864 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:08:55.957704 kubelet[2864]: I0113 20:08:55.957116 2864 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:08:55.957704 kubelet[2864]: I0113 20:08:55.957159 2864 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:08:55.957704 kubelet[2864]: E0113 20:08:55.957230 2864 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:08:55.965437 kubelet[2864]: W0113 20:08:55.965312 2864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.22.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:55.965667 kubelet[2864]: E0113 20:08:55.965640 2864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:55.967332 kubelet[2864]: I0113 20:08:55.967299 2864 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:08:55.967937 kubelet[2864]: I0113 20:08:55.967569 2864 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:08:55.967937 kubelet[2864]: I0113 20:08:55.967608 2864 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:08:55.970194 kubelet[2864]: I0113 20:08:55.970033 2864 policy_none.go:49] "None policy: Start" Jan 13 20:08:55.971920 kubelet[2864]: I0113 20:08:55.971424 2864 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:08:55.971920 kubelet[2864]: I0113 20:08:55.971489 2864 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:08:55.981467 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:08:55.997559 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:08:56.004160 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 20:08:56.014045 kubelet[2864]: I0113 20:08:56.013920 2864 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:08:56.015814 kubelet[2864]: I0113 20:08:56.015783 2864 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:08:56.021560 kubelet[2864]: I0113 20:08:56.021525 2864 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-29" Jan 13 20:08:56.025379 kubelet[2864]: E0113 20:08:56.025294 2864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.29:6443/api/v1/nodes\": dial tcp 172.31.22.29:6443: connect: connection refused" node="ip-172-31-22-29" Jan 13 20:08:56.025926 kubelet[2864]: E0113 20:08:56.025886 2864 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-29\" not found" Jan 13 20:08:56.058065 kubelet[2864]: I0113 20:08:56.058025 2864 topology_manager.go:215] "Topology Admit Handler" podUID="1ae021de4163d25e49f328188d77f62c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-29" Jan 13 20:08:56.060237 kubelet[2864]: I0113 20:08:56.060179 2864 topology_manager.go:215] "Topology Admit Handler" podUID="ad6e4ebcf97d9bc668879a19c08787dd" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-29" Jan 13 20:08:56.062570 kubelet[2864]: I0113 20:08:56.062183 2864 topology_manager.go:215] "Topology Admit Handler" podUID="3d6ca76261fbb72f03c8846f20633886" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-29" Jan 13 20:08:56.074065 systemd[1]: Created slice kubepods-burstable-pod1ae021de4163d25e49f328188d77f62c.slice - libcontainer container kubepods-burstable-pod1ae021de4163d25e49f328188d77f62c.slice. Jan 13 20:08:56.096143 systemd[1]: Created slice kubepods-burstable-podad6e4ebcf97d9bc668879a19c08787dd.slice - libcontainer container kubepods-burstable-podad6e4ebcf97d9bc668879a19c08787dd.slice. Jan 13 20:08:56.110328 systemd[1]: Created slice kubepods-burstable-pod3d6ca76261fbb72f03c8846f20633886.slice - libcontainer container kubepods-burstable-pod3d6ca76261fbb72f03c8846f20633886.slice. 
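The three "Topology Admit Handler" entries are the static control-plane pods read from /etc/kubernetes/manifests, and the kubepods-burstable-pod<UID>.slice units are the cgroups the systemd driver creates for them (dashes in the pod UID become underscores in the slice name). A small sketch, assuming a kubeadm-style manifest filename and that the pod lands in the Burstable QoS class, of decoding one manifest and deriving that slice name:

    package main

    import (
        "fmt"
        "os"
        "strings"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        // Manifest path is an assumption; the log only shows the directory.
        raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-apiserver.yaml")
        if err != nil {
            panic(err)
        }
        var pod corev1.Pod
        if err := yaml.Unmarshal(raw, &pod); err != nil {
            panic(err)
        }

        // For static pods the kubelet derives the UID from the manifest; the
        // value below is the one reported in the log for kube-apiserver.
        uid := "1ae021de4163d25e49f328188d77f62c"
        slice := fmt.Sprintf("kubepods-burstable-pod%s.slice", strings.ReplaceAll(uid, "-", "_"))
        fmt.Println(pod.Name, "->", slice)
    }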
Jan 13 20:08:56.120382 kubelet[2864]: E0113 20:08:56.120328 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-29?timeout=10s\": dial tcp 172.31.22.29:6443: connect: connection refused" interval="400ms" Jan 13 20:08:56.217757 kubelet[2864]: I0113 20:08:56.217709 2864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ae021de4163d25e49f328188d77f62c-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-29\" (UID: \"1ae021de4163d25e49f328188d77f62c\") " pod="kube-system/kube-apiserver-ip-172-31-22-29" Jan 13 20:08:56.217869 kubelet[2864]: I0113 20:08:56.217777 2864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad6e4ebcf97d9bc668879a19c08787dd-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-29\" (UID: \"ad6e4ebcf97d9bc668879a19c08787dd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-29" Jan 13 20:08:56.217869 kubelet[2864]: I0113 20:08:56.217825 2864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad6e4ebcf97d9bc668879a19c08787dd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-29\" (UID: \"ad6e4ebcf97d9bc668879a19c08787dd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-29" Jan 13 20:08:56.217972 kubelet[2864]: I0113 20:08:56.217872 2864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad6e4ebcf97d9bc668879a19c08787dd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-29\" (UID: \"ad6e4ebcf97d9bc668879a19c08787dd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-29" Jan 13 20:08:56.217972 kubelet[2864]: I0113 20:08:56.217916 2864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d6ca76261fbb72f03c8846f20633886-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-29\" (UID: \"3d6ca76261fbb72f03c8846f20633886\") " pod="kube-system/kube-scheduler-ip-172-31-22-29" Jan 13 20:08:56.217972 kubelet[2864]: I0113 20:08:56.217961 2864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ae021de4163d25e49f328188d77f62c-ca-certs\") pod \"kube-apiserver-ip-172-31-22-29\" (UID: \"1ae021de4163d25e49f328188d77f62c\") " pod="kube-system/kube-apiserver-ip-172-31-22-29" Jan 13 20:08:56.218123 kubelet[2864]: I0113 20:08:56.218005 2864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ae021de4163d25e49f328188d77f62c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-29\" (UID: \"1ae021de4163d25e49f328188d77f62c\") " pod="kube-system/kube-apiserver-ip-172-31-22-29" Jan 13 20:08:56.218123 kubelet[2864]: I0113 20:08:56.218069 2864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad6e4ebcf97d9bc668879a19c08787dd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-29\" (UID: \"ad6e4ebcf97d9bc668879a19c08787dd\") " 
pod="kube-system/kube-controller-manager-ip-172-31-22-29" Jan 13 20:08:56.218123 kubelet[2864]: I0113 20:08:56.218120 2864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad6e4ebcf97d9bc668879a19c08787dd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-29\" (UID: \"ad6e4ebcf97d9bc668879a19c08787dd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-29" Jan 13 20:08:56.227555 kubelet[2864]: I0113 20:08:56.227499 2864 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-29" Jan 13 20:08:56.228109 kubelet[2864]: E0113 20:08:56.228079 2864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.29:6443/api/v1/nodes\": dial tcp 172.31.22.29:6443: connect: connection refused" node="ip-172-31-22-29" Jan 13 20:08:56.391787 containerd[1931]: time="2025-01-13T20:08:56.391626313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-29,Uid:1ae021de4163d25e49f328188d77f62c,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:56.405728 containerd[1931]: time="2025-01-13T20:08:56.405627925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-29,Uid:ad6e4ebcf97d9bc668879a19c08787dd,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:56.417153 containerd[1931]: time="2025-01-13T20:08:56.416743393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-29,Uid:3d6ca76261fbb72f03c8846f20633886,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:56.521532 kubelet[2864]: E0113 20:08:56.521473 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-29?timeout=10s\": dial tcp 172.31.22.29:6443: connect: connection refused" interval="800ms" Jan 13 20:08:56.630249 kubelet[2864]: I0113 20:08:56.630200 2864 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-29" Jan 13 20:08:56.632095 kubelet[2864]: E0113 20:08:56.632056 2864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.29:6443/api/v1/nodes\": dial tcp 172.31.22.29:6443: connect: connection refused" node="ip-172-31-22-29" Jan 13 20:08:56.864864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3414776806.mount: Deactivated successfully. 
Jan 13 20:08:56.873341 containerd[1931]: time="2025-01-13T20:08:56.873277971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:56.875279 containerd[1931]: time="2025-01-13T20:08:56.875222559Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:56.877231 containerd[1931]: time="2025-01-13T20:08:56.877141119Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 13 20:08:56.878097 containerd[1931]: time="2025-01-13T20:08:56.878036703Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:08:56.880693 containerd[1931]: time="2025-01-13T20:08:56.880510539Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:56.881894 containerd[1931]: time="2025-01-13T20:08:56.881737779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:08:56.885816 containerd[1931]: time="2025-01-13T20:08:56.885768999Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:56.890930 containerd[1931]: time="2025-01-13T20:08:56.890626371Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 484.893998ms" Jan 13 20:08:56.892655 containerd[1931]: time="2025-01-13T20:08:56.892596507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:08:56.896616 containerd[1931]: time="2025-01-13T20:08:56.896539911Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 504.806798ms" Jan 13 20:08:56.906711 containerd[1931]: time="2025-01-13T20:08:56.906652203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 489.796262ms" Jan 13 20:08:56.934106 kubelet[2864]: W0113 20:08:56.933947 2864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.22.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:56.934106 kubelet[2864]: E0113 
20:08:56.934055 2864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:56.967578 kubelet[2864]: W0113 20:08:56.967497 2864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.22.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-29&limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:56.967726 kubelet[2864]: E0113 20:08:56.967586 2864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-29&limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:57.153930 containerd[1931]: time="2025-01-13T20:08:57.152963616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:57.153930 containerd[1931]: time="2025-01-13T20:08:57.153108624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:57.153930 containerd[1931]: time="2025-01-13T20:08:57.153145224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:57.158296 containerd[1931]: time="2025-01-13T20:08:57.158012388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:57.159216 containerd[1931]: time="2025-01-13T20:08:57.159070740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:57.159309 containerd[1931]: time="2025-01-13T20:08:57.159166476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:57.161387 containerd[1931]: time="2025-01-13T20:08:57.159966804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:57.161387 containerd[1931]: time="2025-01-13T20:08:57.160045752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:57.161387 containerd[1931]: time="2025-01-13T20:08:57.160070832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:57.161387 containerd[1931]: time="2025-01-13T20:08:57.160194996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:57.161820 containerd[1931]: time="2025-01-13T20:08:57.161716896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:57.162128 containerd[1931]: time="2025-01-13T20:08:57.162048637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:57.216688 systemd[1]: Started cri-containerd-04ed7e709749ae18d00545f76d19fccfb067befb039cedceb4c6a5563b6a1893.scope - libcontainer container 04ed7e709749ae18d00545f76d19fccfb067befb039cedceb4c6a5563b6a1893. Jan 13 20:08:57.220491 systemd[1]: Started cri-containerd-b0d74946123614d3a8f45e3948d41ee876417426de3706f65d0db15f9c6f450b.scope - libcontainer container b0d74946123614d3a8f45e3948d41ee876417426de3706f65d0db15f9c6f450b. Jan 13 20:08:57.228691 systemd[1]: Started cri-containerd-c2ac76665d25e0754ecf8cab00dd05c5a0b25a3d4c0ed22f4606759ff8f260c8.scope - libcontainer container c2ac76665d25e0754ecf8cab00dd05c5a0b25a3d4c0ed22f4606759ff8f260c8. Jan 13 20:08:57.322313 kubelet[2864]: E0113 20:08:57.322179 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-29?timeout=10s\": dial tcp 172.31.22.29:6443: connect: connection refused" interval="1.6s" Jan 13 20:08:57.327492 containerd[1931]: time="2025-01-13T20:08:57.326843485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-29,Uid:1ae021de4163d25e49f328188d77f62c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0d74946123614d3a8f45e3948d41ee876417426de3706f65d0db15f9c6f450b\"" Jan 13 20:08:57.334267 kubelet[2864]: W0113 20:08:57.334022 2864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.22.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:57.334664 kubelet[2864]: E0113 20:08:57.334409 2864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:57.344378 containerd[1931]: time="2025-01-13T20:08:57.343753033Z" level=info msg="CreateContainer within sandbox \"b0d74946123614d3a8f45e3948d41ee876417426de3706f65d0db15f9c6f450b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:08:57.348080 containerd[1931]: time="2025-01-13T20:08:57.348017161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-29,Uid:ad6e4ebcf97d9bc668879a19c08787dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"04ed7e709749ae18d00545f76d19fccfb067befb039cedceb4c6a5563b6a1893\"" Jan 13 20:08:57.357753 containerd[1931]: time="2025-01-13T20:08:57.357451357Z" level=info msg="CreateContainer within sandbox \"04ed7e709749ae18d00545f76d19fccfb067befb039cedceb4c6a5563b6a1893\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:08:57.376385 containerd[1931]: time="2025-01-13T20:08:57.375746846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-29,Uid:3d6ca76261fbb72f03c8846f20633886,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2ac76665d25e0754ecf8cab00dd05c5a0b25a3d4c0ed22f4606759ff8f260c8\"" Jan 13 20:08:57.378754 containerd[1931]: time="2025-01-13T20:08:57.378467366Z" level=info msg="CreateContainer within sandbox \"b0d74946123614d3a8f45e3948d41ee876417426de3706f65d0db15f9c6f450b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"2ff5b18faae49f09539a9f9afd0bdf54481651f9075b1c28076f73b4d8153eb0\"" Jan 13 20:08:57.381476 containerd[1931]: time="2025-01-13T20:08:57.381212378Z" level=info msg="StartContainer for \"2ff5b18faae49f09539a9f9afd0bdf54481651f9075b1c28076f73b4d8153eb0\"" Jan 13 20:08:57.383551 containerd[1931]: time="2025-01-13T20:08:57.383255246Z" level=info msg="CreateContainer within sandbox \"c2ac76665d25e0754ecf8cab00dd05c5a0b25a3d4c0ed22f4606759ff8f260c8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:08:57.395390 containerd[1931]: time="2025-01-13T20:08:57.394872218Z" level=info msg="CreateContainer within sandbox \"04ed7e709749ae18d00545f76d19fccfb067befb039cedceb4c6a5563b6a1893\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cee14136e10d412d47481aae739939e1a3858227a750478035f9736247c48b48\"" Jan 13 20:08:57.399691 containerd[1931]: time="2025-01-13T20:08:57.399231998Z" level=info msg="StartContainer for \"cee14136e10d412d47481aae739939e1a3858227a750478035f9736247c48b48\"" Jan 13 20:08:57.420841 containerd[1931]: time="2025-01-13T20:08:57.420621926Z" level=info msg="CreateContainer within sandbox \"c2ac76665d25e0754ecf8cab00dd05c5a0b25a3d4c0ed22f4606759ff8f260c8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6b419348d59b688693ca3691b7a8ec60837015700a4d21316d21c476d51ad3d7\"" Jan 13 20:08:57.423880 containerd[1931]: time="2025-01-13T20:08:57.423833810Z" level=info msg="StartContainer for \"6b419348d59b688693ca3691b7a8ec60837015700a4d21316d21c476d51ad3d7\"" Jan 13 20:08:57.428303 kubelet[2864]: W0113 20:08:57.428207 2864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.22.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:57.428546 kubelet[2864]: E0113 20:08:57.428510 2864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.29:6443: connect: connection refused Jan 13 20:08:57.437861 kubelet[2864]: I0113 20:08:57.436326 2864 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-29" Jan 13 20:08:57.437861 kubelet[2864]: E0113 20:08:57.436823 2864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.29:6443/api/v1/nodes\": dial tcp 172.31.22.29:6443: connect: connection refused" node="ip-172-31-22-29" Jan 13 20:08:57.459695 systemd[1]: Started cri-containerd-2ff5b18faae49f09539a9f9afd0bdf54481651f9075b1c28076f73b4d8153eb0.scope - libcontainer container 2ff5b18faae49f09539a9f9afd0bdf54481651f9075b1c28076f73b4d8153eb0. Jan 13 20:08:57.489903 systemd[1]: Started cri-containerd-cee14136e10d412d47481aae739939e1a3858227a750478035f9736247c48b48.scope - libcontainer container cee14136e10d412d47481aae739939e1a3858227a750478035f9736247c48b48. Jan 13 20:08:57.523662 systemd[1]: Started cri-containerd-6b419348d59b688693ca3691b7a8ec60837015700a4d21316d21c476d51ad3d7.scope - libcontainer container 6b419348d59b688693ca3691b7a8ec60837015700a4d21316d21c476d51ad3d7. 
Jan 13 20:08:57.580041 containerd[1931]: time="2025-01-13T20:08:57.578876835Z" level=info msg="StartContainer for \"2ff5b18faae49f09539a9f9afd0bdf54481651f9075b1c28076f73b4d8153eb0\" returns successfully" Jan 13 20:08:57.579623 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 20:08:57.664833 containerd[1931]: time="2025-01-13T20:08:57.663742383Z" level=info msg="StartContainer for \"cee14136e10d412d47481aae739939e1a3858227a750478035f9736247c48b48\" returns successfully" Jan 13 20:08:57.674038 containerd[1931]: time="2025-01-13T20:08:57.673122351Z" level=info msg="StartContainer for \"6b419348d59b688693ca3691b7a8ec60837015700a4d21316d21c476d51ad3d7\" returns successfully" Jan 13 20:08:59.040384 kubelet[2864]: I0113 20:08:59.039328 2864 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-29" Jan 13 20:09:01.845550 kubelet[2864]: E0113 20:09:01.845486 2864 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-22-29.181a596e2457ee7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-29,UID:ip-172-31-22-29,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-29,},FirstTimestamp:2025-01-13 20:08:55.895764602 +0000 UTC m=+1.001672322,LastTimestamp:2025-01-13 20:08:55.895764602 +0000 UTC m=+1.001672322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-29,}" Jan 13 20:09:01.862049 kubelet[2864]: I0113 20:09:01.862006 2864 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-22-29" Jan 13 20:09:01.894501 kubelet[2864]: I0113 20:09:01.894449 2864 apiserver.go:52] "Watching apiserver" Jan 13 20:09:01.991390 kubelet[2864]: E0113 20:09:01.991333 2864 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-22-29.181a596e25cd093a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-29,UID:ip-172-31-22-29,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-22-29,},FirstTimestamp:2025-01-13 20:08:55.920216378 +0000 UTC m=+1.026124110,LastTimestamp:2025-01-13 20:08:55.920216378 +0000 UTC m=+1.026124110,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-29,}" Jan 13 20:09:02.008048 kubelet[2864]: E0113 20:09:02.007972 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 13 20:09:02.017302 kubelet[2864]: I0113 20:09:02.017183 2864 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:09:02.250618 kubelet[2864]: E0113 20:09:02.249656 2864 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-22-29\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-22-29" Jan 13 20:09:04.592896 systemd[1]: Reloading requested from client PID 3144 ('systemctl') (unit session-7.scope)... 
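Node registration only succeeds once the API server answers; until then the node-lease controller keeps retrying, and immediately after registration it still fails because the kube-node-lease namespace has not been created yet. Roughly, ensuring that lease is a get-or-create against coordination.k8s.io, sketched below with client construction elided and the 40-second lease duration assumed (it is the kubelet default, not something shown in the log):

    package main

    import (
        "context"

        coordinationv1 "k8s.io/api/coordination/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func ensureLease(ctx context.Context, c kubernetes.Interface, node string) error {
        leases := c.CoordinationV1().Leases("kube-node-lease")
        if _, err := leases.Get(ctx, node, metav1.GetOptions{}); err == nil {
            return nil
        } else if !apierrors.IsNotFound(err) {
            // e.g. "connection refused" or "namespace not found", as in the log.
            return err
        }

        holder := node
        duration := int32(40)
        lease := &coordinationv1.Lease{
            ObjectMeta: metav1.ObjectMeta{Name: node, Namespace: "kube-node-lease"},
            Spec: coordinationv1.LeaseSpec{
                HolderIdentity:       &holder,
                LeaseDurationSeconds: &duration,
            },
        }
        _, err := leases.Create(ctx, lease, metav1.CreateOptions{})
        return err
    }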
Jan 13 20:09:04.593450 systemd[1]: Reloading... Jan 13 20:09:04.770398 zram_generator::config[3188]: No configuration found. Jan 13 20:09:04.983430 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:09:05.205115 systemd[1]: Reloading finished in 610 ms. Jan 13 20:09:05.290106 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:05.305147 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:09:05.305634 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:05.305725 systemd[1]: kubelet.service: Consumed 1.711s CPU time, 112.0M memory peak, 0B memory swap peak. Jan 13 20:09:05.312903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:09:05.638673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:09:05.652032 (kubelet)[3244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:09:05.764460 kubelet[3244]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:09:05.764460 kubelet[3244]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:09:05.764460 kubelet[3244]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:09:05.764460 kubelet[3244]: I0113 20:09:05.764238 3244 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:09:05.773749 kubelet[3244]: I0113 20:09:05.773687 3244 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:09:05.773749 kubelet[3244]: I0113 20:09:05.773742 3244 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:09:05.776554 kubelet[3244]: I0113 20:09:05.774124 3244 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:09:05.777642 kubelet[3244]: I0113 20:09:05.777587 3244 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:09:05.781539 kubelet[3244]: I0113 20:09:05.781481 3244 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:09:05.783058 sudo[3258]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:09:05.784613 sudo[3258]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:09:05.800297 kubelet[3244]: I0113 20:09:05.800247 3244 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:09:05.800730 kubelet[3244]: I0113 20:09:05.800694 3244 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:09:05.801050 kubelet[3244]: I0113 20:09:05.801003 3244 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:09:05.801193 kubelet[3244]: I0113 20:09:05.801059 3244 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:09:05.801193 kubelet[3244]: I0113 20:09:05.801081 3244 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:09:05.801193 kubelet[3244]: I0113 20:09:05.801139 3244 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:09:05.801404 kubelet[3244]: I0113 20:09:05.801316 3244 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:09:05.801404 kubelet[3244]: I0113 20:09:05.801344 3244 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:09:05.801526 kubelet[3244]: I0113 20:09:05.801433 3244 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:09:05.801526 kubelet[3244]: I0113 20:09:05.801468 3244 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:09:05.802944 kubelet[3244]: I0113 20:09:05.802880 3244 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:09:05.807661 kubelet[3244]: I0113 20:09:05.803256 3244 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:09:05.807661 kubelet[3244]: I0113 20:09:05.803966 3244 server.go:1256] "Started kubelet" Jan 13 20:09:05.807661 kubelet[3244]: I0113 20:09:05.807541 3244 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:09:05.821007 kubelet[3244]: I0113 20:09:05.819951 3244 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:09:05.822201 kubelet[3244]: I0113 20:09:05.821673 3244 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:09:05.824324 kubelet[3244]: I0113 20:09:05.823653 3244 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jan 13 20:09:05.824324 kubelet[3244]: I0113 20:09:05.824009 3244 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:09:05.828972 kubelet[3244]: I0113 20:09:05.828329 3244 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:09:05.833977 kubelet[3244]: I0113 20:09:05.831721 3244 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:09:05.833977 kubelet[3244]: I0113 20:09:05.831996 3244 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:09:05.837426 kubelet[3244]: I0113 20:09:05.835259 3244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:09:05.837665 kubelet[3244]: I0113 20:09:05.837624 3244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:09:05.837736 kubelet[3244]: I0113 20:09:05.837680 3244 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:09:05.837736 kubelet[3244]: I0113 20:09:05.837719 3244 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:09:05.837850 kubelet[3244]: E0113 20:09:05.837808 3244 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:09:05.856444 kubelet[3244]: I0113 20:09:05.855054 3244 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:09:05.858731 kubelet[3244]: I0113 20:09:05.858691 3244 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:09:05.877375 kubelet[3244]: I0113 20:09:05.877324 3244 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:09:05.939726 kubelet[3244]: E0113 20:09:05.938430 3244 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:09:05.942573 kubelet[3244]: E0113 20:09:05.940488 3244 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Jan 13 20:09:05.970258 kubelet[3244]: I0113 20:09:05.970208 3244 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-29" Jan 13 20:09:06.002226 kubelet[3244]: I0113 20:09:06.000648 3244 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-22-29" Jan 13 20:09:06.002226 kubelet[3244]: I0113 20:09:06.000761 3244 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-22-29" Jan 13 20:09:06.076871 kubelet[3244]: I0113 20:09:06.076738 3244 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:09:06.076871 kubelet[3244]: I0113 20:09:06.076776 3244 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:09:06.076871 kubelet[3244]: I0113 20:09:06.076809 3244 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:09:06.077134 kubelet[3244]: I0113 20:09:06.077069 3244 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:09:06.077134 kubelet[3244]: I0113 20:09:06.077109 3244 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:09:06.077134 kubelet[3244]: I0113 20:09:06.077126 3244 policy_none.go:49] "None policy: Start" Jan 13 20:09:06.078507 kubelet[3244]: I0113 20:09:06.078459 3244 memory_manager.go:170] "Starting memorymanager" 
policy="None" Jan 13 20:09:06.078950 kubelet[3244]: I0113 20:09:06.078515 3244 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:09:06.078950 kubelet[3244]: I0113 20:09:06.078742 3244 state_mem.go:75] "Updated machine memory state" Jan 13 20:09:06.087950 kubelet[3244]: I0113 20:09:06.087902 3244 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:09:06.089638 kubelet[3244]: I0113 20:09:06.089595 3244 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:09:06.138606 kubelet[3244]: I0113 20:09:06.138554 3244 topology_manager.go:215] "Topology Admit Handler" podUID="3d6ca76261fbb72f03c8846f20633886" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-29" Jan 13 20:09:06.138755 kubelet[3244]: I0113 20:09:06.138671 3244 topology_manager.go:215] "Topology Admit Handler" podUID="1ae021de4163d25e49f328188d77f62c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-29" Jan 13 20:09:06.138810 kubelet[3244]: I0113 20:09:06.138771 3244 topology_manager.go:215] "Topology Admit Handler" podUID="ad6e4ebcf97d9bc668879a19c08787dd" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-29" Jan 13 20:09:06.154992 kubelet[3244]: E0113 20:09:06.154422 3244 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-22-29\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-29" Jan 13 20:09:06.238405 kubelet[3244]: I0113 20:09:06.237990 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ae021de4163d25e49f328188d77f62c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-29\" (UID: \"1ae021de4163d25e49f328188d77f62c\") " pod="kube-system/kube-apiserver-ip-172-31-22-29" Jan 13 20:09:06.238405 kubelet[3244]: I0113 20:09:06.238067 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ae021de4163d25e49f328188d77f62c-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-29\" (UID: \"1ae021de4163d25e49f328188d77f62c\") " pod="kube-system/kube-apiserver-ip-172-31-22-29" Jan 13 20:09:06.238405 kubelet[3244]: I0113 20:09:06.238115 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ae021de4163d25e49f328188d77f62c-ca-certs\") pod \"kube-apiserver-ip-172-31-22-29\" (UID: \"1ae021de4163d25e49f328188d77f62c\") " pod="kube-system/kube-apiserver-ip-172-31-22-29" Jan 13 20:09:06.238405 kubelet[3244]: I0113 20:09:06.238162 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad6e4ebcf97d9bc668879a19c08787dd-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-29\" (UID: \"ad6e4ebcf97d9bc668879a19c08787dd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-29" Jan 13 20:09:06.238405 kubelet[3244]: I0113 20:09:06.238224 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad6e4ebcf97d9bc668879a19c08787dd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-29\" (UID: \"ad6e4ebcf97d9bc668879a19c08787dd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-29" Jan 13 20:09:06.240512 kubelet[3244]: 
I0113 20:09:06.238268 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad6e4ebcf97d9bc668879a19c08787dd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-29\" (UID: \"ad6e4ebcf97d9bc668879a19c08787dd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-29" Jan 13 20:09:06.240512 kubelet[3244]: I0113 20:09:06.238311 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad6e4ebcf97d9bc668879a19c08787dd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-29\" (UID: \"ad6e4ebcf97d9bc668879a19c08787dd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-29" Jan 13 20:09:06.240512 kubelet[3244]: I0113 20:09:06.238720 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad6e4ebcf97d9bc668879a19c08787dd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-29\" (UID: \"ad6e4ebcf97d9bc668879a19c08787dd\") " pod="kube-system/kube-controller-manager-ip-172-31-22-29" Jan 13 20:09:06.240512 kubelet[3244]: I0113 20:09:06.238893 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d6ca76261fbb72f03c8846f20633886-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-29\" (UID: \"3d6ca76261fbb72f03c8846f20633886\") " pod="kube-system/kube-scheduler-ip-172-31-22-29" Jan 13 20:09:06.721917 sudo[3258]: pam_unix(sudo:session): session closed for user root Jan 13 20:09:06.823392 kubelet[3244]: I0113 20:09:06.821944 3244 apiserver.go:52] "Watching apiserver" Jan 13 20:09:06.835185 kubelet[3244]: I0113 20:09:06.833577 3244 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:09:06.889665 kubelet[3244]: I0113 20:09:06.889586 3244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-29" podStartSLOduration=0.889476061 podStartE2EDuration="889.476061ms" podCreationTimestamp="2025-01-13 20:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:06.885831589 +0000 UTC m=+1.221610927" watchObservedRunningTime="2025-01-13 20:09:06.889476061 +0000 UTC m=+1.225255387" Jan 13 20:09:06.918651 kubelet[3244]: I0113 20:09:06.918587 3244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-29" podStartSLOduration=0.918459757 podStartE2EDuration="918.459757ms" podCreationTimestamp="2025-01-13 20:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:06.915864625 +0000 UTC m=+1.251643975" watchObservedRunningTime="2025-01-13 20:09:06.918459757 +0000 UTC m=+1.254239095" Jan 13 20:09:06.919885 kubelet[3244]: I0113 20:09:06.919017 3244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-29" podStartSLOduration=2.918951529 podStartE2EDuration="2.918951529s" podCreationTimestamp="2025-01-13 20:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 
20:09:06.900290353 +0000 UTC m=+1.236069703" watchObservedRunningTime="2025-01-13 20:09:06.918951529 +0000 UTC m=+1.254730867" Jan 13 20:09:09.804925 sudo[2255]: pam_unix(sudo:session): session closed for user root Jan 13 20:09:09.827478 sshd[2254]: Connection closed by 147.75.109.163 port 56740 Jan 13 20:09:09.828307 sshd-session[2252]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:09.835690 systemd[1]: sshd@6-172.31.22.29:22-147.75.109.163:56740.service: Deactivated successfully. Jan 13 20:09:09.840229 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:09:09.841169 systemd[1]: session-7.scope: Consumed 10.257s CPU time, 187.8M memory peak, 0B memory swap peak. Jan 13 20:09:09.842725 systemd-logind[1913]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:09:09.846296 systemd-logind[1913]: Removed session 7. Jan 13 20:09:11.414431 update_engine[1916]: I20250113 20:09:11.413470 1916 update_attempter.cc:509] Updating boot flags... Jan 13 20:09:11.493447 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3325) Jan 13 20:09:11.767609 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3329) Jan 13 20:09:12.068402 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3329) Jan 13 20:09:18.221017 kubelet[3244]: I0113 20:09:18.220968 3244 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:09:18.221812 containerd[1931]: time="2025-01-13T20:09:18.221713485Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:09:18.222891 kubelet[3244]: I0113 20:09:18.222066 3244 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:09:19.178778 kubelet[3244]: I0113 20:09:19.178701 3244 topology_manager.go:215] "Topology Admit Handler" podUID="0b2fbb61-3210-41ce-967f-937852e7da95" podNamespace="kube-system" podName="kube-proxy-9f7f5" Jan 13 20:09:19.197896 systemd[1]: Created slice kubepods-besteffort-pod0b2fbb61_3210_41ce_967f_937852e7da95.slice - libcontainer container kubepods-besteffort-pod0b2fbb61_3210_41ce_967f_937852e7da95.slice. 
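The "Updating runtime config through cri with podcidr" entry, together with containerd's reply that no CNI config template is specified, corresponds to a CRI UpdateRuntimeConfig call carrying the node's pod CIDR. A sketch of that call, reusing a runtime client built the same way as in the earlier sandbox-listing example (the package and function names here are illustrative):

    package noderuntime

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // updatePodCIDR pushes the pod CIDR the kubelet received for this node down
    // to the CRI runtime; rt is a RuntimeServiceClient dialed against the
    // containerd socket as in the previous sketch.
    func updatePodCIDR(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
        _, err := rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
            RuntimeConfig: &runtimeapi.RuntimeConfig{
                NetworkConfig: &runtimeapi.NetworkConfig{
                    PodCidr: "192.168.0.0/24", // value reported by the kubelet in the log
                },
            },
        })
        return err
    }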
Jan 13 20:09:19.201408 kubelet[3244]: I0113 20:09:19.201114 3244 topology_manager.go:215] "Topology Admit Handler" podUID="16e9fb72-3c31-4856-8a9b-f6a97a009515" podNamespace="kube-system" podName="cilium-6f8ml" Jan 13 20:09:19.215295 kubelet[3244]: I0113 20:09:19.215235 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-cni-path\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.215471 kubelet[3244]: I0113 20:09:19.215311 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-etc-cni-netd\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.215471 kubelet[3244]: I0113 20:09:19.215419 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-host-proc-sys-net\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.215471 kubelet[3244]: I0113 20:09:19.215468 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16e9fb72-3c31-4856-8a9b-f6a97a009515-cilium-config-path\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.215658 kubelet[3244]: I0113 20:09:19.215518 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b2fbb61-3210-41ce-967f-937852e7da95-lib-modules\") pod \"kube-proxy-9f7f5\" (UID: \"0b2fbb61-3210-41ce-967f-937852e7da95\") " pod="kube-system/kube-proxy-9f7f5" Jan 13 20:09:19.215658 kubelet[3244]: I0113 20:09:19.215561 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-cilium-cgroup\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.215658 kubelet[3244]: I0113 20:09:19.215606 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16e9fb72-3c31-4856-8a9b-f6a97a009515-hubble-tls\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.215658 kubelet[3244]: I0113 20:09:19.215651 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b2fbb61-3210-41ce-967f-937852e7da95-kube-proxy\") pod \"kube-proxy-9f7f5\" (UID: \"0b2fbb61-3210-41ce-967f-937852e7da95\") " pod="kube-system/kube-proxy-9f7f5" Jan 13 20:09:19.215870 kubelet[3244]: I0113 20:09:19.215693 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b2fbb61-3210-41ce-967f-937852e7da95-xtables-lock\") pod \"kube-proxy-9f7f5\" (UID: \"0b2fbb61-3210-41ce-967f-937852e7da95\") " 
pod="kube-system/kube-proxy-9f7f5" Jan 13 20:09:19.215870 kubelet[3244]: I0113 20:09:19.215737 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-xtables-lock\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.215870 kubelet[3244]: I0113 20:09:19.215782 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-host-proc-sys-kernel\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.215870 kubelet[3244]: I0113 20:09:19.215831 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16e9fb72-3c31-4856-8a9b-f6a97a009515-clustermesh-secrets\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.216082 kubelet[3244]: I0113 20:09:19.215875 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnvzg\" (UniqueName: \"kubernetes.io/projected/16e9fb72-3c31-4856-8a9b-f6a97a009515-kube-api-access-pnvzg\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.216082 kubelet[3244]: I0113 20:09:19.215920 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-cilium-run\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.216082 kubelet[3244]: I0113 20:09:19.215965 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-bpf-maps\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.218653 kubelet[3244]: I0113 20:09:19.218308 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-hostproc\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.218653 kubelet[3244]: I0113 20:09:19.218417 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-lib-modules\") pod \"cilium-6f8ml\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " pod="kube-system/cilium-6f8ml" Jan 13 20:09:19.218653 kubelet[3244]: I0113 20:09:19.218474 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fxcp\" (UniqueName: \"kubernetes.io/projected/0b2fbb61-3210-41ce-967f-937852e7da95-kube-api-access-5fxcp\") pod \"kube-proxy-9f7f5\" (UID: \"0b2fbb61-3210-41ce-967f-937852e7da95\") " pod="kube-system/kube-proxy-9f7f5" Jan 13 20:09:19.226658 systemd[1]: Created slice kubepods-burstable-pod16e9fb72_3c31_4856_8a9b_f6a97a009515.slice - libcontainer 
container kubepods-burstable-pod16e9fb72_3c31_4856_8a9b_f6a97a009515.slice. Jan 13 20:09:19.361190 kubelet[3244]: I0113 20:09:19.356566 3244 topology_manager.go:215] "Topology Admit Handler" podUID="b84dd74e-0d19-4431-8dd2-34e56913efdb" podNamespace="kube-system" podName="cilium-operator-5cc964979-5r7b6" Jan 13 20:09:19.378016 systemd[1]: Created slice kubepods-besteffort-podb84dd74e_0d19_4431_8dd2_34e56913efdb.slice - libcontainer container kubepods-besteffort-podb84dd74e_0d19_4431_8dd2_34e56913efdb.slice. Jan 13 20:09:19.419693 kubelet[3244]: I0113 20:09:19.419501 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wft6j\" (UniqueName: \"kubernetes.io/projected/b84dd74e-0d19-4431-8dd2-34e56913efdb-kube-api-access-wft6j\") pod \"cilium-operator-5cc964979-5r7b6\" (UID: \"b84dd74e-0d19-4431-8dd2-34e56913efdb\") " pod="kube-system/cilium-operator-5cc964979-5r7b6" Jan 13 20:09:19.419693 kubelet[3244]: I0113 20:09:19.419594 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b84dd74e-0d19-4431-8dd2-34e56913efdb-cilium-config-path\") pod \"cilium-operator-5cc964979-5r7b6\" (UID: \"b84dd74e-0d19-4431-8dd2-34e56913efdb\") " pod="kube-system/cilium-operator-5cc964979-5r7b6" Jan 13 20:09:19.513242 containerd[1931]: time="2025-01-13T20:09:19.513086016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9f7f5,Uid:0b2fbb61-3210-41ce-967f-937852e7da95,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:19.531657 containerd[1931]: time="2025-01-13T20:09:19.531605400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6f8ml,Uid:16e9fb72-3c31-4856-8a9b-f6a97a009515,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:19.633408 containerd[1931]: time="2025-01-13T20:09:19.631297596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:19.633408 containerd[1931]: time="2025-01-13T20:09:19.631418544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:19.633408 containerd[1931]: time="2025-01-13T20:09:19.631456512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:19.634508 containerd[1931]: time="2025-01-13T20:09:19.633516492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:19.639428 containerd[1931]: time="2025-01-13T20:09:19.638621724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:19.639428 containerd[1931]: time="2025-01-13T20:09:19.638716440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:19.639428 containerd[1931]: time="2025-01-13T20:09:19.638753676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:19.640747 containerd[1931]: time="2025-01-13T20:09:19.638905308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:19.672695 systemd[1]: Started cri-containerd-616529c82261ac919bed47fe8f0f4a80733e2654762a7fd8c8472bc33e547a39.scope - libcontainer container 616529c82261ac919bed47fe8f0f4a80733e2654762a7fd8c8472bc33e547a39. Jan 13 20:09:19.686690 systemd[1]: Started cri-containerd-91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d.scope - libcontainer container 91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d. Jan 13 20:09:19.689130 containerd[1931]: time="2025-01-13T20:09:19.688611288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-5r7b6,Uid:b84dd74e-0d19-4431-8dd2-34e56913efdb,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:19.751721 containerd[1931]: time="2025-01-13T20:09:19.751636357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9f7f5,Uid:0b2fbb61-3210-41ce-967f-937852e7da95,Namespace:kube-system,Attempt:0,} returns sandbox id \"616529c82261ac919bed47fe8f0f4a80733e2654762a7fd8c8472bc33e547a39\"" Jan 13 20:09:19.764669 containerd[1931]: time="2025-01-13T20:09:19.764385865Z" level=info msg="CreateContainer within sandbox \"616529c82261ac919bed47fe8f0f4a80733e2654762a7fd8c8472bc33e547a39\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:09:19.775398 containerd[1931]: time="2025-01-13T20:09:19.775089781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6f8ml,Uid:16e9fb72-3c31-4856-8a9b-f6a97a009515,Namespace:kube-system,Attempt:0,} returns sandbox id \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\"" Jan 13 20:09:19.778011 containerd[1931]: time="2025-01-13T20:09:19.776843965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:19.778011 containerd[1931]: time="2025-01-13T20:09:19.776945737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:19.778011 containerd[1931]: time="2025-01-13T20:09:19.776983681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:19.778011 containerd[1931]: time="2025-01-13T20:09:19.777164137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:19.784882 containerd[1931]: time="2025-01-13T20:09:19.784558453Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:09:19.812685 containerd[1931]: time="2025-01-13T20:09:19.811397065Z" level=info msg="CreateContainer within sandbox \"616529c82261ac919bed47fe8f0f4a80733e2654762a7fd8c8472bc33e547a39\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"225c4e83c65a71c5e81d371043bfcbfe6bd364a48c16e7d6f8136e09b9c6a696\"" Jan 13 20:09:19.812685 containerd[1931]: time="2025-01-13T20:09:19.812430373Z" level=info msg="StartContainer for \"225c4e83c65a71c5e81d371043bfcbfe6bd364a48c16e7d6f8136e09b9c6a696\"" Jan 13 20:09:19.819783 systemd[1]: Started cri-containerd-5e327f96c4d68d2b91ac92f49aeafc92208541fb69491ef3a78efd0707ab8c5b.scope - libcontainer container 5e327f96c4d68d2b91ac92f49aeafc92208541fb69491ef3a78efd0707ab8c5b. 
Jan 13 20:09:19.880643 systemd[1]: Started cri-containerd-225c4e83c65a71c5e81d371043bfcbfe6bd364a48c16e7d6f8136e09b9c6a696.scope - libcontainer container 225c4e83c65a71c5e81d371043bfcbfe6bd364a48c16e7d6f8136e09b9c6a696. Jan 13 20:09:19.960152 containerd[1931]: time="2025-01-13T20:09:19.959987066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-5r7b6,Uid:b84dd74e-0d19-4431-8dd2-34e56913efdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e327f96c4d68d2b91ac92f49aeafc92208541fb69491ef3a78efd0707ab8c5b\"" Jan 13 20:09:19.983497 containerd[1931]: time="2025-01-13T20:09:19.983137958Z" level=info msg="StartContainer for \"225c4e83c65a71c5e81d371043bfcbfe6bd364a48c16e7d6f8136e09b9c6a696\" returns successfully" Jan 13 20:09:25.367550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount335555709.mount: Deactivated successfully. Jan 13 20:09:27.845790 containerd[1931]: time="2025-01-13T20:09:27.845735721Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:27.847813 containerd[1931]: time="2025-01-13T20:09:27.847714953Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651602" Jan 13 20:09:27.848701 containerd[1931]: time="2025-01-13T20:09:27.848241081Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:27.851979 containerd[1931]: time="2025-01-13T20:09:27.851649333Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.06703112s" Jan 13 20:09:27.851979 containerd[1931]: time="2025-01-13T20:09:27.851711613Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:09:27.853308 containerd[1931]: time="2025-01-13T20:09:27.852505737Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:09:27.858864 containerd[1931]: time="2025-01-13T20:09:27.858793017Z" level=info msg="CreateContainer within sandbox \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:09:27.880798 containerd[1931]: time="2025-01-13T20:09:27.880719249Z" level=info msg="CreateContainer within sandbox \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b\"" Jan 13 20:09:27.882419 containerd[1931]: time="2025-01-13T20:09:27.881640357Z" level=info msg="StartContainer for \"641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b\"" Jan 13 20:09:27.938681 systemd[1]: Started 
cri-containerd-641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b.scope - libcontainer container 641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b. Jan 13 20:09:27.990389 containerd[1931]: time="2025-01-13T20:09:27.990297826Z" level=info msg="StartContainer for \"641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b\" returns successfully" Jan 13 20:09:28.012500 systemd[1]: cri-containerd-641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b.scope: Deactivated successfully. Jan 13 20:09:28.129526 kubelet[3244]: I0113 20:09:28.128343 3244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9f7f5" podStartSLOduration=9.128286306 podStartE2EDuration="9.128286306s" podCreationTimestamp="2025-01-13 20:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:20.077123698 +0000 UTC m=+14.412903264" watchObservedRunningTime="2025-01-13 20:09:28.128286306 +0000 UTC m=+22.464065644" Jan 13 20:09:28.872210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b-rootfs.mount: Deactivated successfully. Jan 13 20:09:29.291644 containerd[1931]: time="2025-01-13T20:09:29.291482288Z" level=info msg="shim disconnected" id=641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b namespace=k8s.io Jan 13 20:09:29.291644 containerd[1931]: time="2025-01-13T20:09:29.291620120Z" level=warning msg="cleaning up after shim disconnected" id=641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b namespace=k8s.io Jan 13 20:09:29.291644 containerd[1931]: time="2025-01-13T20:09:29.291644336Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:30.113756 containerd[1931]: time="2025-01-13T20:09:30.113141036Z" level=info msg="CreateContainer within sandbox \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:09:30.143252 containerd[1931]: time="2025-01-13T20:09:30.143095436Z" level=info msg="CreateContainer within sandbox \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795\"" Jan 13 20:09:30.144515 containerd[1931]: time="2025-01-13T20:09:30.144181952Z" level=info msg="StartContainer for \"09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795\"" Jan 13 20:09:30.208859 systemd[1]: Started cri-containerd-09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795.scope - libcontainer container 09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795. Jan 13 20:09:30.254443 containerd[1931]: time="2025-01-13T20:09:30.254294529Z" level=info msg="StartContainer for \"09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795\" returns successfully" Jan 13 20:09:30.275047 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:09:30.275624 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:09:30.275746 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:09:30.284923 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 13 20:09:30.285691 systemd[1]: cri-containerd-09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795.scope: Deactivated successfully. Jan 13 20:09:30.334615 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:09:30.340887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795-rootfs.mount: Deactivated successfully. Jan 13 20:09:30.348189 containerd[1931]: time="2025-01-13T20:09:30.348047901Z" level=info msg="shim disconnected" id=09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795 namespace=k8s.io Jan 13 20:09:30.348189 containerd[1931]: time="2025-01-13T20:09:30.348120489Z" level=warning msg="cleaning up after shim disconnected" id=09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795 namespace=k8s.io Jan 13 20:09:30.348189 containerd[1931]: time="2025-01-13T20:09:30.348140313Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:31.124129 containerd[1931]: time="2025-01-13T20:09:31.124054989Z" level=info msg="CreateContainer within sandbox \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:09:31.157004 containerd[1931]: time="2025-01-13T20:09:31.156867393Z" level=info msg="CreateContainer within sandbox \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8\"" Jan 13 20:09:31.158699 containerd[1931]: time="2025-01-13T20:09:31.158632245Z" level=info msg="StartContainer for \"3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8\"" Jan 13 20:09:31.216661 systemd[1]: Started cri-containerd-3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8.scope - libcontainer container 3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8. Jan 13 20:09:31.273221 containerd[1931]: time="2025-01-13T20:09:31.273143566Z" level=info msg="StartContainer for \"3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8\" returns successfully" Jan 13 20:09:31.277397 systemd[1]: cri-containerd-3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8.scope: Deactivated successfully. Jan 13 20:09:31.314943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8-rootfs.mount: Deactivated successfully. 
Jan 13 20:09:31.321242 containerd[1931]: time="2025-01-13T20:09:31.321152806Z" level=info msg="shim disconnected" id=3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8 namespace=k8s.io Jan 13 20:09:31.321242 containerd[1931]: time="2025-01-13T20:09:31.321227938Z" level=warning msg="cleaning up after shim disconnected" id=3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8 namespace=k8s.io Jan 13 20:09:31.321657 containerd[1931]: time="2025-01-13T20:09:31.321249058Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:32.137994 containerd[1931]: time="2025-01-13T20:09:32.137719186Z" level=info msg="CreateContainer within sandbox \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:09:32.173889 containerd[1931]: time="2025-01-13T20:09:32.173809810Z" level=info msg="CreateContainer within sandbox \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e\"" Jan 13 20:09:32.175289 containerd[1931]: time="2025-01-13T20:09:32.175222762Z" level=info msg="StartContainer for \"4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e\"" Jan 13 20:09:32.223415 systemd[1]: run-containerd-runc-k8s.io-4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e-runc.OpxFzG.mount: Deactivated successfully. Jan 13 20:09:32.236672 systemd[1]: Started cri-containerd-4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e.scope - libcontainer container 4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e. Jan 13 20:09:32.283778 systemd[1]: cri-containerd-4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e.scope: Deactivated successfully. Jan 13 20:09:32.287273 containerd[1931]: time="2025-01-13T20:09:32.286317431Z" level=info msg="StartContainer for \"4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e\" returns successfully" Jan 13 20:09:32.324265 containerd[1931]: time="2025-01-13T20:09:32.323906279Z" level=info msg="shim disconnected" id=4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e namespace=k8s.io Jan 13 20:09:32.324265 containerd[1931]: time="2025-01-13T20:09:32.324008111Z" level=warning msg="cleaning up after shim disconnected" id=4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e namespace=k8s.io Jan 13 20:09:32.324265 containerd[1931]: time="2025-01-13T20:09:32.324028595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:33.134804 containerd[1931]: time="2025-01-13T20:09:33.134091707Z" level=info msg="CreateContainer within sandbox \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:09:33.164984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e-rootfs.mount: Deactivated successfully. 
Jan 13 20:09:33.165221 containerd[1931]: time="2025-01-13T20:09:33.165104423Z" level=info msg="CreateContainer within sandbox \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\"" Jan 13 20:09:33.167954 containerd[1931]: time="2025-01-13T20:09:33.167598287Z" level=info msg="StartContainer for \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\"" Jan 13 20:09:33.225680 systemd[1]: Started cri-containerd-a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9.scope - libcontainer container a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9. Jan 13 20:09:33.296919 containerd[1931]: time="2025-01-13T20:09:33.295930056Z" level=info msg="StartContainer for \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\" returns successfully" Jan 13 20:09:33.350182 systemd[1]: run-containerd-runc-k8s.io-a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9-runc.qgcyYB.mount: Deactivated successfully. Jan 13 20:09:33.530298 kubelet[3244]: I0113 20:09:33.530089 3244 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:09:33.576513 kubelet[3244]: I0113 20:09:33.576460 3244 topology_manager.go:215] "Topology Admit Handler" podUID="98ccbad4-e1fa-46fb-85af-254698d90ab8" podNamespace="kube-system" podName="coredns-76f75df574-h7whz" Jan 13 20:09:33.582704 kubelet[3244]: I0113 20:09:33.582645 3244 topology_manager.go:215] "Topology Admit Handler" podUID="a8a10ead-033a-49b9-a982-6400b8050037" podNamespace="kube-system" podName="coredns-76f75df574-zh6gf" Jan 13 20:09:33.596581 systemd[1]: Created slice kubepods-burstable-pod98ccbad4_e1fa_46fb_85af_254698d90ab8.slice - libcontainer container kubepods-burstable-pod98ccbad4_e1fa_46fb_85af_254698d90ab8.slice. Jan 13 20:09:33.620973 systemd[1]: Created slice kubepods-burstable-poda8a10ead_033a_49b9_a982_6400b8050037.slice - libcontainer container kubepods-burstable-poda8a10ead_033a_49b9_a982_6400b8050037.slice. 
Jan 13 20:09:33.636954 kubelet[3244]: I0113 20:09:33.636833 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vjh\" (UniqueName: \"kubernetes.io/projected/a8a10ead-033a-49b9-a982-6400b8050037-kube-api-access-l8vjh\") pod \"coredns-76f75df574-zh6gf\" (UID: \"a8a10ead-033a-49b9-a982-6400b8050037\") " pod="kube-system/coredns-76f75df574-zh6gf" Jan 13 20:09:33.636954 kubelet[3244]: I0113 20:09:33.636918 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v48ms\" (UniqueName: \"kubernetes.io/projected/98ccbad4-e1fa-46fb-85af-254698d90ab8-kube-api-access-v48ms\") pod \"coredns-76f75df574-h7whz\" (UID: \"98ccbad4-e1fa-46fb-85af-254698d90ab8\") " pod="kube-system/coredns-76f75df574-h7whz" Jan 13 20:09:33.637317 kubelet[3244]: I0113 20:09:33.636964 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8a10ead-033a-49b9-a982-6400b8050037-config-volume\") pod \"coredns-76f75df574-zh6gf\" (UID: \"a8a10ead-033a-49b9-a982-6400b8050037\") " pod="kube-system/coredns-76f75df574-zh6gf" Jan 13 20:09:33.637317 kubelet[3244]: I0113 20:09:33.637018 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98ccbad4-e1fa-46fb-85af-254698d90ab8-config-volume\") pod \"coredns-76f75df574-h7whz\" (UID: \"98ccbad4-e1fa-46fb-85af-254698d90ab8\") " pod="kube-system/coredns-76f75df574-h7whz" Jan 13 20:09:33.911339 containerd[1931]: time="2025-01-13T20:09:33.911197167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h7whz,Uid:98ccbad4-e1fa-46fb-85af-254698d90ab8,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:33.931146 containerd[1931]: time="2025-01-13T20:09:33.930477615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zh6gf,Uid:a8a10ead-033a-49b9-a982-6400b8050037,Namespace:kube-system,Attempt:0,}" Jan 13 20:09:34.183673 kubelet[3244]: I0113 20:09:34.183396 3244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6f8ml" podStartSLOduration=7.109324816 podStartE2EDuration="15.183308712s" podCreationTimestamp="2025-01-13 20:09:19 +0000 UTC" firstStartedPulling="2025-01-13 20:09:19.778187029 +0000 UTC m=+14.113966343" lastFinishedPulling="2025-01-13 20:09:27.852170901 +0000 UTC m=+22.187950239" observedRunningTime="2025-01-13 20:09:34.177942432 +0000 UTC m=+28.513721770" watchObservedRunningTime="2025-01-13 20:09:34.183308712 +0000 UTC m=+28.519088038" Jan 13 20:09:36.289498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3799758314.mount: Deactivated successfully. 
Jan 13 20:09:36.948978 containerd[1931]: time="2025-01-13T20:09:36.948820062Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:36.950466 containerd[1931]: time="2025-01-13T20:09:36.950376522Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138310" Jan 13 20:09:36.951817 containerd[1931]: time="2025-01-13T20:09:36.951745434Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:09:36.955330 containerd[1931]: time="2025-01-13T20:09:36.954927258Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 9.102336885s" Jan 13 20:09:36.955330 containerd[1931]: time="2025-01-13T20:09:36.954990462Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 20:09:36.959848 containerd[1931]: time="2025-01-13T20:09:36.959647974Z" level=info msg="CreateContainer within sandbox \"5e327f96c4d68d2b91ac92f49aeafc92208541fb69491ef3a78efd0707ab8c5b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:09:36.989512 containerd[1931]: time="2025-01-13T20:09:36.989438502Z" level=info msg="CreateContainer within sandbox \"5e327f96c4d68d2b91ac92f49aeafc92208541fb69491ef3a78efd0707ab8c5b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\"" Jan 13 20:09:36.990599 containerd[1931]: time="2025-01-13T20:09:36.990494922Z" level=info msg="StartContainer for \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\"" Jan 13 20:09:37.037666 systemd[1]: Started cri-containerd-55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7.scope - libcontainer container 55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7. Jan 13 20:09:37.086145 containerd[1931]: time="2025-01-13T20:09:37.085922151Z" level=info msg="StartContainer for \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\" returns successfully" Jan 13 20:09:41.139457 systemd-networkd[1849]: cilium_host: Link UP Jan 13 20:09:41.139773 systemd-networkd[1849]: cilium_net: Link UP Jan 13 20:09:41.140074 systemd-networkd[1849]: cilium_net: Gained carrier Jan 13 20:09:41.142892 systemd-networkd[1849]: cilium_host: Gained carrier Jan 13 20:09:41.150160 (udev-worker)[4330]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:09:41.154525 (udev-worker)[4331]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:09:41.331444 systemd-networkd[1849]: cilium_vxlan: Link UP Jan 13 20:09:41.331467 systemd-networkd[1849]: cilium_vxlan: Gained carrier Jan 13 20:09:41.808413 kernel: NET: Registered PF_ALG protocol family Jan 13 20:09:41.812193 systemd-networkd[1849]: cilium_net: Gained IPv6LL Jan 13 20:09:41.812875 systemd-networkd[1849]: cilium_host: Gained IPv6LL Jan 13 20:09:43.092049 systemd-networkd[1849]: cilium_vxlan: Gained IPv6LL Jan 13 20:09:43.116570 systemd-networkd[1849]: lxc_health: Link UP Jan 13 20:09:43.118335 (udev-worker)[4342]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:09:43.127309 systemd-networkd[1849]: lxc_health: Gained carrier Jan 13 20:09:43.518836 systemd-networkd[1849]: lxc60cfec5e63b8: Link UP Jan 13 20:09:43.525416 kernel: eth0: renamed from tmp9539a Jan 13 20:09:43.533142 systemd-networkd[1849]: lxc60cfec5e63b8: Gained carrier Jan 13 20:09:43.557610 systemd-networkd[1849]: lxc77fa1dde3fb6: Link UP Jan 13 20:09:43.583284 kernel: eth0: renamed from tmp874da Jan 13 20:09:43.598636 systemd-networkd[1849]: lxc77fa1dde3fb6: Gained carrier Jan 13 20:09:43.603307 (udev-worker)[4328]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:09:43.611615 kubelet[3244]: I0113 20:09:43.611558 3244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-5r7b6" podStartSLOduration=7.619676491 podStartE2EDuration="24.611462723s" podCreationTimestamp="2025-01-13 20:09:19 +0000 UTC" firstStartedPulling="2025-01-13 20:09:19.96360989 +0000 UTC m=+14.299389216" lastFinishedPulling="2025-01-13 20:09:36.955396098 +0000 UTC m=+31.291175448" observedRunningTime="2025-01-13 20:09:37.166547379 +0000 UTC m=+31.502326729" watchObservedRunningTime="2025-01-13 20:09:43.611462723 +0000 UTC m=+37.947242373" Jan 13 20:09:44.435556 systemd-networkd[1849]: lxc_health: Gained IPv6LL Jan 13 20:09:44.755586 systemd-networkd[1849]: lxc60cfec5e63b8: Gained IPv6LL Jan 13 20:09:45.267717 systemd-networkd[1849]: lxc77fa1dde3fb6: Gained IPv6LL Jan 13 20:09:47.606767 ntpd[1908]: Listen normally on 7 cilium_host 192.168.0.80:123 Jan 13 20:09:47.606902 ntpd[1908]: Listen normally on 8 cilium_net [fe80::88e8:9bff:fe66:408d%4]:123 Jan 13 20:09:47.608028 ntpd[1908]: 13 Jan 20:09:47 ntpd[1908]: Listen normally on 7 cilium_host 192.168.0.80:123 Jan 13 20:09:47.608028 ntpd[1908]: 13 Jan 20:09:47 ntpd[1908]: Listen normally on 8 cilium_net [fe80::88e8:9bff:fe66:408d%4]:123 Jan 13 20:09:47.608028 ntpd[1908]: 13 Jan 20:09:47 ntpd[1908]: Listen normally on 9 cilium_host [fe80::866:d1ff:fe33:418f%5]:123 Jan 13 20:09:47.608028 ntpd[1908]: 13 Jan 20:09:47 ntpd[1908]: Listen normally on 10 cilium_vxlan [fe80::84de:5ff:fe3c:730e%6]:123 Jan 13 20:09:47.608028 ntpd[1908]: 13 Jan 20:09:47 ntpd[1908]: Listen normally on 11 lxc_health [fe80::244b:82ff:fe70:9548%8]:123 Jan 13 20:09:47.608028 ntpd[1908]: 13 Jan 20:09:47 ntpd[1908]: Listen normally on 12 lxc60cfec5e63b8 [fe80::54d7:75ff:fe25:db97%10]:123 Jan 13 20:09:47.608028 ntpd[1908]: 13 Jan 20:09:47 ntpd[1908]: Listen normally on 13 lxc77fa1dde3fb6 [fe80::44d1:1cff:fece:c8c6%12]:123 Jan 13 20:09:47.606996 ntpd[1908]: Listen normally on 9 cilium_host [fe80::866:d1ff:fe33:418f%5]:123 Jan 13 20:09:47.607079 ntpd[1908]: Listen normally on 10 cilium_vxlan [fe80::84de:5ff:fe3c:730e%6]:123 Jan 13 20:09:47.607155 ntpd[1908]: Listen normally on 11 lxc_health [fe80::244b:82ff:fe70:9548%8]:123 Jan 13 20:09:47.607333 ntpd[1908]: Listen normally on 12 lxc60cfec5e63b8 
[fe80::54d7:75ff:fe25:db97%10]:123 Jan 13 20:09:47.607438 ntpd[1908]: Listen normally on 13 lxc77fa1dde3fb6 [fe80::44d1:1cff:fece:c8c6%12]:123 Jan 13 20:09:51.922612 containerd[1931]: time="2025-01-13T20:09:51.920055368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:51.922612 containerd[1931]: time="2025-01-13T20:09:51.920174684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:51.922612 containerd[1931]: time="2025-01-13T20:09:51.920212448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:51.922612 containerd[1931]: time="2025-01-13T20:09:51.920498768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:51.981711 systemd[1]: Started cri-containerd-9539a13774cb859b86ef669de1515176a744d0fed614ec9131d5da65c2526938.scope - libcontainer container 9539a13774cb859b86ef669de1515176a744d0fed614ec9131d5da65c2526938. Jan 13 20:09:52.089027 containerd[1931]: time="2025-01-13T20:09:52.088485653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:09:52.089027 containerd[1931]: time="2025-01-13T20:09:52.088625393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:09:52.089027 containerd[1931]: time="2025-01-13T20:09:52.088663589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:52.089027 containerd[1931]: time="2025-01-13T20:09:52.088823237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:09:52.136819 systemd[1]: Started sshd@7-172.31.22.29:22-147.75.109.163:36976.service - OpenSSH per-connection server daemon (147.75.109.163:36976). Jan 13 20:09:52.144690 containerd[1931]: time="2025-01-13T20:09:52.144494706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h7whz,Uid:98ccbad4-e1fa-46fb-85af-254698d90ab8,Namespace:kube-system,Attempt:0,} returns sandbox id \"9539a13774cb859b86ef669de1515176a744d0fed614ec9131d5da65c2526938\"" Jan 13 20:09:52.160044 containerd[1931]: time="2025-01-13T20:09:52.159993954Z" level=info msg="CreateContainer within sandbox \"9539a13774cb859b86ef669de1515176a744d0fed614ec9131d5da65c2526938\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:09:52.196213 systemd[1]: run-containerd-runc-k8s.io-874da5b84dd607f6f2b97fd4bfc98eba7fed5bff2f018328c66a7d2e2cc3b437-runc.oqNW5L.mount: Deactivated successfully. Jan 13 20:09:52.222751 systemd[1]: Started cri-containerd-874da5b84dd607f6f2b97fd4bfc98eba7fed5bff2f018328c66a7d2e2cc3b437.scope - libcontainer container 874da5b84dd607f6f2b97fd4bfc98eba7fed5bff2f018328c66a7d2e2cc3b437. 
Jan 13 20:09:52.248452 containerd[1931]: time="2025-01-13T20:09:52.248378190Z" level=info msg="CreateContainer within sandbox \"9539a13774cb859b86ef669de1515176a744d0fed614ec9131d5da65c2526938\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eef8601966082af3a3357bc76ddba80637848caecfa9289e47684b5b67c8d712\"" Jan 13 20:09:52.250335 containerd[1931]: time="2025-01-13T20:09:52.250272330Z" level=info msg="StartContainer for \"eef8601966082af3a3357bc76ddba80637848caecfa9289e47684b5b67c8d712\"" Jan 13 20:09:52.345866 systemd[1]: Started cri-containerd-eef8601966082af3a3357bc76ddba80637848caecfa9289e47684b5b67c8d712.scope - libcontainer container eef8601966082af3a3357bc76ddba80637848caecfa9289e47684b5b67c8d712. Jan 13 20:09:52.383051 containerd[1931]: time="2025-01-13T20:09:52.382525063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zh6gf,Uid:a8a10ead-033a-49b9-a982-6400b8050037,Namespace:kube-system,Attempt:0,} returns sandbox id \"874da5b84dd607f6f2b97fd4bfc98eba7fed5bff2f018328c66a7d2e2cc3b437\"" Jan 13 20:09:52.396169 containerd[1931]: time="2025-01-13T20:09:52.395149975Z" level=info msg="CreateContainer within sandbox \"874da5b84dd607f6f2b97fd4bfc98eba7fed5bff2f018328c66a7d2e2cc3b437\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:09:52.409274 sshd[4772]: Accepted publickey for core from 147.75.109.163 port 36976 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:52.415878 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:52.435727 systemd-logind[1913]: New session 8 of user core. Jan 13 20:09:52.439721 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:09:52.450656 containerd[1931]: time="2025-01-13T20:09:52.448458415Z" level=info msg="CreateContainer within sandbox \"874da5b84dd607f6f2b97fd4bfc98eba7fed5bff2f018328c66a7d2e2cc3b437\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a2572841c9426bfc7739dcc3212c6b997550a5f29e97508d2ec81397d03a086e\"" Jan 13 20:09:52.454331 containerd[1931]: time="2025-01-13T20:09:52.452011327Z" level=info msg="StartContainer for \"a2572841c9426bfc7739dcc3212c6b997550a5f29e97508d2ec81397d03a086e\"" Jan 13 20:09:52.485852 containerd[1931]: time="2025-01-13T20:09:52.485606227Z" level=info msg="StartContainer for \"eef8601966082af3a3357bc76ddba80637848caecfa9289e47684b5b67c8d712\" returns successfully" Jan 13 20:09:52.543716 systemd[1]: Started cri-containerd-a2572841c9426bfc7739dcc3212c6b997550a5f29e97508d2ec81397d03a086e.scope - libcontainer container a2572841c9426bfc7739dcc3212c6b997550a5f29e97508d2ec81397d03a086e. Jan 13 20:09:52.650694 containerd[1931]: time="2025-01-13T20:09:52.650635964Z" level=info msg="StartContainer for \"a2572841c9426bfc7739dcc3212c6b997550a5f29e97508d2ec81397d03a086e\" returns successfully" Jan 13 20:09:52.806515 sshd[4820]: Connection closed by 147.75.109.163 port 36976 Jan 13 20:09:52.807916 sshd-session[4772]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:52.814528 systemd[1]: sshd@7-172.31.22.29:22-147.75.109.163:36976.service: Deactivated successfully. Jan 13 20:09:52.819458 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:09:52.820850 systemd-logind[1913]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:09:52.822883 systemd-logind[1913]: Removed session 8. 
Jan 13 20:09:53.279829 kubelet[3244]: I0113 20:09:53.279013 3244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-h7whz" podStartSLOduration=34.278955619 podStartE2EDuration="34.278955619s" podCreationTimestamp="2025-01-13 20:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:53.239082103 +0000 UTC m=+47.574861417" watchObservedRunningTime="2025-01-13 20:09:53.278955619 +0000 UTC m=+47.614734969" Jan 13 20:09:53.279829 kubelet[3244]: I0113 20:09:53.279157 3244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zh6gf" podStartSLOduration=34.279125539 podStartE2EDuration="34.279125539s" podCreationTimestamp="2025-01-13 20:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:53.275617279 +0000 UTC m=+47.611397025" watchObservedRunningTime="2025-01-13 20:09:53.279125539 +0000 UTC m=+47.614904889" Jan 13 20:09:57.848957 systemd[1]: Started sshd@8-172.31.22.29:22-147.75.109.163:42818.service - OpenSSH per-connection server daemon (147.75.109.163:42818). Jan 13 20:09:58.038059 sshd[4902]: Accepted publickey for core from 147.75.109.163 port 42818 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:09:58.040588 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:58.049244 systemd-logind[1913]: New session 9 of user core. Jan 13 20:09:58.056681 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:09:58.295608 sshd[4904]: Connection closed by 147.75.109.163 port 42818 Jan 13 20:09:58.296659 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:58.301450 systemd[1]: sshd@8-172.31.22.29:22-147.75.109.163:42818.service: Deactivated successfully. Jan 13 20:09:58.305062 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:09:58.309118 systemd-logind[1913]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:09:58.311385 systemd-logind[1913]: Removed session 9. Jan 13 20:10:03.336885 systemd[1]: Started sshd@9-172.31.22.29:22-147.75.109.163:42834.service - OpenSSH per-connection server daemon (147.75.109.163:42834). Jan 13 20:10:03.520120 sshd[4916]: Accepted publickey for core from 147.75.109.163 port 42834 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:03.522718 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:03.530162 systemd-logind[1913]: New session 10 of user core. Jan 13 20:10:03.537641 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:10:03.784953 sshd[4918]: Connection closed by 147.75.109.163 port 42834 Jan 13 20:10:03.785850 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:03.791345 systemd[1]: sshd@9-172.31.22.29:22-147.75.109.163:42834.service: Deactivated successfully. Jan 13 20:10:03.795261 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:10:03.799100 systemd-logind[1913]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:10:03.801594 systemd-logind[1913]: Removed session 10. Jan 13 20:10:08.822892 systemd[1]: Started sshd@10-172.31.22.29:22-147.75.109.163:35598.service - OpenSSH per-connection server daemon (147.75.109.163:35598). 
Jan 13 20:10:09.010330 sshd[4931]: Accepted publickey for core from 147.75.109.163 port 35598 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:09.012884 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:09.020203 systemd-logind[1913]: New session 11 of user core. Jan 13 20:10:09.027622 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:10:09.272682 sshd[4933]: Connection closed by 147.75.109.163 port 35598 Jan 13 20:10:09.273556 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:09.280299 systemd[1]: sshd@10-172.31.22.29:22-147.75.109.163:35598.service: Deactivated successfully. Jan 13 20:10:09.286151 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:10:09.288752 systemd-logind[1913]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:10:09.290606 systemd-logind[1913]: Removed session 11. Jan 13 20:10:14.316822 systemd[1]: Started sshd@11-172.31.22.29:22-147.75.109.163:35602.service - OpenSSH per-connection server daemon (147.75.109.163:35602). Jan 13 20:10:14.490937 sshd[4945]: Accepted publickey for core from 147.75.109.163 port 35602 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:14.493603 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:14.505783 systemd-logind[1913]: New session 12 of user core. Jan 13 20:10:14.516651 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:10:14.769437 sshd[4947]: Connection closed by 147.75.109.163 port 35602 Jan 13 20:10:14.770291 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:14.776841 systemd[1]: sshd@11-172.31.22.29:22-147.75.109.163:35602.service: Deactivated successfully. Jan 13 20:10:14.781407 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:10:14.783935 systemd-logind[1913]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:10:14.785966 systemd-logind[1913]: Removed session 12. Jan 13 20:10:14.808885 systemd[1]: Started sshd@12-172.31.22.29:22-147.75.109.163:35606.service - OpenSSH per-connection server daemon (147.75.109.163:35606). Jan 13 20:10:14.999802 sshd[4959]: Accepted publickey for core from 147.75.109.163 port 35606 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:15.002232 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:15.010502 systemd-logind[1913]: New session 13 of user core. Jan 13 20:10:15.016631 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:10:15.332445 sshd[4961]: Connection closed by 147.75.109.163 port 35606 Jan 13 20:10:15.333318 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:15.341217 systemd[1]: sshd@12-172.31.22.29:22-147.75.109.163:35606.service: Deactivated successfully. Jan 13 20:10:15.347751 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:10:15.354773 systemd-logind[1913]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:10:15.377913 systemd[1]: Started sshd@13-172.31.22.29:22-147.75.109.163:35616.service - OpenSSH per-connection server daemon (147.75.109.163:35616). Jan 13 20:10:15.381254 systemd-logind[1913]: Removed session 13. 
Jan 13 20:10:15.565157 sshd[4970]: Accepted publickey for core from 147.75.109.163 port 35616 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:15.567732 sshd-session[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:15.575306 systemd-logind[1913]: New session 14 of user core. Jan 13 20:10:15.582623 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:10:15.828560 sshd[4972]: Connection closed by 147.75.109.163 port 35616 Jan 13 20:10:15.829391 sshd-session[4970]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:15.842117 systemd[1]: sshd@13-172.31.22.29:22-147.75.109.163:35616.service: Deactivated successfully. Jan 13 20:10:15.850407 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:10:15.856340 systemd-logind[1913]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:10:15.858554 systemd-logind[1913]: Removed session 14. Jan 13 20:10:20.869882 systemd[1]: Started sshd@14-172.31.22.29:22-147.75.109.163:45930.service - OpenSSH per-connection server daemon (147.75.109.163:45930). Jan 13 20:10:21.050142 sshd[4986]: Accepted publickey for core from 147.75.109.163 port 45930 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:21.052735 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:21.059870 systemd-logind[1913]: New session 15 of user core. Jan 13 20:10:21.065640 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:10:21.310745 sshd[4988]: Connection closed by 147.75.109.163 port 45930 Jan 13 20:10:21.311647 sshd-session[4986]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:21.318151 systemd[1]: sshd@14-172.31.22.29:22-147.75.109.163:45930.service: Deactivated successfully. Jan 13 20:10:21.324609 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:10:21.326404 systemd-logind[1913]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:10:21.328202 systemd-logind[1913]: Removed session 15. Jan 13 20:10:26.352919 systemd[1]: Started sshd@15-172.31.22.29:22-147.75.109.163:45932.service - OpenSSH per-connection server daemon (147.75.109.163:45932). Jan 13 20:10:26.549966 sshd[5000]: Accepted publickey for core from 147.75.109.163 port 45932 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:26.552481 sshd-session[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:26.561712 systemd-logind[1913]: New session 16 of user core. Jan 13 20:10:26.567634 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:10:26.814416 sshd[5002]: Connection closed by 147.75.109.163 port 45932 Jan 13 20:10:26.815236 sshd-session[5000]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:26.820679 systemd[1]: sshd@15-172.31.22.29:22-147.75.109.163:45932.service: Deactivated successfully. Jan 13 20:10:26.824190 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:10:26.828436 systemd-logind[1913]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:10:26.831201 systemd-logind[1913]: Removed session 16. Jan 13 20:10:31.853861 systemd[1]: Started sshd@16-172.31.22.29:22-147.75.109.163:57640.service - OpenSSH per-connection server daemon (147.75.109.163:57640). 
Jan 13 20:10:32.039281 sshd[5013]: Accepted publickey for core from 147.75.109.163 port 57640 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:32.042313 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:32.051049 systemd-logind[1913]: New session 17 of user core. Jan 13 20:10:32.057650 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:10:32.301771 sshd[5015]: Connection closed by 147.75.109.163 port 57640 Jan 13 20:10:32.302615 sshd-session[5013]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:32.309102 systemd[1]: sshd@16-172.31.22.29:22-147.75.109.163:57640.service: Deactivated successfully. Jan 13 20:10:32.313662 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:10:32.315599 systemd-logind[1913]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:10:32.317194 systemd-logind[1913]: Removed session 17. Jan 13 20:10:37.344895 systemd[1]: Started sshd@17-172.31.22.29:22-147.75.109.163:37304.service - OpenSSH per-connection server daemon (147.75.109.163:37304). Jan 13 20:10:37.527347 sshd[5027]: Accepted publickey for core from 147.75.109.163 port 37304 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:37.529919 sshd-session[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:37.537789 systemd-logind[1913]: New session 18 of user core. Jan 13 20:10:37.545620 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:10:37.795620 sshd[5029]: Connection closed by 147.75.109.163 port 37304 Jan 13 20:10:37.795499 sshd-session[5027]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:37.800650 systemd-logind[1913]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:10:37.801470 systemd[1]: sshd@17-172.31.22.29:22-147.75.109.163:37304.service: Deactivated successfully. Jan 13 20:10:37.804899 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:10:37.809308 systemd-logind[1913]: Removed session 18. Jan 13 20:10:37.835860 systemd[1]: Started sshd@18-172.31.22.29:22-147.75.109.163:37306.service - OpenSSH per-connection server daemon (147.75.109.163:37306). Jan 13 20:10:38.032865 sshd[5040]: Accepted publickey for core from 147.75.109.163 port 37306 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:38.035397 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:38.043502 systemd-logind[1913]: New session 19 of user core. Jan 13 20:10:38.052617 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:10:38.344988 sshd[5042]: Connection closed by 147.75.109.163 port 37306 Jan 13 20:10:38.344071 sshd-session[5040]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:38.351191 systemd[1]: sshd@18-172.31.22.29:22-147.75.109.163:37306.service: Deactivated successfully. Jan 13 20:10:38.354591 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:10:38.356007 systemd-logind[1913]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:10:38.358778 systemd-logind[1913]: Removed session 19. Jan 13 20:10:38.380891 systemd[1]: Started sshd@19-172.31.22.29:22-147.75.109.163:37318.service - OpenSSH per-connection server daemon (147.75.109.163:37318). 
Jan 13 20:10:38.564007 sshd[5051]: Accepted publickey for core from 147.75.109.163 port 37318 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:38.566550 sshd-session[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:38.575342 systemd-logind[1913]: New session 20 of user core. Jan 13 20:10:38.582623 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:10:41.041725 sshd[5053]: Connection closed by 147.75.109.163 port 37318 Jan 13 20:10:41.045819 sshd-session[5051]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:41.054038 systemd[1]: sshd@19-172.31.22.29:22-147.75.109.163:37318.service: Deactivated successfully. Jan 13 20:10:41.063481 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:10:41.071077 systemd-logind[1913]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:10:41.095543 systemd[1]: Started sshd@20-172.31.22.29:22-147.75.109.163:37324.service - OpenSSH per-connection server daemon (147.75.109.163:37324). Jan 13 20:10:41.097264 systemd-logind[1913]: Removed session 20. Jan 13 20:10:41.281175 sshd[5069]: Accepted publickey for core from 147.75.109.163 port 37324 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:41.283646 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:41.291747 systemd-logind[1913]: New session 21 of user core. Jan 13 20:10:41.302661 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:10:41.783161 sshd[5071]: Connection closed by 147.75.109.163 port 37324 Jan 13 20:10:41.784032 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:41.790260 systemd-logind[1913]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:10:41.791821 systemd[1]: sshd@20-172.31.22.29:22-147.75.109.163:37324.service: Deactivated successfully. Jan 13 20:10:41.796904 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:10:41.799643 systemd-logind[1913]: Removed session 21. Jan 13 20:10:41.820988 systemd[1]: Started sshd@21-172.31.22.29:22-147.75.109.163:37340.service - OpenSSH per-connection server daemon (147.75.109.163:37340). Jan 13 20:10:42.014087 sshd[5080]: Accepted publickey for core from 147.75.109.163 port 37340 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:42.017196 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:42.024502 systemd-logind[1913]: New session 22 of user core. Jan 13 20:10:42.035605 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:10:42.275340 sshd[5082]: Connection closed by 147.75.109.163 port 37340 Jan 13 20:10:42.276195 sshd-session[5080]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:42.281976 systemd[1]: sshd@21-172.31.22.29:22-147.75.109.163:37340.service: Deactivated successfully. Jan 13 20:10:42.286292 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:10:42.288064 systemd-logind[1913]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:10:42.291216 systemd-logind[1913]: Removed session 22. Jan 13 20:10:47.315860 systemd[1]: Started sshd@22-172.31.22.29:22-147.75.109.163:37352.service - OpenSSH per-connection server daemon (147.75.109.163:37352). 
Jan 13 20:10:47.509487 sshd[5093]: Accepted publickey for core from 147.75.109.163 port 37352 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:47.511973 sshd-session[5093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:47.520600 systemd-logind[1913]: New session 23 of user core. Jan 13 20:10:47.528628 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:10:47.785977 sshd[5095]: Connection closed by 147.75.109.163 port 37352 Jan 13 20:10:47.786950 sshd-session[5093]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:47.793385 systemd[1]: sshd@22-172.31.22.29:22-147.75.109.163:37352.service: Deactivated successfully. Jan 13 20:10:47.797066 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:10:47.798878 systemd-logind[1913]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:10:47.800976 systemd-logind[1913]: Removed session 23. Jan 13 20:10:52.829943 systemd[1]: Started sshd@23-172.31.22.29:22-147.75.109.163:40154.service - OpenSSH per-connection server daemon (147.75.109.163:40154). Jan 13 20:10:53.021308 sshd[5112]: Accepted publickey for core from 147.75.109.163 port 40154 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:53.023935 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:53.031653 systemd-logind[1913]: New session 24 of user core. Jan 13 20:10:53.042631 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:10:53.277750 sshd[5114]: Connection closed by 147.75.109.163 port 40154 Jan 13 20:10:53.278689 sshd-session[5112]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:53.285248 systemd[1]: sshd@23-172.31.22.29:22-147.75.109.163:40154.service: Deactivated successfully. Jan 13 20:10:53.289193 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:10:53.291237 systemd-logind[1913]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:10:53.293307 systemd-logind[1913]: Removed session 24. Jan 13 20:10:58.316890 systemd[1]: Started sshd@24-172.31.22.29:22-147.75.109.163:56918.service - OpenSSH per-connection server daemon (147.75.109.163:56918). Jan 13 20:10:58.503414 sshd[5125]: Accepted publickey for core from 147.75.109.163 port 56918 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:10:58.506159 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:10:58.514472 systemd-logind[1913]: New session 25 of user core. Jan 13 20:10:58.519663 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 13 20:10:58.761371 sshd[5127]: Connection closed by 147.75.109.163 port 56918 Jan 13 20:10:58.762205 sshd-session[5125]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:58.768423 systemd[1]: sshd@24-172.31.22.29:22-147.75.109.163:56918.service: Deactivated successfully. Jan 13 20:10:58.774289 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 20:10:58.776771 systemd-logind[1913]: Session 25 logged out. Waiting for processes to exit. Jan 13 20:10:58.778767 systemd-logind[1913]: Removed session 25. Jan 13 20:11:03.799893 systemd[1]: Started sshd@25-172.31.22.29:22-147.75.109.163:56932.service - OpenSSH per-connection server daemon (147.75.109.163:56932). 
Jan 13 20:11:03.992749 sshd[5137]: Accepted publickey for core from 147.75.109.163 port 56932 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:03.995965 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:04.003302 systemd-logind[1913]: New session 26 of user core. Jan 13 20:11:04.019648 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 20:11:04.255587 sshd[5139]: Connection closed by 147.75.109.163 port 56932 Jan 13 20:11:04.256485 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:04.262908 systemd[1]: sshd@25-172.31.22.29:22-147.75.109.163:56932.service: Deactivated successfully. Jan 13 20:11:04.269375 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 20:11:04.271528 systemd-logind[1913]: Session 26 logged out. Waiting for processes to exit. Jan 13 20:11:04.273682 systemd-logind[1913]: Removed session 26. Jan 13 20:11:04.293884 systemd[1]: Started sshd@26-172.31.22.29:22-147.75.109.163:56938.service - OpenSSH per-connection server daemon (147.75.109.163:56938). Jan 13 20:11:04.479285 sshd[5150]: Accepted publickey for core from 147.75.109.163 port 56938 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:04.481896 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:04.490610 systemd-logind[1913]: New session 27 of user core. Jan 13 20:11:04.496644 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 20:11:06.836402 containerd[1931]: time="2025-01-13T20:11:06.836274309Z" level=info msg="StopContainer for \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\" with timeout 30 (s)" Jan 13 20:11:06.838188 containerd[1931]: time="2025-01-13T20:11:06.837404997Z" level=info msg="Stop container \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\" with signal terminated" Jan 13 20:11:06.874984 systemd[1]: cri-containerd-55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7.scope: Deactivated successfully. Jan 13 20:11:06.902241 containerd[1931]: time="2025-01-13T20:11:06.901903557Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:11:06.916856 containerd[1931]: time="2025-01-13T20:11:06.916439373Z" level=info msg="StopContainer for \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\" with timeout 2 (s)" Jan 13 20:11:06.917233 containerd[1931]: time="2025-01-13T20:11:06.917192013Z" level=info msg="Stop container \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\" with signal terminated" Jan 13 20:11:06.939849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7-rootfs.mount: Deactivated successfully. 
Jan 13 20:11:06.951423 systemd-networkd[1849]: lxc_health: Link DOWN Jan 13 20:11:06.951438 systemd-networkd[1849]: lxc_health: Lost carrier Jan 13 20:11:06.965642 containerd[1931]: time="2025-01-13T20:11:06.964822425Z" level=info msg="shim disconnected" id=55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7 namespace=k8s.io Jan 13 20:11:06.965831 containerd[1931]: time="2025-01-13T20:11:06.965639529Z" level=warning msg="cleaning up after shim disconnected" id=55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7 namespace=k8s.io Jan 13 20:11:06.965831 containerd[1931]: time="2025-01-13T20:11:06.965688657Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:06.984540 systemd[1]: cri-containerd-a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9.scope: Deactivated successfully. Jan 13 20:11:06.985985 systemd[1]: cri-containerd-a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9.scope: Consumed 14.285s CPU time. Jan 13 20:11:07.007465 containerd[1931]: time="2025-01-13T20:11:07.007408409Z" level=info msg="StopContainer for \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\" returns successfully" Jan 13 20:11:07.008981 containerd[1931]: time="2025-01-13T20:11:07.008635181Z" level=info msg="StopPodSandbox for \"5e327f96c4d68d2b91ac92f49aeafc92208541fb69491ef3a78efd0707ab8c5b\"" Jan 13 20:11:07.008981 containerd[1931]: time="2025-01-13T20:11:07.008719913Z" level=info msg="Container to stop \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:07.013810 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e327f96c4d68d2b91ac92f49aeafc92208541fb69491ef3a78efd0707ab8c5b-shm.mount: Deactivated successfully. Jan 13 20:11:07.032759 systemd[1]: cri-containerd-5e327f96c4d68d2b91ac92f49aeafc92208541fb69491ef3a78efd0707ab8c5b.scope: Deactivated successfully. Jan 13 20:11:07.041812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9-rootfs.mount: Deactivated successfully. 
Jan 13 20:11:07.051900 containerd[1931]: time="2025-01-13T20:11:07.051603234Z" level=info msg="shim disconnected" id=a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9 namespace=k8s.io Jan 13 20:11:07.051900 containerd[1931]: time="2025-01-13T20:11:07.051715470Z" level=warning msg="cleaning up after shim disconnected" id=a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9 namespace=k8s.io Jan 13 20:11:07.051900 containerd[1931]: time="2025-01-13T20:11:07.051737106Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:07.090233 containerd[1931]: time="2025-01-13T20:11:07.089572422Z" level=info msg="shim disconnected" id=5e327f96c4d68d2b91ac92f49aeafc92208541fb69491ef3a78efd0707ab8c5b namespace=k8s.io Jan 13 20:11:07.090233 containerd[1931]: time="2025-01-13T20:11:07.089651898Z" level=warning msg="cleaning up after shim disconnected" id=5e327f96c4d68d2b91ac92f49aeafc92208541fb69491ef3a78efd0707ab8c5b namespace=k8s.io Jan 13 20:11:07.090233 containerd[1931]: time="2025-01-13T20:11:07.089675790Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:07.090233 containerd[1931]: time="2025-01-13T20:11:07.089740266Z" level=info msg="StopContainer for \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\" returns successfully" Jan 13 20:11:07.091558 containerd[1931]: time="2025-01-13T20:11:07.090665646Z" level=info msg="StopPodSandbox for \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\"" Jan 13 20:11:07.091558 containerd[1931]: time="2025-01-13T20:11:07.090751290Z" level=info msg="Container to stop \"641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:07.091558 containerd[1931]: time="2025-01-13T20:11:07.090776778Z" level=info msg="Container to stop \"3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:07.091558 containerd[1931]: time="2025-01-13T20:11:07.090797790Z" level=info msg="Container to stop \"09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:07.091558 containerd[1931]: time="2025-01-13T20:11:07.090818022Z" level=info msg="Container to stop \"4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:07.091558 containerd[1931]: time="2025-01-13T20:11:07.090838542Z" level=info msg="Container to stop \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:11:07.107598 systemd[1]: cri-containerd-91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d.scope: Deactivated successfully. 
Jan 13 20:11:07.127143 containerd[1931]: time="2025-01-13T20:11:07.126990150Z" level=info msg="TearDown network for sandbox \"5e327f96c4d68d2b91ac92f49aeafc92208541fb69491ef3a78efd0707ab8c5b\" successfully" Jan 13 20:11:07.127505 containerd[1931]: time="2025-01-13T20:11:07.127320894Z" level=info msg="StopPodSandbox for \"5e327f96c4d68d2b91ac92f49aeafc92208541fb69491ef3a78efd0707ab8c5b\" returns successfully" Jan 13 20:11:07.169731 containerd[1931]: time="2025-01-13T20:11:07.169406670Z" level=info msg="shim disconnected" id=91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d namespace=k8s.io Jan 13 20:11:07.169731 containerd[1931]: time="2025-01-13T20:11:07.169495062Z" level=warning msg="cleaning up after shim disconnected" id=91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d namespace=k8s.io Jan 13 20:11:07.169731 containerd[1931]: time="2025-01-13T20:11:07.169514238Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:07.193720 containerd[1931]: time="2025-01-13T20:11:07.193649454Z" level=info msg="TearDown network for sandbox \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\" successfully" Jan 13 20:11:07.193720 containerd[1931]: time="2025-01-13T20:11:07.193704114Z" level=info msg="StopPodSandbox for \"91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d\" returns successfully" Jan 13 20:11:07.250483 kubelet[3244]: I0113 20:11:07.250442 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wft6j\" (UniqueName: \"kubernetes.io/projected/b84dd74e-0d19-4431-8dd2-34e56913efdb-kube-api-access-wft6j\") pod \"b84dd74e-0d19-4431-8dd2-34e56913efdb\" (UID: \"b84dd74e-0d19-4431-8dd2-34e56913efdb\") " Jan 13 20:11:07.252506 kubelet[3244]: I0113 20:11:07.251135 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b84dd74e-0d19-4431-8dd2-34e56913efdb-cilium-config-path\") pod \"b84dd74e-0d19-4431-8dd2-34e56913efdb\" (UID: \"b84dd74e-0d19-4431-8dd2-34e56913efdb\") " Jan 13 20:11:07.256517 kubelet[3244]: I0113 20:11:07.256324 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b84dd74e-0d19-4431-8dd2-34e56913efdb-kube-api-access-wft6j" (OuterVolumeSpecName: "kube-api-access-wft6j") pod "b84dd74e-0d19-4431-8dd2-34e56913efdb" (UID: "b84dd74e-0d19-4431-8dd2-34e56913efdb"). InnerVolumeSpecName "kube-api-access-wft6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:11:07.257216 kubelet[3244]: I0113 20:11:07.257181 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b84dd74e-0d19-4431-8dd2-34e56913efdb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b84dd74e-0d19-4431-8dd2-34e56913efdb" (UID: "b84dd74e-0d19-4431-8dd2-34e56913efdb"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:11:07.351862 kubelet[3244]: I0113 20:11:07.351701 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-host-proc-sys-net\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.351862 kubelet[3244]: I0113 20:11:07.351775 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-cilium-run\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.351862 kubelet[3244]: I0113 20:11:07.351826 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16e9fb72-3c31-4856-8a9b-f6a97a009515-clustermesh-secrets\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.352134 kubelet[3244]: I0113 20:11:07.351885 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-bpf-maps\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.352134 kubelet[3244]: I0113 20:11:07.351935 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16e9fb72-3c31-4856-8a9b-f6a97a009515-cilium-config-path\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.352134 kubelet[3244]: I0113 20:11:07.351996 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-cilium-cgroup\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.352134 kubelet[3244]: I0113 20:11:07.352044 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16e9fb72-3c31-4856-8a9b-f6a97a009515-hubble-tls\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.352134 kubelet[3244]: I0113 20:11:07.352088 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnvzg\" (UniqueName: \"kubernetes.io/projected/16e9fb72-3c31-4856-8a9b-f6a97a009515-kube-api-access-pnvzg\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.352134 kubelet[3244]: I0113 20:11:07.352134 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-host-proc-sys-kernel\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.352491 kubelet[3244]: I0113 20:11:07.352172 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-hostproc\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: 
\"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.352491 kubelet[3244]: I0113 20:11:07.352220 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-lib-modules\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.352491 kubelet[3244]: I0113 20:11:07.352267 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-cni-path\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.352491 kubelet[3244]: I0113 20:11:07.352306 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-xtables-lock\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.352491 kubelet[3244]: I0113 20:11:07.352346 3244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-etc-cni-netd\") pod \"16e9fb72-3c31-4856-8a9b-f6a97a009515\" (UID: \"16e9fb72-3c31-4856-8a9b-f6a97a009515\") " Jan 13 20:11:07.352491 kubelet[3244]: I0113 20:11:07.352433 3244 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wft6j\" (UniqueName: \"kubernetes.io/projected/b84dd74e-0d19-4431-8dd2-34e56913efdb-kube-api-access-wft6j\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.352491 kubelet[3244]: I0113 20:11:07.352460 3244 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b84dd74e-0d19-4431-8dd2-34e56913efdb-cilium-config-path\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.352840 kubelet[3244]: I0113 20:11:07.352532 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:07.352840 kubelet[3244]: I0113 20:11:07.352590 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:07.352840 kubelet[3244]: I0113 20:11:07.352628 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:07.362054 kubelet[3244]: I0113 20:11:07.359792 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:07.362054 kubelet[3244]: I0113 20:11:07.360644 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:07.362054 kubelet[3244]: I0113 20:11:07.360691 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-hostproc" (OuterVolumeSpecName: "hostproc") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:07.362054 kubelet[3244]: I0113 20:11:07.360732 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:07.362054 kubelet[3244]: I0113 20:11:07.360810 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-cni-path" (OuterVolumeSpecName: "cni-path") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:07.362506 kubelet[3244]: I0113 20:11:07.360854 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:07.362506 kubelet[3244]: I0113 20:11:07.360905 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:11:07.375647 kubelet[3244]: I0113 20:11:07.375583 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e9fb72-3c31-4856-8a9b-f6a97a009515-kube-api-access-pnvzg" (OuterVolumeSpecName: "kube-api-access-pnvzg") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "kube-api-access-pnvzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:11:07.381411 kubelet[3244]: I0113 20:11:07.379500 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16e9fb72-3c31-4856-8a9b-f6a97a009515-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:11:07.382447 kubelet[3244]: I0113 20:11:07.382382 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16e9fb72-3c31-4856-8a9b-f6a97a009515-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:11:07.383258 kubelet[3244]: I0113 20:11:07.382892 3244 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16e9fb72-3c31-4856-8a9b-f6a97a009515-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "16e9fb72-3c31-4856-8a9b-f6a97a009515" (UID: "16e9fb72-3c31-4856-8a9b-f6a97a009515"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:11:07.422397 kubelet[3244]: I0113 20:11:07.419459 3244 scope.go:117] "RemoveContainer" containerID="a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9" Jan 13 20:11:07.430770 containerd[1931]: time="2025-01-13T20:11:07.430644248Z" level=info msg="RemoveContainer for \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\"" Jan 13 20:11:07.443214 containerd[1931]: time="2025-01-13T20:11:07.443146688Z" level=info msg="RemoveContainer for \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\" returns successfully" Jan 13 20:11:07.444280 kubelet[3244]: I0113 20:11:07.444003 3244 scope.go:117] "RemoveContainer" containerID="4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e" Jan 13 20:11:07.447652 systemd[1]: Removed slice kubepods-burstable-pod16e9fb72_3c31_4856_8a9b_f6a97a009515.slice - libcontainer container kubepods-burstable-pod16e9fb72_3c31_4856_8a9b_f6a97a009515.slice. Jan 13 20:11:07.448286 systemd[1]: kubepods-burstable-pod16e9fb72_3c31_4856_8a9b_f6a97a009515.slice: Consumed 14.429s CPU time. 
Jan 13 20:11:07.452336 containerd[1931]: time="2025-01-13T20:11:07.451711484Z" level=info msg="RemoveContainer for \"4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e\"" Jan 13 20:11:07.453764 kubelet[3244]: I0113 20:11:07.452747 3244 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-host-proc-sys-net\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.453764 kubelet[3244]: I0113 20:11:07.452787 3244 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-cilium-run\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.453764 kubelet[3244]: I0113 20:11:07.452840 3244 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16e9fb72-3c31-4856-8a9b-f6a97a009515-clustermesh-secrets\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.453764 kubelet[3244]: I0113 20:11:07.452868 3244 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-bpf-maps\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.453764 kubelet[3244]: I0113 20:11:07.452923 3244 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16e9fb72-3c31-4856-8a9b-f6a97a009515-cilium-config-path\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.453764 kubelet[3244]: I0113 20:11:07.452952 3244 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-cilium-cgroup\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.453764 kubelet[3244]: I0113 20:11:07.452976 3244 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16e9fb72-3c31-4856-8a9b-f6a97a009515-hubble-tls\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.453764 kubelet[3244]: I0113 20:11:07.453033 3244 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pnvzg\" (UniqueName: \"kubernetes.io/projected/16e9fb72-3c31-4856-8a9b-f6a97a009515-kube-api-access-pnvzg\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.454251 kubelet[3244]: I0113 20:11:07.453062 3244 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-host-proc-sys-kernel\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.454251 kubelet[3244]: I0113 20:11:07.453115 3244 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-hostproc\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.454251 kubelet[3244]: I0113 20:11:07.453142 3244 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-lib-modules\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.454251 kubelet[3244]: I0113 20:11:07.453192 3244 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-cni-path\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.454251 kubelet[3244]: I0113 
20:11:07.453220 3244 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-xtables-lock\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.454251 kubelet[3244]: I0113 20:11:07.453242 3244 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16e9fb72-3c31-4856-8a9b-f6a97a009515-etc-cni-netd\") on node \"ip-172-31-22-29\" DevicePath \"\"" Jan 13 20:11:07.458388 systemd[1]: Removed slice kubepods-besteffort-podb84dd74e_0d19_4431_8dd2_34e56913efdb.slice - libcontainer container kubepods-besteffort-podb84dd74e_0d19_4431_8dd2_34e56913efdb.slice. Jan 13 20:11:07.465235 containerd[1931]: time="2025-01-13T20:11:07.465180872Z" level=info msg="RemoveContainer for \"4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e\" returns successfully" Jan 13 20:11:07.466337 kubelet[3244]: I0113 20:11:07.466299 3244 scope.go:117] "RemoveContainer" containerID="3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8" Jan 13 20:11:07.480009 containerd[1931]: time="2025-01-13T20:11:07.479259404Z" level=info msg="RemoveContainer for \"3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8\"" Jan 13 20:11:07.489873 containerd[1931]: time="2025-01-13T20:11:07.489805172Z" level=info msg="RemoveContainer for \"3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8\" returns successfully" Jan 13 20:11:07.490171 kubelet[3244]: I0113 20:11:07.490127 3244 scope.go:117] "RemoveContainer" containerID="09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795" Jan 13 20:11:07.494190 containerd[1931]: time="2025-01-13T20:11:07.494126216Z" level=info msg="RemoveContainer for \"09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795\"" Jan 13 20:11:07.502231 containerd[1931]: time="2025-01-13T20:11:07.502126412Z" level=info msg="RemoveContainer for \"09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795\" returns successfully" Jan 13 20:11:07.502696 kubelet[3244]: I0113 20:11:07.502454 3244 scope.go:117] "RemoveContainer" containerID="641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b" Jan 13 20:11:07.505906 containerd[1931]: time="2025-01-13T20:11:07.505814312Z" level=info msg="RemoveContainer for \"641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b\"" Jan 13 20:11:07.514033 containerd[1931]: time="2025-01-13T20:11:07.513951272Z" level=info msg="RemoveContainer for \"641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b\" returns successfully" Jan 13 20:11:07.514565 kubelet[3244]: I0113 20:11:07.514306 3244 scope.go:117] "RemoveContainer" containerID="a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9" Jan 13 20:11:07.514946 containerd[1931]: time="2025-01-13T20:11:07.514840484Z" level=error msg="ContainerStatus for \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\": not found" Jan 13 20:11:07.515478 kubelet[3244]: E0113 20:11:07.515427 3244 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\": not found" containerID="a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9" Jan 13 20:11:07.515948 
kubelet[3244]: I0113 20:11:07.515596 3244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9"} err="failed to get container status \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"a34fcba0fc51c0ea19b21a8c7909720ff1825cd660a06ecbbd74c4853427a4f9\": not found" Jan 13 20:11:07.515948 kubelet[3244]: I0113 20:11:07.515629 3244 scope.go:117] "RemoveContainer" containerID="4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e" Jan 13 20:11:07.516811 containerd[1931]: time="2025-01-13T20:11:07.516010844Z" level=error msg="ContainerStatus for \"4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e\": not found" Jan 13 20:11:07.518669 kubelet[3244]: E0113 20:11:07.517385 3244 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e\": not found" containerID="4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e" Jan 13 20:11:07.518669 kubelet[3244]: I0113 20:11:07.517446 3244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e"} err="failed to get container status \"4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ac96c9fed1a9a6fbf0d9aadd05354648d0aff85b3f853997856c5b3106b869e\": not found" Jan 13 20:11:07.518669 kubelet[3244]: I0113 20:11:07.517472 3244 scope.go:117] "RemoveContainer" containerID="3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8" Jan 13 20:11:07.519450 containerd[1931]: time="2025-01-13T20:11:07.519278696Z" level=error msg="ContainerStatus for \"3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8\": not found" Jan 13 20:11:07.519838 kubelet[3244]: E0113 20:11:07.519705 3244 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8\": not found" containerID="3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8" Jan 13 20:11:07.519838 kubelet[3244]: I0113 20:11:07.519765 3244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8"} err="failed to get container status \"3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ffef2efe4b0faaaecd45e55697046d64dda9275f4b67298af4e801eb94db5b8\": not found" Jan 13 20:11:07.519838 kubelet[3244]: I0113 20:11:07.519791 3244 scope.go:117] "RemoveContainer" containerID="09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795" Jan 13 20:11:07.521146 containerd[1931]: 
time="2025-01-13T20:11:07.520848572Z" level=error msg="ContainerStatus for \"09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795\": not found" Jan 13 20:11:07.521534 kubelet[3244]: E0113 20:11:07.521495 3244 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795\": not found" containerID="09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795" Jan 13 20:11:07.521629 kubelet[3244]: I0113 20:11:07.521564 3244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795"} err="failed to get container status \"09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795\": rpc error: code = NotFound desc = an error occurred when try to find container \"09d3218d0e8e96dc75268ae39f3a7f934f2a1522cb25ca7a025c9d5f1cc45795\": not found" Jan 13 20:11:07.521629 kubelet[3244]: I0113 20:11:07.521591 3244 scope.go:117] "RemoveContainer" containerID="641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b" Jan 13 20:11:07.522499 containerd[1931]: time="2025-01-13T20:11:07.522427700Z" level=error msg="ContainerStatus for \"641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b\": not found" Jan 13 20:11:07.522757 kubelet[3244]: E0113 20:11:07.522715 3244 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b\": not found" containerID="641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b" Jan 13 20:11:07.522866 kubelet[3244]: I0113 20:11:07.522780 3244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b"} err="failed to get container status \"641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b\": rpc error: code = NotFound desc = an error occurred when try to find container \"641da771cd71ef2aadd7046f40c9569883edf0c77ee807e826ae7570a9c7f31b\": not found" Jan 13 20:11:07.522866 kubelet[3244]: I0113 20:11:07.522805 3244 scope.go:117] "RemoveContainer" containerID="55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7" Jan 13 20:11:07.525136 containerd[1931]: time="2025-01-13T20:11:07.525085856Z" level=info msg="RemoveContainer for \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\"" Jan 13 20:11:07.530731 containerd[1931]: time="2025-01-13T20:11:07.530675996Z" level=info msg="RemoveContainer for \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\" returns successfully" Jan 13 20:11:07.531064 kubelet[3244]: I0113 20:11:07.531021 3244 scope.go:117] "RemoveContainer" containerID="55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7" Jan 13 20:11:07.531724 containerd[1931]: time="2025-01-13T20:11:07.531552092Z" level=error msg="ContainerStatus for \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\": not found" Jan 13 20:11:07.531844 kubelet[3244]: E0113 20:11:07.531806 3244 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\": not found" containerID="55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7" Jan 13 20:11:07.531924 kubelet[3244]: I0113 20:11:07.531861 3244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7"} err="failed to get container status \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"55ab1b3a35b2b0483e68fb1d0962e40576f41ff386ac38f076113be8ee1294d7\": not found" Jan 13 20:11:07.843563 kubelet[3244]: I0113 20:11:07.843496 3244 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="16e9fb72-3c31-4856-8a9b-f6a97a009515" path="/var/lib/kubelet/pods/16e9fb72-3c31-4856-8a9b-f6a97a009515/volumes" Jan 13 20:11:07.845944 kubelet[3244]: I0113 20:11:07.845684 3244 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b84dd74e-0d19-4431-8dd2-34e56913efdb" path="/var/lib/kubelet/pods/b84dd74e-0d19-4431-8dd2-34e56913efdb/volumes" Jan 13 20:11:07.860937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e327f96c4d68d2b91ac92f49aeafc92208541fb69491ef3a78efd0707ab8c5b-rootfs.mount: Deactivated successfully. Jan 13 20:11:07.861323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d-rootfs.mount: Deactivated successfully. Jan 13 20:11:07.861606 systemd[1]: var-lib-kubelet-pods-b84dd74e\x2d0d19\x2d4431\x2d8dd2\x2d34e56913efdb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwft6j.mount: Deactivated successfully. Jan 13 20:11:07.861927 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91668022691598425b6cd52f4c54fd2b333626b3efca169478dbc42c0fa16d4d-shm.mount: Deactivated successfully. Jan 13 20:11:07.862180 systemd[1]: var-lib-kubelet-pods-16e9fb72\x2d3c31\x2d4856\x2d8a9b\x2df6a97a009515-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpnvzg.mount: Deactivated successfully. Jan 13 20:11:07.862463 systemd[1]: var-lib-kubelet-pods-16e9fb72\x2d3c31\x2d4856\x2d8a9b\x2df6a97a009515-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:11:07.862711 systemd[1]: var-lib-kubelet-pods-16e9fb72\x2d3c31\x2d4856\x2d8a9b\x2df6a97a009515-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:11:08.775869 sshd[5152]: Connection closed by 147.75.109.163 port 56938 Jan 13 20:11:08.776396 sshd-session[5150]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:08.783923 systemd[1]: sshd@26-172.31.22.29:22-147.75.109.163:56938.service: Deactivated successfully. Jan 13 20:11:08.788240 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 20:11:08.788903 systemd[1]: session-27.scope: Consumed 1.597s CPU time. Jan 13 20:11:08.790440 systemd-logind[1913]: Session 27 logged out. Waiting for processes to exit. Jan 13 20:11:08.792669 systemd-logind[1913]: Removed session 27. 
Jan 13 20:11:08.815893 systemd[1]: Started sshd@27-172.31.22.29:22-147.75.109.163:37766.service - OpenSSH per-connection server daemon (147.75.109.163:37766). Jan 13 20:11:08.998536 sshd[5313]: Accepted publickey for core from 147.75.109.163 port 37766 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:09.001611 sshd-session[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:09.011737 systemd-logind[1913]: New session 28 of user core. Jan 13 20:11:09.016634 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 20:11:09.606755 ntpd[1908]: Deleting interface #11 lxc_health, fe80::244b:82ff:fe70:9548%8#123, interface stats: received=0, sent=0, dropped=0, active_time=82 secs Jan 13 20:11:09.607815 ntpd[1908]: 13 Jan 20:11:09 ntpd[1908]: Deleting interface #11 lxc_health, fe80::244b:82ff:fe70:9548%8#123, interface stats: received=0, sent=0, dropped=0, active_time=82 secs Jan 13 20:11:10.706675 sshd[5315]: Connection closed by 147.75.109.163 port 37766 Jan 13 20:11:10.707139 sshd-session[5313]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:10.716048 systemd[1]: sshd@27-172.31.22.29:22-147.75.109.163:37766.service: Deactivated successfully. Jan 13 20:11:10.722322 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 20:11:10.725613 systemd[1]: session-28.scope: Consumed 1.487s CPU time. Jan 13 20:11:10.730417 systemd-logind[1913]: Session 28 logged out. Waiting for processes to exit. Jan 13 20:11:10.736921 kubelet[3244]: I0113 20:11:10.734636 3244 topology_manager.go:215] "Topology Admit Handler" podUID="b71b127a-e2c3-4d99-ae37-f3d39f74095d" podNamespace="kube-system" podName="cilium-gn58d" Jan 13 20:11:10.739258 kubelet[3244]: E0113 20:11:10.737557 3244 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16e9fb72-3c31-4856-8a9b-f6a97a009515" containerName="mount-cgroup" Jan 13 20:11:10.739258 kubelet[3244]: E0113 20:11:10.737604 3244 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16e9fb72-3c31-4856-8a9b-f6a97a009515" containerName="apply-sysctl-overwrites" Jan 13 20:11:10.739258 kubelet[3244]: E0113 20:11:10.737624 3244 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16e9fb72-3c31-4856-8a9b-f6a97a009515" containerName="clean-cilium-state" Jan 13 20:11:10.739258 kubelet[3244]: E0113 20:11:10.737642 3244 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b84dd74e-0d19-4431-8dd2-34e56913efdb" containerName="cilium-operator" Jan 13 20:11:10.739258 kubelet[3244]: E0113 20:11:10.737659 3244 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16e9fb72-3c31-4856-8a9b-f6a97a009515" containerName="mount-bpf-fs" Jan 13 20:11:10.739258 kubelet[3244]: E0113 20:11:10.737678 3244 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="16e9fb72-3c31-4856-8a9b-f6a97a009515" containerName="cilium-agent" Jan 13 20:11:10.739258 kubelet[3244]: I0113 20:11:10.737736 3244 memory_manager.go:354] "RemoveStaleState removing state" podUID="16e9fb72-3c31-4856-8a9b-f6a97a009515" containerName="cilium-agent" Jan 13 20:11:10.739258 kubelet[3244]: I0113 20:11:10.737754 3244 memory_manager.go:354] "RemoveStaleState removing state" podUID="b84dd74e-0d19-4431-8dd2-34e56913efdb" containerName="cilium-operator" Jan 13 20:11:10.759895 kubelet[3244]: W0113 20:11:10.759398 3244 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User 
"system:node:ip-172-31-22-29" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-29' and this object Jan 13 20:11:10.759895 kubelet[3244]: E0113 20:11:10.759463 3244 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-22-29" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-29' and this object Jan 13 20:11:10.759895 kubelet[3244]: W0113 20:11:10.759533 3244 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-22-29" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-29' and this object Jan 13 20:11:10.759895 kubelet[3244]: E0113 20:11:10.759557 3244 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-22-29" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-29' and this object Jan 13 20:11:10.759895 kubelet[3244]: W0113 20:11:10.759611 3244 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-22-29" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-29' and this object Jan 13 20:11:10.760282 kubelet[3244]: E0113 20:11:10.759636 3244 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-22-29" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-29' and this object Jan 13 20:11:10.760282 kubelet[3244]: W0113 20:11:10.759689 3244 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-22-29" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-29' and this object Jan 13 20:11:10.760282 kubelet[3244]: E0113 20:11:10.759712 3244 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-22-29" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-22-29' and this object Jan 13 20:11:10.765745 systemd[1]: Started sshd@28-172.31.22.29:22-147.75.109.163:37774.service - OpenSSH per-connection server daemon (147.75.109.163:37774). Jan 13 20:11:10.768557 systemd-logind[1913]: Removed session 28. Jan 13 20:11:10.781730 systemd[1]: Created slice kubepods-burstable-podb71b127a_e2c3_4d99_ae37_f3d39f74095d.slice - libcontainer container kubepods-burstable-podb71b127a_e2c3_4d99_ae37_f3d39f74095d.slice. 
Jan 13 20:11:10.874439 kubelet[3244]: I0113 20:11:10.874280 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b71b127a-e2c3-4d99-ae37-f3d39f74095d-bpf-maps\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.874439 kubelet[3244]: I0113 20:11:10.874384 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b71b127a-e2c3-4d99-ae37-f3d39f74095d-host-proc-sys-kernel\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.874744 kubelet[3244]: I0113 20:11:10.874500 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b71b127a-e2c3-4d99-ae37-f3d39f74095d-hostproc\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.874744 kubelet[3244]: I0113 20:11:10.874553 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b71b127a-e2c3-4d99-ae37-f3d39f74095d-xtables-lock\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.874744 kubelet[3244]: I0113 20:11:10.874622 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b71b127a-e2c3-4d99-ae37-f3d39f74095d-cilium-config-path\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.874744 kubelet[3244]: I0113 20:11:10.874669 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frlz5\" (UniqueName: \"kubernetes.io/projected/b71b127a-e2c3-4d99-ae37-f3d39f74095d-kube-api-access-frlz5\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.874744 kubelet[3244]: I0113 20:11:10.874713 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b71b127a-e2c3-4d99-ae37-f3d39f74095d-cilium-run\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.875115 kubelet[3244]: I0113 20:11:10.874755 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b71b127a-e2c3-4d99-ae37-f3d39f74095d-cilium-ipsec-secrets\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.875115 kubelet[3244]: I0113 20:11:10.874804 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b71b127a-e2c3-4d99-ae37-f3d39f74095d-host-proc-sys-net\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.875115 kubelet[3244]: I0113 20:11:10.874847 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b71b127a-e2c3-4d99-ae37-f3d39f74095d-hubble-tls\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.875115 kubelet[3244]: I0113 20:11:10.874890 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b71b127a-e2c3-4d99-ae37-f3d39f74095d-cni-path\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.875115 kubelet[3244]: I0113 20:11:10.874955 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b71b127a-e2c3-4d99-ae37-f3d39f74095d-etc-cni-netd\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.875115 kubelet[3244]: I0113 20:11:10.875006 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b71b127a-e2c3-4d99-ae37-f3d39f74095d-clustermesh-secrets\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.875457 kubelet[3244]: I0113 20:11:10.875057 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b71b127a-e2c3-4d99-ae37-f3d39f74095d-cilium-cgroup\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:10.875457 kubelet[3244]: I0113 20:11:10.875099 3244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b71b127a-e2c3-4d99-ae37-f3d39f74095d-lib-modules\") pod \"cilium-gn58d\" (UID: \"b71b127a-e2c3-4d99-ae37-f3d39f74095d\") " pod="kube-system/cilium-gn58d" Jan 13 20:11:11.002657 sshd[5325]: Accepted publickey for core from 147.75.109.163 port 37774 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:11.000672 sshd-session[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:11.011220 systemd-logind[1913]: New session 29 of user core. Jan 13 20:11:11.017627 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 13 20:11:11.131800 kubelet[3244]: E0113 20:11:11.131740 3244 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:11:11.137391 sshd[5328]: Connection closed by 147.75.109.163 port 37774 Jan 13 20:11:11.137668 sshd-session[5325]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:11.142860 systemd-logind[1913]: Session 29 logged out. Waiting for processes to exit. Jan 13 20:11:11.143687 systemd[1]: sshd@28-172.31.22.29:22-147.75.109.163:37774.service: Deactivated successfully. Jan 13 20:11:11.146875 systemd[1]: session-29.scope: Deactivated successfully. Jan 13 20:11:11.151315 systemd-logind[1913]: Removed session 29. Jan 13 20:11:11.175172 systemd[1]: Started sshd@29-172.31.22.29:22-147.75.109.163:37780.service - OpenSSH per-connection server daemon (147.75.109.163:37780). 
Jan 13 20:11:11.367593 sshd[5334]: Accepted publickey for core from 147.75.109.163 port 37780 ssh2: RSA SHA256:IRHkteilZRLg/mCVEzdResksy7NfUBDRRywgALKaHg0 Jan 13 20:11:11.369578 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:11:11.378377 systemd-logind[1913]: New session 30 of user core. Jan 13 20:11:11.381649 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 13 20:11:11.976566 kubelet[3244]: E0113 20:11:11.976433 3244 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 13 20:11:11.976566 kubelet[3244]: E0113 20:11:11.976490 3244 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 13 20:11:11.976566 kubelet[3244]: E0113 20:11:11.976513 3244 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-gn58d: failed to sync secret cache: timed out waiting for the condition Jan 13 20:11:11.976566 kubelet[3244]: E0113 20:11:11.976437 3244 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jan 13 20:11:11.978383 kubelet[3244]: E0113 20:11:11.977165 3244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b71b127a-e2c3-4d99-ae37-f3d39f74095d-clustermesh-secrets podName:b71b127a-e2c3-4d99-ae37-f3d39f74095d nodeName:}" failed. No retries permitted until 2025-01-13 20:11:12.476524446 +0000 UTC m=+126.812303772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/b71b127a-e2c3-4d99-ae37-f3d39f74095d-clustermesh-secrets") pod "cilium-gn58d" (UID: "b71b127a-e2c3-4d99-ae37-f3d39f74095d") : failed to sync secret cache: timed out waiting for the condition Jan 13 20:11:11.978383 kubelet[3244]: E0113 20:11:11.977216 3244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b71b127a-e2c3-4d99-ae37-f3d39f74095d-hubble-tls podName:b71b127a-e2c3-4d99-ae37-f3d39f74095d nodeName:}" failed. No retries permitted until 2025-01-13 20:11:12.47719743 +0000 UTC m=+126.812976756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/b71b127a-e2c3-4d99-ae37-f3d39f74095d-hubble-tls") pod "cilium-gn58d" (UID: "b71b127a-e2c3-4d99-ae37-f3d39f74095d") : failed to sync secret cache: timed out waiting for the condition Jan 13 20:11:11.978383 kubelet[3244]: E0113 20:11:11.977246 3244 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b71b127a-e2c3-4d99-ae37-f3d39f74095d-cilium-ipsec-secrets podName:b71b127a-e2c3-4d99-ae37-f3d39f74095d nodeName:}" failed. No retries permitted until 2025-01-13 20:11:12.477228474 +0000 UTC m=+126.813007800 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/b71b127a-e2c3-4d99-ae37-f3d39f74095d-cilium-ipsec-secrets") pod "cilium-gn58d" (UID: "b71b127a-e2c3-4d99-ae37-f3d39f74095d") : failed to sync secret cache: timed out waiting for the condition Jan 13 20:11:12.592437 containerd[1931]: time="2025-01-13T20:11:12.592296757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gn58d,Uid:b71b127a-e2c3-4d99-ae37-f3d39f74095d,Namespace:kube-system,Attempt:0,}" Jan 13 20:11:12.638572 containerd[1931]: time="2025-01-13T20:11:12.638038993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:11:12.638572 containerd[1931]: time="2025-01-13T20:11:12.638160577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:11:12.638572 containerd[1931]: time="2025-01-13T20:11:12.638255821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:11:12.638816 containerd[1931]: time="2025-01-13T20:11:12.638509573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:11:12.691672 systemd[1]: Started cri-containerd-8fc9bed17a920e5bc3f9121d3e92e38504d1d44873699f2372968e6998561633.scope - libcontainer container 8fc9bed17a920e5bc3f9121d3e92e38504d1d44873699f2372968e6998561633. Jan 13 20:11:12.745984 containerd[1931]: time="2025-01-13T20:11:12.745570430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gn58d,Uid:b71b127a-e2c3-4d99-ae37-f3d39f74095d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fc9bed17a920e5bc3f9121d3e92e38504d1d44873699f2372968e6998561633\"" Jan 13 20:11:12.751280 containerd[1931]: time="2025-01-13T20:11:12.751226054Z" level=info msg="CreateContainer within sandbox \"8fc9bed17a920e5bc3f9121d3e92e38504d1d44873699f2372968e6998561633\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:11:12.773747 containerd[1931]: time="2025-01-13T20:11:12.773667878Z" level=info msg="CreateContainer within sandbox \"8fc9bed17a920e5bc3f9121d3e92e38504d1d44873699f2372968e6998561633\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"178a7d3ba9033483d7cb334eba24d32c6c5c04258a2c3e2969064fabb578bf86\"" Jan 13 20:11:12.774950 containerd[1931]: time="2025-01-13T20:11:12.774836258Z" level=info msg="StartContainer for \"178a7d3ba9033483d7cb334eba24d32c6c5c04258a2c3e2969064fabb578bf86\"" Jan 13 20:11:12.817670 systemd[1]: Started cri-containerd-178a7d3ba9033483d7cb334eba24d32c6c5c04258a2c3e2969064fabb578bf86.scope - libcontainer container 178a7d3ba9033483d7cb334eba24d32c6c5c04258a2c3e2969064fabb578bf86. 
Jan 13 20:11:12.839561 kubelet[3244]: E0113 20:11:12.839455 3244 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-h7whz" podUID="98ccbad4-e1fa-46fb-85af-254698d90ab8" Jan 13 20:11:12.878456 containerd[1931]: time="2025-01-13T20:11:12.878019507Z" level=info msg="StartContainer for \"178a7d3ba9033483d7cb334eba24d32c6c5c04258a2c3e2969064fabb578bf86\" returns successfully" Jan 13 20:11:12.894670 systemd[1]: cri-containerd-178a7d3ba9033483d7cb334eba24d32c6c5c04258a2c3e2969064fabb578bf86.scope: Deactivated successfully. Jan 13 20:11:12.946303 containerd[1931]: time="2025-01-13T20:11:12.946221831Z" level=info msg="shim disconnected" id=178a7d3ba9033483d7cb334eba24d32c6c5c04258a2c3e2969064fabb578bf86 namespace=k8s.io Jan 13 20:11:12.946303 containerd[1931]: time="2025-01-13T20:11:12.946300443Z" level=warning msg="cleaning up after shim disconnected" id=178a7d3ba9033483d7cb334eba24d32c6c5c04258a2c3e2969064fabb578bf86 namespace=k8s.io Jan 13 20:11:12.946851 containerd[1931]: time="2025-01-13T20:11:12.946321731Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:13.456247 containerd[1931]: time="2025-01-13T20:11:13.456163669Z" level=info msg="CreateContainer within sandbox \"8fc9bed17a920e5bc3f9121d3e92e38504d1d44873699f2372968e6998561633\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:11:13.482747 containerd[1931]: time="2025-01-13T20:11:13.482672114Z" level=info msg="CreateContainer within sandbox \"8fc9bed17a920e5bc3f9121d3e92e38504d1d44873699f2372968e6998561633\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"679c03cb9b1b8e1e43c556995fb314c0b0579acc1c0a632a79f579a5efac09a7\"" Jan 13 20:11:13.484057 containerd[1931]: time="2025-01-13T20:11:13.483995738Z" level=info msg="StartContainer for \"679c03cb9b1b8e1e43c556995fb314c0b0579acc1c0a632a79f579a5efac09a7\"" Jan 13 20:11:13.539761 systemd[1]: run-containerd-runc-k8s.io-679c03cb9b1b8e1e43c556995fb314c0b0579acc1c0a632a79f579a5efac09a7-runc.1osIYO.mount: Deactivated successfully. Jan 13 20:11:13.550677 systemd[1]: Started cri-containerd-679c03cb9b1b8e1e43c556995fb314c0b0579acc1c0a632a79f579a5efac09a7.scope - libcontainer container 679c03cb9b1b8e1e43c556995fb314c0b0579acc1c0a632a79f579a5efac09a7. Jan 13 20:11:13.598931 containerd[1931]: time="2025-01-13T20:11:13.598875518Z" level=info msg="StartContainer for \"679c03cb9b1b8e1e43c556995fb314c0b0579acc1c0a632a79f579a5efac09a7\" returns successfully" Jan 13 20:11:13.614893 systemd[1]: cri-containerd-679c03cb9b1b8e1e43c556995fb314c0b0579acc1c0a632a79f579a5efac09a7.scope: Deactivated successfully. Jan 13 20:11:13.696215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-679c03cb9b1b8e1e43c556995fb314c0b0579acc1c0a632a79f579a5efac09a7-rootfs.mount: Deactivated successfully. 
Jan 13 20:11:13.702083 containerd[1931]: time="2025-01-13T20:11:13.701993139Z" level=info msg="shim disconnected" id=679c03cb9b1b8e1e43c556995fb314c0b0579acc1c0a632a79f579a5efac09a7 namespace=k8s.io Jan 13 20:11:13.702083 containerd[1931]: time="2025-01-13T20:11:13.702069063Z" level=warning msg="cleaning up after shim disconnected" id=679c03cb9b1b8e1e43c556995fb314c0b0579acc1c0a632a79f579a5efac09a7 namespace=k8s.io Jan 13 20:11:13.702083 containerd[1931]: time="2025-01-13T20:11:13.702092451Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:14.461497 containerd[1931]: time="2025-01-13T20:11:14.461220230Z" level=info msg="CreateContainer within sandbox \"8fc9bed17a920e5bc3f9121d3e92e38504d1d44873699f2372968e6998561633\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:11:14.490729 containerd[1931]: time="2025-01-13T20:11:14.489270207Z" level=info msg="CreateContainer within sandbox \"8fc9bed17a920e5bc3f9121d3e92e38504d1d44873699f2372968e6998561633\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9854ce7a1565bd25b42c62d6ef68a818dd80f453e26f130c37d59fe0218d4c18\"" Jan 13 20:11:14.494385 containerd[1931]: time="2025-01-13T20:11:14.491290875Z" level=info msg="StartContainer for \"9854ce7a1565bd25b42c62d6ef68a818dd80f453e26f130c37d59fe0218d4c18\"" Jan 13 20:11:14.558675 systemd[1]: Started cri-containerd-9854ce7a1565bd25b42c62d6ef68a818dd80f453e26f130c37d59fe0218d4c18.scope - libcontainer container 9854ce7a1565bd25b42c62d6ef68a818dd80f453e26f130c37d59fe0218d4c18. Jan 13 20:11:14.616631 containerd[1931]: time="2025-01-13T20:11:14.616459947Z" level=info msg="StartContainer for \"9854ce7a1565bd25b42c62d6ef68a818dd80f453e26f130c37d59fe0218d4c18\" returns successfully" Jan 13 20:11:14.622476 systemd[1]: cri-containerd-9854ce7a1565bd25b42c62d6ef68a818dd80f453e26f130c37d59fe0218d4c18.scope: Deactivated successfully. Jan 13 20:11:14.669265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9854ce7a1565bd25b42c62d6ef68a818dd80f453e26f130c37d59fe0218d4c18-rootfs.mount: Deactivated successfully. 
Jan 13 20:11:14.686608 containerd[1931]: time="2025-01-13T20:11:14.686474704Z" level=info msg="shim disconnected" id=9854ce7a1565bd25b42c62d6ef68a818dd80f453e26f130c37d59fe0218d4c18 namespace=k8s.io Jan 13 20:11:14.686873 containerd[1931]: time="2025-01-13T20:11:14.686630308Z" level=warning msg="cleaning up after shim disconnected" id=9854ce7a1565bd25b42c62d6ef68a818dd80f453e26f130c37d59fe0218d4c18 namespace=k8s.io Jan 13 20:11:14.686873 containerd[1931]: time="2025-01-13T20:11:14.686653804Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:14.838926 kubelet[3244]: E0113 20:11:14.838770 3244 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-h7whz" podUID="98ccbad4-e1fa-46fb-85af-254698d90ab8" Jan 13 20:11:15.471586 containerd[1931]: time="2025-01-13T20:11:15.471332487Z" level=info msg="CreateContainer within sandbox \"8fc9bed17a920e5bc3f9121d3e92e38504d1d44873699f2372968e6998561633\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:11:15.518243 containerd[1931]: time="2025-01-13T20:11:15.518168020Z" level=info msg="CreateContainer within sandbox \"8fc9bed17a920e5bc3f9121d3e92e38504d1d44873699f2372968e6998561633\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"57cf5c13f76f341ad45876d17ecb3f7239b9188d8cf46cabbbbfa707e6d9f364\"" Jan 13 20:11:15.519409 containerd[1931]: time="2025-01-13T20:11:15.519223096Z" level=info msg="StartContainer for \"57cf5c13f76f341ad45876d17ecb3f7239b9188d8cf46cabbbbfa707e6d9f364\"" Jan 13 20:11:15.576671 systemd[1]: Started cri-containerd-57cf5c13f76f341ad45876d17ecb3f7239b9188d8cf46cabbbbfa707e6d9f364.scope - libcontainer container 57cf5c13f76f341ad45876d17ecb3f7239b9188d8cf46cabbbbfa707e6d9f364. Jan 13 20:11:15.620713 systemd[1]: cri-containerd-57cf5c13f76f341ad45876d17ecb3f7239b9188d8cf46cabbbbfa707e6d9f364.scope: Deactivated successfully. Jan 13 20:11:15.625125 containerd[1931]: time="2025-01-13T20:11:15.625048552Z" level=info msg="StartContainer for \"57cf5c13f76f341ad45876d17ecb3f7239b9188d8cf46cabbbbfa707e6d9f364\" returns successfully" Jan 13 20:11:15.658681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57cf5c13f76f341ad45876d17ecb3f7239b9188d8cf46cabbbbfa707e6d9f364-rootfs.mount: Deactivated successfully. 
Jan 13 20:11:15.669388 containerd[1931]: time="2025-01-13T20:11:15.669204760Z" level=info msg="shim disconnected" id=57cf5c13f76f341ad45876d17ecb3f7239b9188d8cf46cabbbbfa707e6d9f364 namespace=k8s.io Jan 13 20:11:15.669388 containerd[1931]: time="2025-01-13T20:11:15.669278056Z" level=warning msg="cleaning up after shim disconnected" id=57cf5c13f76f341ad45876d17ecb3f7239b9188d8cf46cabbbbfa707e6d9f364 namespace=k8s.io Jan 13 20:11:15.669388 containerd[1931]: time="2025-01-13T20:11:15.669297040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:16.133111 kubelet[3244]: E0113 20:11:16.133056 3244 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:11:16.476795 containerd[1931]: time="2025-01-13T20:11:16.476470360Z" level=info msg="CreateContainer within sandbox \"8fc9bed17a920e5bc3f9121d3e92e38504d1d44873699f2372968e6998561633\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:11:16.508666 containerd[1931]: time="2025-01-13T20:11:16.506867693Z" level=info msg="CreateContainer within sandbox \"8fc9bed17a920e5bc3f9121d3e92e38504d1d44873699f2372968e6998561633\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"be424c21b5e92131524521bf704979a579602a72f0db53a9c3f77197574616b8\"" Jan 13 20:11:16.511564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3255756636.mount: Deactivated successfully. Jan 13 20:11:16.514433 containerd[1931]: time="2025-01-13T20:11:16.512821985Z" level=info msg="StartContainer for \"be424c21b5e92131524521bf704979a579602a72f0db53a9c3f77197574616b8\"" Jan 13 20:11:16.571705 systemd[1]: Started cri-containerd-be424c21b5e92131524521bf704979a579602a72f0db53a9c3f77197574616b8.scope - libcontainer container be424c21b5e92131524521bf704979a579602a72f0db53a9c3f77197574616b8. 
Jan 13 20:11:16.633780 containerd[1931]: time="2025-01-13T20:11:16.633585053Z" level=info msg="StartContainer for \"be424c21b5e92131524521bf704979a579602a72f0db53a9c3f77197574616b8\" returns successfully" Jan 13 20:11:16.838328 kubelet[3244]: E0113 20:11:16.838146 3244 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-h7whz" podUID="98ccbad4-e1fa-46fb-85af-254698d90ab8" Jan 13 20:11:17.440423 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 13 20:11:17.516504 kubelet[3244]: I0113 20:11:17.516446 3244 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gn58d" podStartSLOduration=7.516389106 podStartE2EDuration="7.516389106s" podCreationTimestamp="2025-01-13 20:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:11:17.514756722 +0000 UTC m=+131.850536072" watchObservedRunningTime="2025-01-13 20:11:17.516389106 +0000 UTC m=+131.852168468" Jan 13 20:11:18.453808 kubelet[3244]: I0113 20:11:18.453741 3244 setters.go:568] "Node became not ready" node="ip-172-31-22-29" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:11:18Z","lastTransitionTime":"2025-01-13T20:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 20:11:18.838610 kubelet[3244]: E0113 20:11:18.838404 3244 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-h7whz" podUID="98ccbad4-e1fa-46fb-85af-254698d90ab8" Jan 13 20:11:20.170240 systemd[1]: run-containerd-runc-k8s.io-be424c21b5e92131524521bf704979a579602a72f0db53a9c3f77197574616b8-runc.A2CKJP.mount: Deactivated successfully. Jan 13 20:11:20.838846 kubelet[3244]: E0113 20:11:20.838756 3244 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-h7whz" podUID="98ccbad4-e1fa-46fb-85af-254698d90ab8" Jan 13 20:11:21.588284 systemd-networkd[1849]: lxc_health: Link UP Jan 13 20:11:21.596310 (udev-worker)[6175]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:11:21.605434 systemd-networkd[1849]: lxc_health: Gained carrier Jan 13 20:11:22.431379 systemd[1]: run-containerd-runc-k8s.io-be424c21b5e92131524521bf704979a579602a72f0db53a9c3f77197574616b8-runc.eIK0z0.mount: Deactivated successfully. Jan 13 20:11:23.315549 systemd-networkd[1849]: lxc_health: Gained IPv6LL Jan 13 20:11:24.748965 systemd[1]: run-containerd-runc-k8s.io-be424c21b5e92131524521bf704979a579602a72f0db53a9c3f77197574616b8-runc.Oh4Vlu.mount: Deactivated successfully. 
Jan 13 20:11:25.606833 ntpd[1908]: Listen normally on 14 lxc_health [fe80::7c11:45ff:feaf:fc9c%14]:123 Jan 13 20:11:25.607510 ntpd[1908]: 13 Jan 20:11:25 ntpd[1908]: Listen normally on 14 lxc_health [fe80::7c11:45ff:feaf:fc9c%14]:123 Jan 13 20:11:27.123735 sshd[5336]: Connection closed by 147.75.109.163 port 37780 Jan 13 20:11:27.126711 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Jan 13 20:11:27.136157 systemd[1]: sshd@29-172.31.22.29:22-147.75.109.163:37780.service: Deactivated successfully. Jan 13 20:11:27.141671 systemd[1]: session-30.scope: Deactivated successfully. Jan 13 20:11:27.144692 systemd-logind[1913]: Session 30 logged out. Waiting for processes to exit. Jan 13 20:11:27.148906 systemd-logind[1913]: Removed session 30. Jan 13 20:11:41.887277 systemd[1]: cri-containerd-cee14136e10d412d47481aae739939e1a3858227a750478035f9736247c48b48.scope: Deactivated successfully. Jan 13 20:11:41.889870 systemd[1]: cri-containerd-cee14136e10d412d47481aae739939e1a3858227a750478035f9736247c48b48.scope: Consumed 5.073s CPU time, 22.2M memory peak, 0B memory swap peak. Jan 13 20:11:41.931523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cee14136e10d412d47481aae739939e1a3858227a750478035f9736247c48b48-rootfs.mount: Deactivated successfully. Jan 13 20:11:41.939988 containerd[1931]: time="2025-01-13T20:11:41.939770683Z" level=info msg="shim disconnected" id=cee14136e10d412d47481aae739939e1a3858227a750478035f9736247c48b48 namespace=k8s.io Jan 13 20:11:41.940643 containerd[1931]: time="2025-01-13T20:11:41.939975583Z" level=warning msg="cleaning up after shim disconnected" id=cee14136e10d412d47481aae739939e1a3858227a750478035f9736247c48b48 namespace=k8s.io Jan 13 20:11:41.940643 containerd[1931]: time="2025-01-13T20:11:41.940011331Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:42.558520 kubelet[3244]: I0113 20:11:42.558410 3244 scope.go:117] "RemoveContainer" containerID="cee14136e10d412d47481aae739939e1a3858227a750478035f9736247c48b48" Jan 13 20:11:42.562832 containerd[1931]: time="2025-01-13T20:11:42.562772706Z" level=info msg="CreateContainer within sandbox \"04ed7e709749ae18d00545f76d19fccfb067befb039cedceb4c6a5563b6a1893\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 13 20:11:42.592715 containerd[1931]: time="2025-01-13T20:11:42.592020114Z" level=info msg="CreateContainer within sandbox \"04ed7e709749ae18d00545f76d19fccfb067befb039cedceb4c6a5563b6a1893\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ceaa62e7c38124f7ebb301afd8c35b2267e9ef06be57b5846a9830da41802de2\"" Jan 13 20:11:42.594305 containerd[1931]: time="2025-01-13T20:11:42.593382714Z" level=info msg="StartContainer for \"ceaa62e7c38124f7ebb301afd8c35b2267e9ef06be57b5846a9830da41802de2\"" Jan 13 20:11:42.652670 systemd[1]: Started cri-containerd-ceaa62e7c38124f7ebb301afd8c35b2267e9ef06be57b5846a9830da41802de2.scope - libcontainer container ceaa62e7c38124f7ebb301afd8c35b2267e9ef06be57b5846a9830da41802de2. Jan 13 20:11:42.728372 containerd[1931]: time="2025-01-13T20:11:42.728245987Z" level=info msg="StartContainer for \"ceaa62e7c38124f7ebb301afd8c35b2267e9ef06be57b5846a9830da41802de2\" returns successfully" Jan 13 20:11:45.824174 systemd[1]: cri-containerd-6b419348d59b688693ca3691b7a8ec60837015700a4d21316d21c476d51ad3d7.scope: Deactivated successfully. 
Jan 13 20:11:45.824677 systemd[1]: cri-containerd-6b419348d59b688693ca3691b7a8ec60837015700a4d21316d21c476d51ad3d7.scope: Consumed 4.134s CPU time, 17.1M memory peak, 0B memory swap peak. Jan 13 20:11:45.864736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b419348d59b688693ca3691b7a8ec60837015700a4d21316d21c476d51ad3d7-rootfs.mount: Deactivated successfully. Jan 13 20:11:45.879162 containerd[1931]: time="2025-01-13T20:11:45.878899631Z" level=info msg="shim disconnected" id=6b419348d59b688693ca3691b7a8ec60837015700a4d21316d21c476d51ad3d7 namespace=k8s.io Jan 13 20:11:45.879162 containerd[1931]: time="2025-01-13T20:11:45.878976275Z" level=warning msg="cleaning up after shim disconnected" id=6b419348d59b688693ca3691b7a8ec60837015700a4d21316d21c476d51ad3d7 namespace=k8s.io Jan 13 20:11:45.879162 containerd[1931]: time="2025-01-13T20:11:45.878997839Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:11:46.573950 kubelet[3244]: I0113 20:11:46.573496 3244 scope.go:117] "RemoveContainer" containerID="6b419348d59b688693ca3691b7a8ec60837015700a4d21316d21c476d51ad3d7" Jan 13 20:11:46.577908 containerd[1931]: time="2025-01-13T20:11:46.577706134Z" level=info msg="CreateContainer within sandbox \"c2ac76665d25e0754ecf8cab00dd05c5a0b25a3d4c0ed22f4606759ff8f260c8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 13 20:11:46.608955 containerd[1931]: time="2025-01-13T20:11:46.608890654Z" level=info msg="CreateContainer within sandbox \"c2ac76665d25e0754ecf8cab00dd05c5a0b25a3d4c0ed22f4606759ff8f260c8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"60270442491d2fee338f3c9bd8fdeabe380e23d5b484625c229e35b72d33238c\"" Jan 13 20:11:46.609642 containerd[1931]: time="2025-01-13T20:11:46.609596086Z" level=info msg="StartContainer for \"60270442491d2fee338f3c9bd8fdeabe380e23d5b484625c229e35b72d33238c\"" Jan 13 20:11:46.663693 systemd[1]: Started cri-containerd-60270442491d2fee338f3c9bd8fdeabe380e23d5b484625c229e35b72d33238c.scope - libcontainer container 60270442491d2fee338f3c9bd8fdeabe380e23d5b484625c229e35b72d33238c. Jan 13 20:11:46.728729 containerd[1931]: time="2025-01-13T20:11:46.728654435Z" level=info msg="StartContainer for \"60270442491d2fee338f3c9bd8fdeabe380e23d5b484625c229e35b72d33238c\" returns successfully" Jan 13 20:11:49.292810 kubelet[3244]: E0113 20:11:49.292116 3244 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-29?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 13 20:11:59.293871 kubelet[3244]: E0113 20:11:59.293336 3244 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-29?timeout=10s\": context deadline exceeded"