Feb 13 19:01:29.177956 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Feb 13 19:01:29.178004 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025 Feb 13 19:01:29.178029 kernel: KASLR disabled due to lack of seed Feb 13 19:01:29.178045 kernel: efi: EFI v2.7 by EDK II Feb 13 19:01:29.178061 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98 Feb 13 19:01:29.178076 kernel: secureboot: Secure boot disabled Feb 13 19:01:29.178094 kernel: ACPI: Early table checksum verification disabled Feb 13 19:01:29.178109 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Feb 13 19:01:29.178125 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Feb 13 19:01:29.180206 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 13 19:01:29.180252 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Feb 13 19:01:29.180269 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 13 19:01:29.180285 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Feb 13 19:01:29.180300 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Feb 13 19:01:29.180318 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Feb 13 19:01:29.180339 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 13 19:01:29.180356 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Feb 13 19:01:29.180372 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Feb 13 19:01:29.180388 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Feb 13 19:01:29.180404 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Feb 13 19:01:29.180420 kernel: printk: bootconsole [uart0] enabled Feb 13 19:01:29.180436 kernel: NUMA: Failed to initialise from firmware Feb 13 19:01:29.180452 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Feb 13 19:01:29.180469 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Feb 13 19:01:29.180485 kernel: Zone ranges: Feb 13 19:01:29.180501 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 13 19:01:29.180521 kernel: DMA32 empty Feb 13 19:01:29.180537 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Feb 13 19:01:29.180553 kernel: Movable zone start for each node Feb 13 19:01:29.180570 kernel: Early memory node ranges Feb 13 19:01:29.180586 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Feb 13 19:01:29.180602 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Feb 13 19:01:29.180618 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Feb 13 19:01:29.180634 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Feb 13 19:01:29.180650 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Feb 13 19:01:29.180666 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Feb 13 19:01:29.180682 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Feb 13 19:01:29.180698 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Feb 13 19:01:29.180718 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Feb 13 19:01:29.180735 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Feb 13 19:01:29.180757 kernel: psci: probing for conduit method from ACPI. Feb 13 19:01:29.180775 kernel: psci: PSCIv1.0 detected in firmware. Feb 13 19:01:29.180792 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 19:01:29.180813 kernel: psci: Trusted OS migration not required Feb 13 19:01:29.180830 kernel: psci: SMC Calling Convention v1.1 Feb 13 19:01:29.180847 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 19:01:29.180864 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 19:01:29.180882 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 19:01:29.180899 kernel: Detected PIPT I-cache on CPU0 Feb 13 19:01:29.180916 kernel: CPU features: detected: GIC system register CPU interface Feb 13 19:01:29.180933 kernel: CPU features: detected: Spectre-v2 Feb 13 19:01:29.180949 kernel: CPU features: detected: Spectre-v3a Feb 13 19:01:29.180966 kernel: CPU features: detected: Spectre-BHB Feb 13 19:01:29.180983 kernel: CPU features: detected: ARM erratum 1742098 Feb 13 19:01:29.181000 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Feb 13 19:01:29.181022 kernel: alternatives: applying boot alternatives Feb 13 19:01:29.181041 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a Feb 13 19:01:29.181059 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:01:29.181077 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:01:29.181094 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:01:29.181110 kernel: Fallback order for Node 0: 0 Feb 13 19:01:29.181127 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Feb 13 19:01:29.182362 kernel: Policy zone: Normal Feb 13 19:01:29.182393 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:01:29.182411 kernel: software IO TLB: area num 2. Feb 13 19:01:29.182439 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Feb 13 19:01:29.182458 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved) Feb 13 19:01:29.182476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 19:01:29.182493 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:01:29.182511 kernel: rcu: RCU event tracing is enabled. Feb 13 19:01:29.182529 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 19:01:29.182546 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:01:29.182564 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:01:29.182601 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 19:01:29.182622 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 19:01:29.182639 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 19:01:29.182662 kernel: GICv3: 96 SPIs implemented Feb 13 19:01:29.182681 kernel: GICv3: 0 Extended SPIs implemented Feb 13 19:01:29.182698 kernel: Root IRQ handler: gic_handle_irq Feb 13 19:01:29.182714 kernel: GICv3: GICv3 features: 16 PPIs Feb 13 19:01:29.182732 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Feb 13 19:01:29.182748 kernel: ITS [mem 0x10080000-0x1009ffff] Feb 13 19:01:29.182766 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 19:01:29.182784 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Feb 13 19:01:29.182801 kernel: GICv3: using LPI property table @0x00000004000d0000 Feb 13 19:01:29.182818 kernel: ITS: Using hypervisor restricted LPI range [128] Feb 13 19:01:29.182835 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Feb 13 19:01:29.182852 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:01:29.182874 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Feb 13 19:01:29.182892 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Feb 13 19:01:29.182910 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Feb 13 19:01:29.182927 kernel: Console: colour dummy device 80x25 Feb 13 19:01:29.182947 kernel: printk: console [tty1] enabled Feb 13 19:01:29.182965 kernel: ACPI: Core revision 20230628 Feb 13 19:01:29.182982 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Feb 13 19:01:29.183000 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:01:29.183017 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:01:29.183035 kernel: landlock: Up and running. Feb 13 19:01:29.183057 kernel: SELinux: Initializing. Feb 13 19:01:29.183075 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:01:29.183092 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:01:29.183110 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:01:29.183128 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:01:29.184209 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:01:29.184236 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:01:29.184254 kernel: Platform MSI: ITS@0x10080000 domain created Feb 13 19:01:29.184280 kernel: PCI/MSI: ITS@0x10080000 domain created Feb 13 19:01:29.184298 kernel: Remapping and enabling EFI services. Feb 13 19:01:29.184316 kernel: smp: Bringing up secondary CPUs ... Feb 13 19:01:29.184333 kernel: Detected PIPT I-cache on CPU1 Feb 13 19:01:29.184351 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Feb 13 19:01:29.184368 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Feb 13 19:01:29.184386 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Feb 13 19:01:29.184404 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 19:01:29.184421 kernel: SMP: Total of 2 processors activated. 
Feb 13 19:01:29.184438 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 19:01:29.184461 kernel: CPU features: detected: 32-bit EL1 Support Feb 13 19:01:29.184478 kernel: CPU features: detected: CRC32 instructions Feb 13 19:01:29.184507 kernel: CPU: All CPU(s) started at EL1 Feb 13 19:01:29.184529 kernel: alternatives: applying system-wide alternatives Feb 13 19:01:29.184547 kernel: devtmpfs: initialized Feb 13 19:01:29.184565 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:01:29.184583 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 19:01:29.184601 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:01:29.184620 kernel: SMBIOS 3.0.0 present. Feb 13 19:01:29.184642 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Feb 13 19:01:29.184660 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:01:29.184678 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 19:01:29.184696 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 19:01:29.184714 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 19:01:29.184732 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:01:29.184750 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1 Feb 13 19:01:29.184773 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:01:29.184792 kernel: cpuidle: using governor menu Feb 13 19:01:29.184810 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 19:01:29.184828 kernel: ASID allocator initialised with 65536 entries Feb 13 19:01:29.184846 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:01:29.184864 kernel: Serial: AMBA PL011 UART driver Feb 13 19:01:29.184882 kernel: Modules: 17440 pages in range for non-PLT usage Feb 13 19:01:29.184900 kernel: Modules: 508960 pages in range for PLT usage Feb 13 19:01:29.184919 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:01:29.184942 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:01:29.184960 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 19:01:29.184978 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 19:01:29.184996 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:01:29.185014 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:01:29.185032 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 19:01:29.185050 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 19:01:29.185068 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:01:29.185086 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:01:29.185109 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:01:29.185127 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:01:29.185167 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:01:29.185188 kernel: ACPI: Interpreter enabled Feb 13 19:01:29.185206 kernel: ACPI: Using GIC for interrupt routing Feb 13 19:01:29.185224 kernel: ACPI: MCFG table detected, 1 entries Feb 13 19:01:29.185242 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Feb 13 19:01:29.185555 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:01:29.185766 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 19:01:29.185963 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 19:01:29.188289 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Feb 13 19:01:29.188565 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Feb 13 19:01:29.188591 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Feb 13 19:01:29.188610 kernel: acpiphp: Slot [1] registered Feb 13 19:01:29.188630 kernel: acpiphp: Slot [2] registered Feb 13 19:01:29.188648 kernel: acpiphp: Slot [3] registered Feb 13 19:01:29.188677 kernel: acpiphp: Slot [4] registered Feb 13 19:01:29.188696 kernel: acpiphp: Slot [5] registered Feb 13 19:01:29.188713 kernel: acpiphp: Slot [6] registered Feb 13 19:01:29.188731 kernel: acpiphp: Slot [7] registered Feb 13 19:01:29.188749 kernel: acpiphp: Slot [8] registered Feb 13 19:01:29.188767 kernel: acpiphp: Slot [9] registered Feb 13 19:01:29.188785 kernel: acpiphp: Slot [10] registered Feb 13 19:01:29.188803 kernel: acpiphp: Slot [11] registered Feb 13 19:01:29.188821 kernel: acpiphp: Slot [12] registered Feb 13 19:01:29.188839 kernel: acpiphp: Slot [13] registered Feb 13 19:01:29.188862 kernel: acpiphp: Slot [14] registered Feb 13 19:01:29.188880 kernel: acpiphp: Slot [15] registered Feb 13 19:01:29.188898 kernel: acpiphp: Slot [16] registered Feb 13 19:01:29.188915 kernel: acpiphp: Slot [17] registered Feb 13 19:01:29.188933 kernel: acpiphp: Slot [18] registered Feb 13 19:01:29.188951 kernel: acpiphp: Slot [19] registered Feb 13 19:01:29.188969 kernel: acpiphp: Slot [20] registered Feb 13 19:01:29.188987 kernel: acpiphp: Slot [21] registered Feb 13 19:01:29.189005 kernel: acpiphp: Slot [22] registered Feb 13 19:01:29.189027 kernel: acpiphp: Slot [23] registered Feb 13 19:01:29.189046 kernel: acpiphp: Slot [24] registered Feb 13 19:01:29.189065 kernel: acpiphp: Slot [25] registered Feb 13 19:01:29.189083 kernel: acpiphp: Slot [26] registered Feb 13 19:01:29.189102 kernel: acpiphp: Slot [27] registered Feb 13 19:01:29.189121 kernel: acpiphp: Slot [28] registered Feb 13 19:01:29.192470 kernel: acpiphp: Slot [29] registered Feb 13 19:01:29.192538 kernel: acpiphp: Slot [30] registered Feb 13 19:01:29.192557 kernel: acpiphp: Slot [31] registered Feb 13 19:01:29.192576 kernel: PCI host bridge to bus 0000:00 Feb 13 19:01:29.192859 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Feb 13 19:01:29.193040 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 19:01:29.194341 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Feb 13 19:01:29.194560 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Feb 13 19:01:29.194850 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Feb 13 19:01:29.195103 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Feb 13 19:01:29.195409 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Feb 13 19:01:29.195648 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 13 19:01:29.195859 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Feb 13 19:01:29.196068 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 19:01:29.198427 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 13 19:01:29.198678 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Feb 13 19:01:29.198888 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Feb 13 19:01:29.199101 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Feb 13 19:01:29.201598 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 19:01:29.201816 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Feb 13 19:01:29.202019 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Feb 13 19:01:29.202314 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Feb 13 19:01:29.202522 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Feb 13 19:01:29.202809 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Feb 13 19:01:29.203020 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Feb 13 19:01:29.204295 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 19:01:29.204489 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Feb 13 19:01:29.204515 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 19:01:29.204534 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 19:01:29.204553 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 19:01:29.204571 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 19:01:29.204590 kernel: iommu: Default domain type: Translated Feb 13 19:01:29.204619 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 19:01:29.204637 kernel: efivars: Registered efivars operations Feb 13 19:01:29.204656 kernel: vgaarb: loaded Feb 13 19:01:29.204674 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 19:01:29.204692 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:01:29.204710 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:01:29.204728 kernel: pnp: PnP ACPI init Feb 13 19:01:29.204934 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Feb 13 19:01:29.204966 kernel: pnp: PnP ACPI: found 1 devices Feb 13 19:01:29.204985 kernel: NET: Registered PF_INET protocol family Feb 13 19:01:29.205004 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:01:29.205022 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:01:29.205041 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:01:29.205059 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:01:29.205078 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:01:29.205096 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:01:29.205114 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:01:29.206206 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:01:29.206236 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:01:29.206255 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:01:29.206274 kernel: kvm [1]: HYP mode not available Feb 13 19:01:29.206292 kernel: Initialise system trusted keyrings Feb 13 19:01:29.206311 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:01:29.206330 kernel: Key type asymmetric registered Feb 13 19:01:29.206348 kernel: Asymmetric key parser 'x509' registered Feb 13 19:01:29.206366 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 19:01:29.206392 kernel: io scheduler mq-deadline registered Feb 13 
19:01:29.206411 kernel: io scheduler kyber registered Feb 13 19:01:29.206429 kernel: io scheduler bfq registered Feb 13 19:01:29.206688 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Feb 13 19:01:29.206721 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 19:01:29.206740 kernel: ACPI: button: Power Button [PWRB] Feb 13 19:01:29.206760 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Feb 13 19:01:29.206779 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 19:01:29.206806 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:01:29.206826 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 13 19:01:29.207040 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Feb 13 19:01:29.207068 kernel: printk: console [ttyS0] disabled Feb 13 19:01:29.207087 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Feb 13 19:01:29.207107 kernel: printk: console [ttyS0] enabled Feb 13 19:01:29.207126 kernel: printk: bootconsole [uart0] disabled Feb 13 19:01:29.207544 kernel: thunder_xcv, ver 1.0 Feb 13 19:01:29.207573 kernel: thunder_bgx, ver 1.0 Feb 13 19:01:29.207591 kernel: nicpf, ver 1.0 Feb 13 19:01:29.207618 kernel: nicvf, ver 1.0 Feb 13 19:01:29.207869 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 19:01:29.208069 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:01:28 UTC (1739473288) Feb 13 19:01:29.208095 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:01:29.208114 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Feb 13 19:01:29.208132 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 19:01:29.208986 kernel: watchdog: Hard watchdog permanently disabled Feb 13 19:01:29.209014 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:01:29.209033 kernel: Segment Routing with IPv6 Feb 13 19:01:29.209052 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:01:29.209070 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:01:29.209089 kernel: Key type dns_resolver registered Feb 13 19:01:29.209107 kernel: registered taskstats version 1 Feb 13 19:01:29.209125 kernel: Loading compiled-in X.509 certificates Feb 13 19:01:29.209670 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936' Feb 13 19:01:29.209701 kernel: Key type .fscrypt registered Feb 13 19:01:29.209722 kernel: Key type fscrypt-provisioning registered Feb 13 19:01:29.209750 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 19:01:29.209769 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:01:29.209787 kernel: ima: No architecture policies found Feb 13 19:01:29.209805 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 19:01:29.209824 kernel: clk: Disabling unused clocks Feb 13 19:01:29.209842 kernel: Freeing unused kernel memory: 39680K Feb 13 19:01:29.209861 kernel: Run /init as init process Feb 13 19:01:29.209879 kernel: with arguments: Feb 13 19:01:29.209897 kernel: /init Feb 13 19:01:29.209919 kernel: with environment: Feb 13 19:01:29.209937 kernel: HOME=/ Feb 13 19:01:29.209956 kernel: TERM=linux Feb 13 19:01:29.209974 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:01:29.209997 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:01:29.210021 systemd[1]: Detected virtualization amazon. Feb 13 19:01:29.210042 systemd[1]: Detected architecture arm64. Feb 13 19:01:29.210066 systemd[1]: Running in initrd. Feb 13 19:01:29.210086 systemd[1]: No hostname configured, using default hostname. Feb 13 19:01:29.210106 systemd[1]: Hostname set to . Feb 13 19:01:29.210127 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:01:29.210186 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:01:29.210211 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:01:29.210232 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:01:29.210254 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:01:29.210281 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:01:29.210303 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:01:29.210324 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:01:29.210348 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:01:29.210369 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:01:29.210389 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:01:29.210410 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:01:29.210434 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:01:29.210455 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:01:29.210475 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:01:29.210495 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:01:29.210516 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:01:29.210537 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:01:29.210557 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:01:29.210595 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:01:29.210620 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 19:01:29.210647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:01:29.210667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:01:29.210687 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:01:29.210707 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:01:29.210727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:01:29.210747 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:01:29.210768 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:01:29.210787 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:01:29.210812 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:01:29.210833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:01:29.210855 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:01:29.210875 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:01:29.210949 systemd-journald[252]: Collecting audit messages is disabled. Feb 13 19:01:29.211002 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:01:29.211024 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:01:29.211045 systemd-journald[252]: Journal started Feb 13 19:01:29.211088 systemd-journald[252]: Runtime Journal (/run/log/journal/ec23df47853a3b067fd6f426ae7ffe05) is 8.0M, max 75.3M, 67.3M free. Feb 13 19:01:29.212933 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:01:29.197029 systemd-modules-load[253]: Inserted module 'overlay' Feb 13 19:01:29.222299 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:01:29.234194 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:01:29.235616 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:01:29.239988 systemd-modules-load[253]: Inserted module 'br_netfilter' Feb 13 19:01:29.242295 kernel: Bridge firewalling registered Feb 13 19:01:29.252407 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:01:29.254732 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:01:29.259046 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:01:29.266441 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:01:29.268779 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:01:29.297285 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:01:29.309180 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:01:29.314690 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:01:29.329437 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:01:29.335197 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:01:29.348883 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Feb 13 19:01:29.391806 dracut-cmdline[289]: dracut-dracut-053 Feb 13 19:01:29.398409 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a Feb 13 19:01:29.408727 systemd-resolved[285]: Positive Trust Anchors: Feb 13 19:01:29.408769 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:01:29.408828 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:01:29.571175 kernel: SCSI subsystem initialized Feb 13 19:01:29.579189 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:01:29.591248 kernel: iscsi: registered transport (tcp) Feb 13 19:01:29.613829 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:01:29.613901 kernel: QLogic iSCSI HBA Driver Feb 13 19:01:29.670178 kernel: random: crng init done Feb 13 19:01:29.670451 systemd-resolved[285]: Defaulting to hostname 'linux'. Feb 13 19:01:29.673662 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:01:29.677809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:01:29.702426 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:01:29.713496 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:01:29.748366 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:01:29.748441 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:01:29.750090 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:01:29.817191 kernel: raid6: neonx8 gen() 6707 MB/s Feb 13 19:01:29.834176 kernel: raid6: neonx4 gen() 6481 MB/s Feb 13 19:01:29.851178 kernel: raid6: neonx2 gen() 5419 MB/s Feb 13 19:01:29.868175 kernel: raid6: neonx1 gen() 3960 MB/s Feb 13 19:01:29.885176 kernel: raid6: int64x8 gen() 3824 MB/s Feb 13 19:01:29.902175 kernel: raid6: int64x4 gen() 3710 MB/s Feb 13 19:01:29.919177 kernel: raid6: int64x2 gen() 3609 MB/s Feb 13 19:01:29.936923 kernel: raid6: int64x1 gen() 2764 MB/s Feb 13 19:01:29.936960 kernel: raid6: using algorithm neonx8 gen() 6707 MB/s Feb 13 19:01:29.954925 kernel: raid6: .... 
xor() 4849 MB/s, rmw enabled Feb 13 19:01:29.954963 kernel: raid6: using neon recovery algorithm Feb 13 19:01:29.963363 kernel: xor: measuring software checksum speed Feb 13 19:01:29.963425 kernel: 8regs : 10971 MB/sec Feb 13 19:01:29.964466 kernel: 32regs : 11946 MB/sec Feb 13 19:01:29.965624 kernel: arm64_neon : 9584 MB/sec Feb 13 19:01:29.965656 kernel: xor: using function: 32regs (11946 MB/sec) Feb 13 19:01:30.050185 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:01:30.069155 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:01:30.084390 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:01:30.113961 systemd-udevd[471]: Using default interface naming scheme 'v255'. Feb 13 19:01:30.123252 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:01:30.138388 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:01:30.167770 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Feb 13 19:01:30.224643 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:01:30.234468 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:01:30.353021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:01:30.376294 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:01:30.425785 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:01:30.431027 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:01:30.446797 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:01:30.465409 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:01:30.488534 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:01:30.531594 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:01:30.555463 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:01:30.555550 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 13 19:01:30.580640 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 19:01:30.580893 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 19:01:30.581625 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ec:61:c1:7a:75 Feb 13 19:01:30.583386 (udev-worker)[535]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:01:30.605709 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:01:30.605932 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:01:30.611506 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:01:30.616239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:01:30.616521 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:01:30.625089 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:01:30.646671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 19:01:30.653341 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 19:01:30.653380 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 19:01:30.663166 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 19:01:30.674929 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:01:30.674996 kernel: GPT:9289727 != 16777215 Feb 13 19:01:30.675021 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:01:30.675545 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:01:30.687962 kernel: GPT:9289727 != 16777215 Feb 13 19:01:30.687997 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:01:30.688022 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:01:30.691954 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:01:30.727986 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:01:30.791171 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (534) Feb 13 19:01:30.807180 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (522) Feb 13 19:01:30.821867 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 19:01:30.893240 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 19:01:30.931906 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:01:30.957491 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 19:01:30.963063 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 19:01:30.974493 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:01:30.986865 disk-uuid[664]: Primary Header is updated. Feb 13 19:01:30.986865 disk-uuid[664]: Secondary Entries is updated. Feb 13 19:01:30.986865 disk-uuid[664]: Secondary Header is updated. Feb 13 19:01:30.998191 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:01:32.013196 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:01:32.014507 disk-uuid[665]: The operation has completed successfully. Feb 13 19:01:32.206317 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:01:32.206550 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:01:32.259400 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:01:32.268618 sh[925]: Success Feb 13 19:01:32.294253 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:01:32.415534 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:01:32.422266 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:01:32.425305 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 19:01:32.477493 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44 Feb 13 19:01:32.477570 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:01:32.477597 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:01:32.479194 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:01:32.480410 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:01:32.510191 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:01:32.523425 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:01:32.527685 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:01:32.539478 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:01:32.549543 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:01:32.589115 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:01:32.589204 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:01:32.590354 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:01:32.597514 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:01:32.619291 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:01:32.622083 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:01:32.630130 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:01:32.641498 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:01:32.748696 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:01:32.763531 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:01:32.828263 systemd-networkd[1117]: lo: Link UP Feb 13 19:01:32.828284 systemd-networkd[1117]: lo: Gained carrier Feb 13 19:01:32.832579 systemd-networkd[1117]: Enumeration completed Feb 13 19:01:32.834311 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:01:32.840786 ignition[1044]: Ignition 2.20.0 Feb 13 19:01:32.834558 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:01:32.840802 ignition[1044]: Stage: fetch-offline Feb 13 19:01:32.834578 systemd-networkd[1117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:01:32.841254 ignition[1044]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:01:32.842584 systemd[1]: Reached target network.target - Network. Feb 13 19:01:32.841278 ignition[1044]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:01:32.848230 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:01:32.841869 ignition[1044]: Ignition finished successfully Feb 13 19:01:32.850285 systemd-networkd[1117]: eth0: Link UP Feb 13 19:01:32.850293 systemd-networkd[1117]: eth0: Gained carrier Feb 13 19:01:32.850311 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 19:01:32.884375 systemd-networkd[1117]: eth0: DHCPv4 address 172.31.26.138/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:01:32.895533 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 19:01:32.921600 ignition[1124]: Ignition 2.20.0 Feb 13 19:01:32.921622 ignition[1124]: Stage: fetch Feb 13 19:01:32.925075 ignition[1124]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:01:32.926901 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:01:32.927919 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:01:32.956066 ignition[1124]: PUT result: OK Feb 13 19:01:32.959406 ignition[1124]: parsed url from cmdline: "" Feb 13 19:01:32.959423 ignition[1124]: no config URL provided Feb 13 19:01:32.959438 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:01:32.959463 ignition[1124]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:01:32.959496 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:01:32.963326 ignition[1124]: PUT result: OK Feb 13 19:01:32.963405 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 19:01:32.967539 ignition[1124]: GET result: OK Feb 13 19:01:32.968717 ignition[1124]: parsing config with SHA512: 8d833bf06af2fa8f58e2cc60f2fc5f39d639e1c4603a3c6bc52f7f686e09d0936a2ca50175ae31cb514704dcdcde7559cfb8c9e241a160c59a7f1def1edba0cd Feb 13 19:01:32.980791 unknown[1124]: fetched base config from "system" Feb 13 19:01:32.981577 ignition[1124]: fetch: fetch complete Feb 13 19:01:32.980813 unknown[1124]: fetched base config from "system" Feb 13 19:01:32.981589 ignition[1124]: fetch: fetch passed Feb 13 19:01:32.980826 unknown[1124]: fetched user config from "aws" Feb 13 19:01:32.981669 ignition[1124]: Ignition finished successfully Feb 13 19:01:32.987595 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:01:33.011383 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:01:33.040328 ignition[1132]: Ignition 2.20.0 Feb 13 19:01:33.040357 ignition[1132]: Stage: kargs Feb 13 19:01:33.041425 ignition[1132]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:01:33.041454 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:01:33.041627 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:01:33.045109 ignition[1132]: PUT result: OK Feb 13 19:01:33.053928 ignition[1132]: kargs: kargs passed Feb 13 19:01:33.054254 ignition[1132]: Ignition finished successfully Feb 13 19:01:33.059320 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:01:33.069515 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:01:33.100102 ignition[1138]: Ignition 2.20.0 Feb 13 19:01:33.100131 ignition[1138]: Stage: disks Feb 13 19:01:33.100937 ignition[1138]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:01:33.100964 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:01:33.101113 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:01:33.102414 ignition[1138]: PUT result: OK Feb 13 19:01:33.114047 ignition[1138]: disks: disks passed Feb 13 19:01:33.114205 ignition[1138]: Ignition finished successfully Feb 13 19:01:33.119229 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:01:33.123629 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Feb 13 19:01:33.126045 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:01:33.128670 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:01:33.130636 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:01:33.132753 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:01:33.158723 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:01:33.207087 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:01:33.214669 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:01:33.225351 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:01:33.323188 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none. Feb 13 19:01:33.325526 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:01:33.328898 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:01:33.346325 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:01:33.357279 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:01:33.360960 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:01:33.361078 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:01:33.361792 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:01:33.388763 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:01:33.396398 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1165) Feb 13 19:01:33.396452 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:01:33.396479 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:01:33.396506 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:01:33.404544 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:01:33.414167 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:01:33.416611 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:01:33.526845 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:01:33.535453 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:01:33.544633 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:01:33.552409 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:01:33.714373 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:01:33.730374 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:01:33.734486 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:01:33.754239 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:01:33.755496 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:01:33.801129 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 19:01:33.809491 ignition[1278]: INFO : Ignition 2.20.0 Feb 13 19:01:33.809491 ignition[1278]: INFO : Stage: mount Feb 13 19:01:33.812863 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:01:33.812863 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:01:33.817374 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:01:33.820347 ignition[1278]: INFO : PUT result: OK Feb 13 19:01:33.825824 ignition[1278]: INFO : mount: mount passed Feb 13 19:01:33.825824 ignition[1278]: INFO : Ignition finished successfully Feb 13 19:01:33.831282 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:01:33.840347 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:01:33.867120 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:01:33.889189 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1289) Feb 13 19:01:33.894261 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f Feb 13 19:01:33.894335 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:01:33.894362 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:01:33.901203 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:01:33.904110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:01:33.940162 ignition[1306]: INFO : Ignition 2.20.0 Feb 13 19:01:33.940162 ignition[1306]: INFO : Stage: files Feb 13 19:01:33.943903 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:01:33.943903 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:01:33.943903 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:01:33.950815 ignition[1306]: INFO : PUT result: OK Feb 13 19:01:33.956911 ignition[1306]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:01:33.960701 ignition[1306]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:01:33.960701 ignition[1306]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:01:33.971117 ignition[1306]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:01:33.974073 ignition[1306]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:01:33.976705 unknown[1306]: wrote ssh authorized keys file for user: core Feb 13 19:01:33.978894 ignition[1306]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:01:33.983591 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 19:01:33.983591 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Feb 13 19:01:34.067648 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:01:34.074255 systemd-networkd[1117]: eth0: Gained IPv6LL Feb 13 19:01:34.426232 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Feb 13 19:01:34.426232 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 
19:01:34.433544 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 13 19:01:34.763540 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 19:01:34.903315 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:01:34.907208 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Feb 13 19:01:35.297049 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:01:35.637701 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Feb 13 19:01:35.637701 ignition[1306]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 19:01:35.644604 ignition[1306]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:01:35.644604 ignition[1306]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:01:35.644604 ignition[1306]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 19:01:35.644604 ignition[1306]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:01:35.644604 ignition[1306]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:01:35.644604 ignition[1306]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:01:35.644604 ignition[1306]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:01:35.644604 ignition[1306]: INFO : files: files passed Feb 13 19:01:35.644604 ignition[1306]: INFO : Ignition finished successfully Feb 13 19:01:35.670733 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:01:35.679557 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:01:35.690834 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:01:35.710741 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:01:35.710941 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:01:35.737318 initrd-setup-root-after-ignition[1335]: grep: Feb 13 19:01:35.739717 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:01:35.743802 initrd-setup-root-after-ignition[1335]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:01:35.743802 initrd-setup-root-after-ignition[1335]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:01:35.749313 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:01:35.757263 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:01:35.764469 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:01:35.825197 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:01:35.827227 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:01:35.835227 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:01:35.837687 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:01:35.841450 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:01:35.860597 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:01:35.895190 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:01:35.911200 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:01:35.936672 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:01:35.939494 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:01:35.942639 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:01:35.950260 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:01:35.950748 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:01:35.958053 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Feb 13 19:01:35.960321 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:01:35.962790 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:01:35.970164 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:01:35.973844 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:01:35.980708 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:01:35.982994 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:01:35.986049 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:01:35.995263 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:01:35.997473 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:01:36.000046 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:01:36.000429 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:01:36.010398 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:01:36.013193 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:01:36.020156 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:01:36.022233 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:01:36.025840 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:01:36.026091 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:01:36.036032 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:01:36.036608 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:01:36.043977 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:01:36.044891 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:01:36.057607 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:01:36.065990 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:01:36.068102 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:01:36.068463 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:01:36.069890 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:01:36.079048 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:01:36.097762 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:01:36.100919 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:01:36.121200 ignition[1359]: INFO : Ignition 2.20.0 Feb 13 19:01:36.121200 ignition[1359]: INFO : Stage: umount Feb 13 19:01:36.127514 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:01:36.127514 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:01:36.132202 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:01:36.135051 ignition[1359]: INFO : PUT result: OK Feb 13 19:01:36.141083 ignition[1359]: INFO : umount: umount passed Feb 13 19:01:36.143096 ignition[1359]: INFO : Ignition finished successfully Feb 13 19:01:36.147795 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:01:36.149272 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
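A note on the Ignition files stage logged above: the journal records the config's effects (artifacts fetched over HTTPS, local files and a symlink written under /sysroot, and prepare-helm.service preset to enabled) but never the config itself. Purely as a hypothetical sketch, a spec-3.x config producing writes like those would look roughly as follows, built here as a Python dict; the URLs, paths, and unit name are copied from the log, while the SSH key and unit body are placeholders for values the journal does not reveal.

```python
# Hypothetical reconstruction of an Ignition (spec 3.x) config shaped like the
# one applied above. Illustrative only; the real config is not in the journal.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        # The log shows SSH keys being added for "core"; the key is elided.
        "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 ..."]}],
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw",
                "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw"},
            },
        ],
        "links": [
            # Matches op(a): /etc/extensions/kubernetes.raw -> the sysext image.
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw",
            },
        ],
    },
    "systemd": {
        # Unit contents are not visible in the journal, hence the placeholder.
        "units": [{"name": "prepare-helm.service", "enabled": True, "contents": "..."}],
    },
}

print(json.dumps(config, indent=2))
```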
Feb 13 19:01:36.155344 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:01:36.155478 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:01:36.159239 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:01:36.159383 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:01:36.162415 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:01:36.164266 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:01:36.172683 systemd[1]: Stopped target network.target - Network. Feb 13 19:01:36.174575 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:01:36.174710 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:01:36.178460 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:01:36.187317 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:01:36.188945 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:01:36.193278 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:01:36.199252 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:01:36.201219 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:01:36.201318 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:01:36.203369 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:01:36.203460 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:01:36.205786 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:01:36.205896 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:01:36.208007 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:01:36.208122 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:01:36.209725 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:01:36.210540 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:01:36.221009 systemd-networkd[1117]: eth0: DHCPv6 lease lost Feb 13 19:01:36.230255 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:01:36.235307 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:01:36.237936 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:01:36.240803 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:01:36.240997 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:01:36.248406 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:01:36.248684 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:01:36.264813 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:01:36.264903 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:01:36.275433 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:01:36.275564 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:01:36.292372 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:01:36.294628 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:01:36.294775 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:01:36.297438 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 13 19:01:36.297545 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:01:36.301549 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:01:36.301662 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:01:36.310440 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:01:36.310574 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:01:36.314913 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:01:36.349968 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:01:36.350204 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:01:36.355007 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:01:36.355836 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:01:36.364910 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:01:36.365421 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:01:36.372523 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:01:36.372615 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:01:36.374680 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:01:36.374785 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:01:36.380074 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:01:36.380214 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:01:36.384197 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:01:36.384293 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:01:36.407442 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:01:36.422757 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:01:36.422900 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:01:36.425919 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:01:36.426045 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:01:36.440694 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:01:36.440818 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:01:36.443961 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:01:36.444077 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:01:36.448970 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:01:36.449395 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:01:36.460047 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:01:36.481613 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:01:36.501432 systemd[1]: Switching root. Feb 13 19:01:36.538908 systemd-journald[252]: Journal stopped Feb 13 19:01:38.435109 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). 
Feb 13 19:01:38.435320 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:01:38.435373 kernel: SELinux: policy capability open_perms=1 Feb 13 19:01:38.435406 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:01:38.435454 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:01:38.435488 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:01:38.435521 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:01:38.435550 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:01:38.435581 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:01:38.435622 kernel: audit: type=1403 audit(1739473296.909:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:01:38.435665 systemd[1]: Successfully loaded SELinux policy in 51.510ms. Feb 13 19:01:38.435715 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.843ms. Feb 13 19:01:38.435756 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:01:38.435787 systemd[1]: Detected virtualization amazon. Feb 13 19:01:38.435819 systemd[1]: Detected architecture arm64. Feb 13 19:01:38.435852 systemd[1]: Detected first boot. Feb 13 19:01:38.435889 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:01:38.435922 zram_generator::config[1402]: No configuration found. Feb 13 19:01:38.435957 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:01:38.435992 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:01:38.436023 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:01:38.436060 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:01:38.436094 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:01:38.436127 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:01:38.436195 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:01:38.436234 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:01:38.436267 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:01:38.436300 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:01:38.436331 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:01:38.436370 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:01:38.436403 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:01:38.436437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:01:38.436466 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:01:38.436498 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:01:38.436530 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Feb 13 19:01:38.436564 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:01:38.436596 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:01:38.436635 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:01:38.436671 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:01:38.436702 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:01:38.436735 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:01:38.436764 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:01:38.436797 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:01:38.436829 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:01:38.436860 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:01:38.436894 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:01:38.436929 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:01:38.436961 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:01:38.436993 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:01:38.437023 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:01:38.437054 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:01:38.437084 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:01:38.437116 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:01:38.437182 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:01:38.437216 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:01:38.437253 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:01:38.437284 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:01:38.437313 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:01:38.437350 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:01:38.437381 systemd[1]: Reached target machines.target - Containers. Feb 13 19:01:38.437412 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:01:38.437441 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:01:38.437469 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:01:38.437498 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:01:38.437533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:01:38.437562 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:01:38.437591 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:01:38.437620 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:01:38.437651 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 19:01:38.437680 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:01:38.437709 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:01:38.437737 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:01:38.437772 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:01:38.437801 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:01:38.437829 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:01:38.437857 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:01:38.437885 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:01:38.437913 kernel: fuse: init (API version 7.39) Feb 13 19:01:38.437956 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:01:38.437995 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:01:38.438070 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:01:38.438111 systemd[1]: Stopped verity-setup.service. Feb 13 19:01:38.438162 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:01:38.440177 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:01:38.440252 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:01:38.440284 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:01:38.440332 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:01:38.440364 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:01:38.440425 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:01:38.440456 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:01:38.440486 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:01:38.440517 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:01:38.440552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:01:38.440584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:01:38.440615 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:01:38.440702 systemd-journald[1480]: Collecting audit messages is disabled. Feb 13 19:01:38.440767 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:01:38.440798 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:01:38.440827 systemd-journald[1480]: Journal started Feb 13 19:01:38.440876 systemd-journald[1480]: Runtime Journal (/run/log/journal/ec23df47853a3b067fd6f426ae7ffe05) is 8.0M, max 75.3M, 67.3M free. Feb 13 19:01:37.926459 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:01:37.948550 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 19:01:37.949365 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:01:38.448441 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:01:38.448555 kernel: loop: module loaded Feb 13 19:01:38.454316 systemd[1]: Started systemd-journald.service - Journal Service. 
Feb 13 19:01:38.455528 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:01:38.457309 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:01:38.460361 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:01:38.463653 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:01:38.500167 kernel: ACPI: bus type drm_connector registered Feb 13 19:01:38.504272 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:01:38.505761 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:01:38.509847 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:01:38.521495 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:01:38.534383 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:01:38.536605 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:01:38.536671 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:01:38.544425 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:01:38.555569 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:01:38.563509 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:01:38.566547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:01:38.582660 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:01:38.595552 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:01:38.598369 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:01:38.609497 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:01:38.612422 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:01:38.618501 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:01:38.632525 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:01:38.643574 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:01:38.649845 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:01:38.653607 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:01:38.658492 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:01:38.701356 systemd-journald[1480]: Time spent on flushing to /var/log/journal/ec23df47853a3b067fd6f426ae7ffe05 is 51.979ms for 911 entries. Feb 13 19:01:38.701356 systemd-journald[1480]: System Journal (/var/log/journal/ec23df47853a3b067fd6f426ae7ffe05) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:01:38.767685 systemd-journald[1480]: Received client request to flush runtime journal. Feb 13 19:01:38.722634 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Feb 13 19:01:38.725315 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:01:38.750943 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:01:38.772298 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:01:38.776930 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:01:38.807186 kernel: loop0: detected capacity change from 0 to 201592 Feb 13 19:01:38.816956 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:01:38.820111 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:01:38.868395 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:01:38.870800 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:01:38.907616 systemd-tmpfiles[1528]: ACLs are not supported, ignoring. Feb 13 19:01:38.907655 systemd-tmpfiles[1528]: ACLs are not supported, ignoring. Feb 13 19:01:38.919214 kernel: loop1: detected capacity change from 0 to 53784 Feb 13 19:01:38.919648 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:01:38.924455 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:01:38.938527 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:01:38.951521 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:01:38.995174 udevadm[1550]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:01:39.049711 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:01:39.068866 kernel: loop2: detected capacity change from 0 to 116808 Feb 13 19:01:39.082428 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:01:39.141215 kernel: loop3: detected capacity change from 0 to 113536 Feb 13 19:01:39.158831 systemd-tmpfiles[1554]: ACLs are not supported, ignoring. Feb 13 19:01:39.158868 systemd-tmpfiles[1554]: ACLs are not supported, ignoring. Feb 13 19:01:39.180117 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:01:39.212188 kernel: loop4: detected capacity change from 0 to 201592 Feb 13 19:01:39.260188 kernel: loop5: detected capacity change from 0 to 53784 Feb 13 19:01:39.282225 kernel: loop6: detected capacity change from 0 to 116808 Feb 13 19:01:39.318187 kernel: loop7: detected capacity change from 0 to 113536 Feb 13 19:01:39.352275 (sd-merge)[1559]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 19:01:39.354990 (sd-merge)[1559]: Merged extensions into '/usr'. Feb 13 19:01:39.367607 systemd[1]: Reloading requested from client PID 1527 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:01:39.367897 systemd[1]: Reloading... Feb 13 19:01:39.573172 zram_generator::config[1586]: No configuration found. Feb 13 19:01:39.669130 ldconfig[1518]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:01:39.896782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 19:01:40.011472 systemd[1]: Reloading finished in 642 ms. Feb 13 19:01:40.048754 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:01:40.051544 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:01:40.054532 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:01:40.071539 systemd[1]: Starting ensure-sysext.service... Feb 13 19:01:40.082502 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:01:40.093584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:01:40.110503 systemd[1]: Reloading requested from client PID 1639 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:01:40.110536 systemd[1]: Reloading... Feb 13 19:01:40.170225 systemd-tmpfiles[1640]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:01:40.170928 systemd-tmpfiles[1640]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:01:40.175019 systemd-tmpfiles[1640]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:01:40.177557 systemd-tmpfiles[1640]: ACLs are not supported, ignoring. Feb 13 19:01:40.177731 systemd-tmpfiles[1640]: ACLs are not supported, ignoring. Feb 13 19:01:40.185464 systemd-udevd[1641]: Using default interface naming scheme 'v255'. Feb 13 19:01:40.186252 systemd-tmpfiles[1640]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:01:40.186266 systemd-tmpfiles[1640]: Skipping /boot Feb 13 19:01:40.218481 systemd-tmpfiles[1640]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:01:40.218513 systemd-tmpfiles[1640]: Skipping /boot Feb 13 19:01:40.277193 zram_generator::config[1665]: No configuration found. Feb 13 19:01:40.534585 (udev-worker)[1675]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:01:40.698460 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:01:40.845997 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:01:40.846501 systemd[1]: Reloading finished in 735 ms. Feb 13 19:01:40.865205 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1689) Feb 13 19:01:40.880648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:01:40.899284 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:01:40.972354 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:01:40.978997 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:01:40.988688 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:01:40.995752 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:01:41.003685 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:01:41.012698 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:01:41.020565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
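An aside on the sd-merge lines above, where the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' extensions were merged into /usr and triggered this reload: systemd-sysext only accepts an extension image whose embedded extension-release metadata is compatible with the host. The following is a rough Python approximation of that compatibility rule, assuming the documented field names; the real check lives inside systemd, not here.

```python
# Rough sketch of the systemd-sysext compatibility check, not its real code.
from pathlib import Path

def parse_release(path: Path) -> dict:
    """Parse KEY=value lines from an os-release-style file."""
    fields = {}
    for line in path.read_text().splitlines():
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            fields[key.strip()] = value.strip().strip('"')
    return fields

def sysext_compatible(ext_root: Path, host_release: Path) -> bool:
    # An extension must ship /usr/lib/extension-release.d/extension-release.<name>.
    candidates = sorted((ext_root / "usr/lib/extension-release.d").glob("extension-release.*"))
    if not candidates:
        return False
    ext = parse_release(candidates[0])
    host = parse_release(host_release)
    if ext.get("ID") == "_any":           # explicitly distro-agnostic image
        return True
    if ext.get("ID") != host.get("ID"):   # distro IDs must agree
        return False
    # Otherwise SYSEXT_LEVEL (preferred) or VERSION_ID has to line up.
    if "SYSEXT_LEVEL" in ext:
        return ext["SYSEXT_LEVEL"] == host.get("SYSEXT_LEVEL")
    return ext.get("VERSION_ID") == host.get("VERSION_ID")
```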
Feb 13 19:01:41.028016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:01:41.034689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:01:41.058897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:01:41.131576 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:01:41.134506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:01:41.149706 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:01:41.152232 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:01:41.197032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:01:41.199300 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:01:41.215371 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:01:41.216398 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:01:41.235845 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:01:41.275311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:01:41.285789 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:01:41.299817 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:01:41.311195 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:01:41.319657 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:01:41.321785 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:01:41.322244 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:01:41.338495 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:01:41.340462 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:01:41.345887 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:01:41.352923 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:01:41.356482 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:01:41.359936 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:01:41.361532 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:01:41.375531 systemd[1]: Finished ensure-sysext.service. Feb 13 19:01:41.381052 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:01:41.383302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:01:41.390777 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:01:41.391124 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:01:41.413313 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:01:41.416635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 13 19:01:41.418303 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:01:41.441246 augenrules[1882]: No rules Feb 13 19:01:41.449645 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:01:41.461967 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:01:41.466359 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:01:41.466512 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:01:41.469408 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:01:41.474197 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:01:41.475187 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:01:41.477996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:01:41.507200 lvm[1889]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:01:41.530810 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:01:41.538684 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:01:41.545870 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:01:41.569253 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:01:41.572802 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:01:41.584550 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:01:41.622192 lvm[1902]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:01:41.662496 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:01:41.673936 systemd-networkd[1814]: lo: Link UP Feb 13 19:01:41.674474 systemd-networkd[1814]: lo: Gained carrier Feb 13 19:01:41.677429 systemd-networkd[1814]: Enumeration completed Feb 13 19:01:41.677921 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:01:41.680221 systemd-networkd[1814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:01:41.680229 systemd-networkd[1814]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:01:41.682767 systemd-networkd[1814]: eth0: Link UP Feb 13 19:01:41.684371 systemd-networkd[1814]: eth0: Gained carrier Feb 13 19:01:41.684540 systemd-networkd[1814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:01:41.687428 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:01:41.698243 systemd-networkd[1814]: eth0: DHCPv4 address 172.31.26.138/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:01:41.725175 systemd-resolved[1815]: Positive Trust Anchors: Feb 13 19:01:41.725705 systemd-resolved[1815]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:01:41.725867 systemd-resolved[1815]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:01:41.735048 systemd-resolved[1815]: Defaulting to hostname 'linux'. Feb 13 19:01:41.738174 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:01:41.740577 systemd[1]: Reached target network.target - Network. Feb 13 19:01:41.742280 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:01:41.744493 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:01:41.746623 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:01:41.748922 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:01:41.751586 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:01:41.753834 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:01:41.756237 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:01:41.758543 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:01:41.758608 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:01:41.760313 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:01:41.763465 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:01:41.768563 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:01:41.779649 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:01:41.782745 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:01:41.784963 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:01:41.786835 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:01:41.788750 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:01:41.788819 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:01:41.797361 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:01:41.807512 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:01:41.814411 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:01:41.824516 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:01:41.839522 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:01:41.841532 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
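Two details in the networking lines above can be checked independently. The '. IN DS 20326 8 2 e06d44b8…' record that systemd-resolved installs as its positive trust anchor is the published DNSSEC root key (key tag 20326). And the DHCPv4 lease is internally consistent: 172.31.26.138/20 falls in 172.31.16.0/20, so the gateway 172.31.16.1 offered by the same server is the first host address of that block. Python's ipaddress module makes the /20 arithmetic explicit:

```python
import ipaddress

# Lease parameters exactly as logged by systemd-networkd above.
iface = ipaddress.ip_interface("172.31.26.138/20")
print(iface.network)                                         # 172.31.16.0/20
print(iface.network.broadcast_address)                       # 172.31.31.255
print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True
```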
Feb 13 19:01:41.850929 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:01:41.857450 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:01:41.866190 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:01:41.872591 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:01:41.880940 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:01:41.888520 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:01:41.898382 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:01:41.901839 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:01:41.904543 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:01:41.906580 jq[1912]: false Feb 13 19:01:41.907951 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:01:41.918343 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:01:41.933991 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:01:41.937230 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:01:41.973844 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:01:41.975057 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:01:41.984325 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:01:41.983366 dbus-daemon[1911]: [system] SELinux support is enabled Feb 13 19:01:41.993982 dbus-daemon[1911]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1814 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:01:41.995579 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:01:41.995655 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:01:41.998357 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:01:41.998395 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:01:42.027247 update_engine[1925]: I20250213 19:01:42.011384 1925 main.cc:92] Flatcar Update Engine starting Feb 13 19:01:42.027247 update_engine[1925]: I20250213 19:01:42.021283 1925 update_check_scheduler.cc:74] Next update check in 8m37s Feb 13 19:01:42.005933 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:01:42.009891 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:01:42.007853 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:01:42.023539 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:01:42.029224 systemd[1]: Started update-engine.service - Update Engine. 
Feb 13 19:01:42.040531 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:01:42.096182 jq[1926]: true Feb 13 19:01:42.091743 (ntainerd)[1946]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:01:42.133111 extend-filesystems[1913]: Found loop4 Feb 13 19:01:42.133111 extend-filesystems[1913]: Found loop5 Feb 13 19:01:42.133111 extend-filesystems[1913]: Found loop6 Feb 13 19:01:42.133111 extend-filesystems[1913]: Found loop7 Feb 13 19:01:42.133111 extend-filesystems[1913]: Found nvme0n1 Feb 13 19:01:42.133111 extend-filesystems[1913]: Found nvme0n1p1 Feb 13 19:01:42.133111 extend-filesystems[1913]: Found nvme0n1p2 Feb 13 19:01:42.133111 extend-filesystems[1913]: Found nvme0n1p3 Feb 13 19:01:42.133111 extend-filesystems[1913]: Found usr Feb 13 19:01:42.133111 extend-filesystems[1913]: Found nvme0n1p4 Feb 13 19:01:42.133111 extend-filesystems[1913]: Found nvme0n1p6 Feb 13 19:01:42.133111 extend-filesystems[1913]: Found nvme0n1p7 Feb 13 19:01:42.133111 extend-filesystems[1913]: Found nvme0n1p9 Feb 13 19:01:42.133111 extend-filesystems[1913]: Checking size of /dev/nvme0n1p9 Feb 13 19:01:42.189646 tar[1942]: linux-arm64/LICENSE Feb 13 19:01:42.189646 tar[1942]: linux-arm64/helm Feb 13 19:01:42.201550 coreos-metadata[1910]: Feb 13 19:01:42.201 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:01:42.208453 ntpd[1917]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:08:36 UTC 2025 (1): Starting Feb 13 19:01:42.227634 coreos-metadata[1910]: Feb 13 19:01:42.209 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:01:42.227634 coreos-metadata[1910]: Feb 13 19:01:42.219 INFO Fetch successful Feb 13 19:01:42.227634 coreos-metadata[1910]: Feb 13 19:01:42.219 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:01:42.227634 coreos-metadata[1910]: Feb 13 19:01:42.224 INFO Fetch successful Feb 13 19:01:42.227634 coreos-metadata[1910]: Feb 13 19:01:42.224 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:01:42.227634 coreos-metadata[1910]: Feb 13 19:01:42.226 INFO Fetch successful Feb 13 19:01:42.227634 coreos-metadata[1910]: Feb 13 19:01:42.226 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:01:42.228051 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:08:36 UTC 2025 (1): Starting Feb 13 19:01:42.228051 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:01:42.228051 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: ---------------------------------------------------- Feb 13 19:01:42.228051 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:01:42.228051 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:01:42.228051 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: corporation. 
Support and training for ntp-4 are Feb 13 19:01:42.228051 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: available at https://www.nwtime.org/support Feb 13 19:01:42.228051 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: ---------------------------------------------------- Feb 13 19:01:42.208518 ntpd[1917]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:01:42.234734 coreos-metadata[1910]: Feb 13 19:01:42.232 INFO Fetch successful Feb 13 19:01:42.234734 coreos-metadata[1910]: Feb 13 19:01:42.233 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:01:42.234847 jq[1949]: true Feb 13 19:01:42.208538 ntpd[1917]: ---------------------------------------------------- Feb 13 19:01:42.239510 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: proto: precision = 0.096 usec (-23) Feb 13 19:01:42.239639 coreos-metadata[1910]: Feb 13 19:01:42.238 INFO Fetch failed with 404: resource not found Feb 13 19:01:42.239639 coreos-metadata[1910]: Feb 13 19:01:42.238 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:01:42.208557 ntpd[1917]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:01:42.208575 ntpd[1917]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:01:42.208595 ntpd[1917]: corporation. Support and training for ntp-4 are Feb 13 19:01:42.208612 ntpd[1917]: available at https://www.nwtime.org/support Feb 13 19:01:42.208633 ntpd[1917]: ---------------------------------------------------- Feb 13 19:01:42.252526 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: basedate set to 2025-02-01 Feb 13 19:01:42.252526 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: gps base set to 2025-02-02 (week 2352) Feb 13 19:01:42.252526 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:01:42.252526 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:01:42.252897 coreos-metadata[1910]: Feb 13 19:01:42.240 INFO Fetch successful Feb 13 19:01:42.252897 coreos-metadata[1910]: Feb 13 19:01:42.240 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:01:42.252897 coreos-metadata[1910]: Feb 13 19:01:42.247 INFO Fetch successful Feb 13 19:01:42.252897 coreos-metadata[1910]: Feb 13 19:01:42.247 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:01:42.252897 coreos-metadata[1910]: Feb 13 19:01:42.252 INFO Fetch successful Feb 13 19:01:42.252897 coreos-metadata[1910]: Feb 13 19:01:42.252 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:01:42.238827 ntpd[1917]: proto: precision = 0.096 usec (-23) Feb 13 19:01:42.261029 extend-filesystems[1913]: Resized partition /dev/nvme0n1p9 Feb 13 19:01:42.267559 coreos-metadata[1910]: Feb 13 19:01:42.257 INFO Fetch successful Feb 13 19:01:42.267559 coreos-metadata[1910]: Feb 13 19:01:42.257 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:01:42.267559 coreos-metadata[1910]: Feb 13 19:01:42.259 INFO Fetch successful Feb 13 19:01:42.242359 ntpd[1917]: basedate set to 2025-02-01 Feb 13 19:01:42.267855 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:01:42.267855 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: Listen normally on 3 eth0 172.31.26.138:123 Feb 13 19:01:42.267855 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: Listen normally on 4 lo [::1]:123 Feb 13 19:01:42.267855 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: bind(21) 
AF_INET6 fe80::4ec:61ff:fec1:7a75%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:01:42.267855 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: unable to create socket on eth0 (5) for fe80::4ec:61ff:fec1:7a75%2#123 Feb 13 19:01:42.267855 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: failed to init interface for address fe80::4ec:61ff:fec1:7a75%2 Feb 13 19:01:42.267855 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: Listening on routing socket on fd #21 for interface updates Feb 13 19:01:42.242398 ntpd[1917]: gps base set to 2025-02-02 (week 2352) Feb 13 19:01:42.251706 ntpd[1917]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:01:42.251822 ntpd[1917]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:01:42.262821 ntpd[1917]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:01:42.262934 ntpd[1917]: Listen normally on 3 eth0 172.31.26.138:123 Feb 13 19:01:42.263011 ntpd[1917]: Listen normally on 4 lo [::1]:123 Feb 13 19:01:42.263099 ntpd[1917]: bind(21) AF_INET6 fe80::4ec:61ff:fec1:7a75%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:01:42.263237 ntpd[1917]: unable to create socket on eth0 (5) for fe80::4ec:61ff:fec1:7a75%2#123 Feb 13 19:01:42.263271 ntpd[1917]: failed to init interface for address fe80::4ec:61ff:fec1:7a75%2 Feb 13 19:01:42.263339 ntpd[1917]: Listening on routing socket on fd #21 for interface updates Feb 13 19:01:42.280473 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:01:42.284413 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:01:42.284413 ntpd[1917]: 13 Feb 19:01:42 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:01:42.280533 ntpd[1917]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:01:42.310225 extend-filesystems[1965]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:01:42.305561 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:01:42.328171 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:01:42.418071 systemd-logind[1924]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:01:42.418979 systemd-logind[1924]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:01:42.421430 systemd-logind[1924]: New seat seat0. Feb 13 19:01:42.436349 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:01:42.452126 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:01:42.455309 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:01:42.506700 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:01:42.518819 extend-filesystems[1965]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:01:42.518819 extend-filesystems[1965]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:01:42.518819 extend-filesystems[1965]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:01:42.549451 extend-filesystems[1913]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:01:42.558683 bash[1993]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:01:42.540832 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:01:42.542298 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:01:42.553669 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
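Two things from this stretch are worth unpacking. The ext4 resize grows the root filesystem from 553472 to 1489915 blocks of 4 KiB, i.e. from about 2.1 GiB to roughly 5.7 GiB, the usual first-boot expansion into the full root partition. And the PUT-then-GET pattern in the coreos-metadata lines is the standard EC2 IMDSv2 session flow (the lone 404 on meta-data/ipv6 simply reflects that no IPv6 address is assigned, and the agent carries on). A minimal Python sketch of that flow follows; the endpoint, API date, and header names are the documented IMDSv2 interface, but this is an illustration, not the metadata agent's actual implementation.

```python
# Minimal sketch of the IMDSv2 flow visible in the coreos-metadata lines:
# PUT a session token first, then GET metadata with the token attached.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 300) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/{path}",  # same API date the agent uses above
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

token = imds_token()
print(imds_get("meta-data/instance-id", token))  # only works on an EC2 instance
```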
Feb 13 19:01:42.592053 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 19:01:42.592989 dbus-daemon[1911]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1937 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 19:01:42.616953 systemd[1]: Starting sshkeys.service...
Feb 13 19:01:42.620648 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 19:01:42.641583 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 19:01:42.654832 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1688)
Feb 13 19:01:42.670463 polkitd[2005]: Started polkitd version 121
Feb 13 19:01:42.672482 locksmithd[1940]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:01:42.700664 polkitd[2005]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 19:01:42.700802 polkitd[2005]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 19:01:42.708729 polkitd[2005]: Finished loading, compiling and executing 2 rules
Feb 13 19:01:42.718520 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 19:01:42.719007 polkitd[2005]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 19:01:42.719375 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 19:01:42.729190 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 19:01:42.737365 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 19:01:42.797647 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 19:01:42.826420 systemd-hostnamed[1937]: Hostname set to <ip-172-31-26-138> (transient)
Feb 13 19:01:42.826627 systemd-resolved[1815]: System hostname changed to 'ip-172-31-26-138'.
Feb 13 19:01:42.940471 coreos-metadata[2040]: Feb 13 19:01:42.940 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 19:01:42.943848 coreos-metadata[2040]: Feb 13 19:01:42.943 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 19:01:42.944918 coreos-metadata[2040]: Feb 13 19:01:42.944 INFO Fetch successful
Feb 13 19:01:42.944918 coreos-metadata[2040]: Feb 13 19:01:42.944 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 19:01:42.950182 coreos-metadata[2040]: Feb 13 19:01:42.945 INFO Fetch successful
Feb 13 19:01:42.952642 unknown[2040]: wrote ssh authorized keys file for user: core
Feb 13 19:01:42.967179 containerd[1946]: time="2025-02-13T19:01:42.965824307Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:01:43.011327 update-ssh-keys[2078]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:01:43.013058 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 19:01:43.028371 systemd[1]: Finished sshkeys.service.
Feb 13 19:01:43.123009 containerd[1946]: time="2025-02-13T19:01:43.122431424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:01:43.132174 containerd[1946]: time="2025-02-13T19:01:43.131454620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:01:43.132174 containerd[1946]: time="2025-02-13T19:01:43.131521136Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:01:43.132174 containerd[1946]: time="2025-02-13T19:01:43.131558792Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:01:43.132174 containerd[1946]: time="2025-02-13T19:01:43.131861372Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:01:43.132174 containerd[1946]: time="2025-02-13T19:01:43.131894828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:01:43.132174 containerd[1946]: time="2025-02-13T19:01:43.132014660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:01:43.132174 containerd[1946]: time="2025-02-13T19:01:43.132042680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:01:43.134768 containerd[1946]: time="2025-02-13T19:01:43.134676908Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:01:43.137331 containerd[1946]: time="2025-02-13T19:01:43.137257424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:01:43.137534 containerd[1946]: time="2025-02-13T19:01:43.137484296Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:01:43.137651 containerd[1946]: time="2025-02-13T19:01:43.137621804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:01:43.144064 containerd[1946]: time="2025-02-13T19:01:43.143324972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:01:43.157363 containerd[1946]: time="2025-02-13T19:01:43.157218716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:01:43.159457 containerd[1946]: time="2025-02-13T19:01:43.159320096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:01:43.159457 containerd[1946]: time="2025-02-13T19:01:43.159396272Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:01:43.162335 systemd-networkd[1814]: eth0: Gained IPv6LL
Feb 13 19:01:43.168909 containerd[1946]: time="2025-02-13T19:01:43.168419888Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:01:43.168909 containerd[1946]: time="2025-02-13T19:01:43.168605540Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:01:43.175108 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:01:43.181072 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 19:01:43.194764 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.198274389Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.198383265Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.198419613Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.198455529Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.198504273Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.198810165Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.199230849Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.199458033Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.199495761Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.199531593Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.199568565Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.199599369Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.199633857Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:01:43.199786 containerd[1946]: time="2025-02-13T19:01:43.199664877Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:01:43.200477 containerd[1946]: time="2025-02-13T19:01:43.199700877Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:01:43.200477 containerd[1946]: time="2025-02-13T19:01:43.199732473Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:01:43.200477 containerd[1946]: time="2025-02-13T19:01:43.199762485Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:01:43.200477 containerd[1946]: time="2025-02-13T19:01:43.199791909Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:01:43.200477 containerd[1946]: time="2025-02-13T19:01:43.199843713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.200477 containerd[1946]: time="2025-02-13T19:01:43.199878081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.200477 containerd[1946]: time="2025-02-13T19:01:43.199909089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.200477 containerd[1946]: time="2025-02-13T19:01:43.199940409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.200477 containerd[1946]: time="2025-02-13T19:01:43.199970637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.200477 containerd[1946]: time="2025-02-13T19:01:43.200002029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.200477 containerd[1946]: time="2025-02-13T19:01:43.200029305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.200477 containerd[1946]: time="2025-02-13T19:01:43.200058537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.200477 containerd[1946]: time="2025-02-13T19:01:43.200089977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.207529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:01:43.213361 containerd[1946]: time="2025-02-13T19:01:43.200123577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.213361 containerd[1946]: time="2025-02-13T19:01:43.211270077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.213361 containerd[1946]: time="2025-02-13T19:01:43.211320141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.213361 containerd[1946]: time="2025-02-13T19:01:43.211354053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.213361 containerd[1946]: time="2025-02-13T19:01:43.211392321Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:01:43.213361 containerd[1946]: time="2025-02-13T19:01:43.211457301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.213361 containerd[1946]: time="2025-02-13T19:01:43.211492593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.213361 containerd[1946]: time="2025-02-13T19:01:43.211522401Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:01:43.213361 containerd[1946]: time="2025-02-13T19:01:43.211677117Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:01:43.213361 containerd[1946]: time="2025-02-13T19:01:43.211718397Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:01:43.213361 containerd[1946]: time="2025-02-13T19:01:43.211744413Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:01:43.213361 containerd[1946]: time="2025-02-13T19:01:43.211773585Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:01:43.213361 containerd[1946]: time="2025-02-13T19:01:43.211798065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.214244 containerd[1946]: time="2025-02-13T19:01:43.211832601Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:01:43.214244 containerd[1946]: time="2025-02-13T19:01:43.211856781Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:01:43.214244 containerd[1946]: time="2025-02-13T19:01:43.211887669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:01:43.213667 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:01:43.214493 containerd[1946]: time="2025-02-13T19:01:43.212423061Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:01:43.214493 containerd[1946]: time="2025-02-13T19:01:43.212519709Z" level=info msg="Connect containerd service"
Feb 13 19:01:43.214493 containerd[1946]: time="2025-02-13T19:01:43.212593497Z" level=info msg="using legacy CRI server"
Feb 13 19:01:43.214493 containerd[1946]: time="2025-02-13T19:01:43.212612805Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:01:43.214493 containerd[1946]: time="2025-02-13T19:01:43.212905413Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:01:43.232255 containerd[1946]: time="2025-02-13T19:01:43.223296669Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:01:43.232255 containerd[1946]: time="2025-02-13T19:01:43.223893885Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:01:43.232255 containerd[1946]: time="2025-02-13T19:01:43.224009241Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:01:43.244037 containerd[1946]: time="2025-02-13T19:01:43.224127081Z" level=info msg="Start subscribing containerd event"
Feb 13 19:01:43.244037 containerd[1946]: time="2025-02-13T19:01:43.243394593Z" level=info msg="Start recovering state"
Feb 13 19:01:43.244037 containerd[1946]: time="2025-02-13T19:01:43.243549501Z" level=info msg="Start event monitor"
Feb 13 19:01:43.244037 containerd[1946]: time="2025-02-13T19:01:43.243583665Z" level=info msg="Start snapshots syncer"
Feb 13 19:01:43.244037 containerd[1946]: time="2025-02-13T19:01:43.243608913Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:01:43.244037 containerd[1946]: time="2025-02-13T19:01:43.243630177Z" level=info msg="Start streaming server"
Feb 13 19:01:43.243978 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 19:01:43.249430 containerd[1946]: time="2025-02-13T19:01:43.247496949Z" level=info msg="containerd successfully booted in 0.289827s"
Feb 13 19:01:43.353330 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:01:43.435629 amazon-ssm-agent[2106]: Initializing new seelog logger
Feb 13 19:01:43.435629 amazon-ssm-agent[2106]: New Seelog Logger Creation Complete
Feb 13 19:01:43.435629 amazon-ssm-agent[2106]: 2025/02/13 19:01:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:01:43.435629 amazon-ssm-agent[2106]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:01:43.437281 amazon-ssm-agent[2106]: 2025/02/13 19:01:43 processing appconfig overrides
Feb 13 19:01:43.437825 amazon-ssm-agent[2106]: 2025/02/13 19:01:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:01:43.437825 amazon-ssm-agent[2106]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:01:43.437979 amazon-ssm-agent[2106]: 2025/02/13 19:01:43 processing appconfig overrides
Feb 13 19:01:43.438903 amazon-ssm-agent[2106]: 2025/02/13 19:01:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:01:43.438903 amazon-ssm-agent[2106]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:01:43.439033 amazon-ssm-agent[2106]: 2025/02/13 19:01:43 processing appconfig overrides
Feb 13 19:01:43.440522 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO Proxy environment variables:
Feb 13 19:01:43.445361 amazon-ssm-agent[2106]: 2025/02/13 19:01:43 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:01:43.445361 amazon-ssm-agent[2106]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 19:01:43.445511 amazon-ssm-agent[2106]: 2025/02/13 19:01:43 processing appconfig overrides
Feb 13 19:01:43.540973 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO https_proxy:
Feb 13 19:01:43.642176 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO http_proxy:
Feb 13 19:01:43.740954 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO no_proxy:
Feb 13 19:01:43.837299 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 19:01:43.936371 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO Checking if agent identity type EC2 can be assumed
Feb 13 19:01:44.035712 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO Agent will take identity from EC2
Feb 13 19:01:44.147825 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:01:44.168558 sshd_keygen[1958]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:01:44.248191 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:01:44.271801 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:01:44.288021 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:01:44.300794 systemd[1]: Started sshd@0-172.31.26.138:22-147.75.109.163:46238.service - OpenSSH per-connection server daemon (147.75.109.163:46238).
Feb 13 19:01:44.346323 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:01:44.354784 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:01:44.357329 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:01:44.371724 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:01:44.432183 tar[1942]: linux-arm64/README.md
Feb 13 19:01:44.446234 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 19:01:44.448706 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:01:44.464798 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:01:44.472443 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 19:01:44.475872 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:01:44.488239 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 19:01:44.547057 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Feb 13 19:01:44.645826 sshd[2144]: Accepted publickey for core from 147.75.109.163 port 46238 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:01:44.647528 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 19:01:44.650234 sshd-session[2144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:44.673839 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 19:01:44.686912 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 19:01:44.696240 systemd-logind[1924]: New session 1 of user core.
Feb 13 19:01:44.729782 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 19:01:44.743831 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 19:01:44.752246 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 19:01:44.768262 (systemd)[2158]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 19:01:44.855435 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO [Registrar] Starting registrar module
Feb 13 19:01:44.953370 amazon-ssm-agent[2106]: 2025-02-13 19:01:43 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 19:01:45.069551 systemd[2158]: Queued start job for default target default.target.
Feb 13 19:01:45.080784 systemd[2158]: Created slice app.slice - User Application Slice.
Feb 13 19:01:45.081169 systemd[2158]: Reached target paths.target - Paths.
Feb 13 19:01:45.081214 systemd[2158]: Reached target timers.target - Timers.
Feb 13 19:01:45.091496 systemd[2158]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 19:01:45.127702 systemd[2158]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 19:01:45.128028 systemd[2158]: Reached target sockets.target - Sockets.
Feb 13 19:01:45.128069 systemd[2158]: Reached target basic.target - Basic System.
Feb 13 19:01:45.128227 systemd[2158]: Reached target default.target - Main User Target.
Feb 13 19:01:45.128313 systemd[2158]: Startup finished in 341ms.
Feb 13 19:01:45.128379 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 19:01:45.137882 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 19:01:45.210522 ntpd[1917]: Listen normally on 6 eth0 [fe80::4ec:61ff:fec1:7a75%2]:123
Feb 13 19:01:45.314357 systemd[1]: Started sshd@1-172.31.26.138:22-147.75.109.163:46248.service - OpenSSH per-connection server daemon (147.75.109.163:46248).
Feb 13 19:01:45.409506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:01:45.412976 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 19:01:45.416644 systemd[1]: Startup finished in 1.105s (kernel) + 8.110s (initrd) + 8.556s (userspace) = 17.773s.
Feb 13 19:01:45.424187 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:01:45.576294 sshd[2169]: Accepted publickey for core from 147.75.109.163 port 46248 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:01:45.581512 sshd-session[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:45.602288 systemd-logind[1924]: New session 2 of user core.
Feb 13 19:01:45.605378 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 19:01:45.757784 sshd[2182]: Connection closed by 147.75.109.163 port 46248
Feb 13 19:01:45.755588 sshd-session[2169]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:45.769179 systemd[1]: sshd@1-172.31.26.138:22-147.75.109.163:46248.service: Deactivated successfully.
Feb 13 19:01:45.775700 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 19:01:45.784560 systemd-logind[1924]: Session 2 logged out. Waiting for processes to exit.
Feb 13 19:01:45.809894 systemd[1]: Started sshd@2-172.31.26.138:22-147.75.109.163:46254.service - OpenSSH per-connection server daemon (147.75.109.163:46254).
Feb 13 19:01:45.814582 systemd-logind[1924]: Removed session 2.
Feb 13 19:01:46.018234 sshd[2190]: Accepted publickey for core from 147.75.109.163 port 46254 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:01:46.025605 sshd-session[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:46.044263 systemd-logind[1924]: New session 3 of user core.
Feb 13 19:01:46.051430 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 19:01:46.180878 sshd[2192]: Connection closed by 147.75.109.163 port 46254
Feb 13 19:01:46.181445 sshd-session[2190]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:46.191315 systemd[1]: sshd@2-172.31.26.138:22-147.75.109.163:46254.service: Deactivated successfully.
Feb 13 19:01:46.199351 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 19:01:46.208639 systemd-logind[1924]: Session 3 logged out. Waiting for processes to exit.
Feb 13 19:01:46.231721 systemd[1]: Started sshd@3-172.31.26.138:22-147.75.109.163:46266.service - OpenSSH per-connection server daemon (147.75.109.163:46266).
Feb 13 19:01:46.235053 systemd-logind[1924]: Removed session 3.
Feb 13 19:01:46.480095 sshd[2198]: Accepted publickey for core from 147.75.109.163 port 46266 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:01:46.483535 sshd-session[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:46.487540 amazon-ssm-agent[2106]: 2025-02-13 19:01:46 INFO [EC2Identity] EC2 registration was successful.
Feb 13 19:01:46.501509 systemd-logind[1924]: New session 4 of user core.
Feb 13 19:01:46.505491 kubelet[2176]: E0213 19:01:46.505397 2176 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:01:46.510533 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 19:01:46.511339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:01:46.511695 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:01:46.512366 systemd[1]: kubelet.service: Consumed 1.374s CPU time.
Feb 13 19:01:46.521344 amazon-ssm-agent[2106]: 2025-02-13 19:01:46 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 19:01:46.521634 amazon-ssm-agent[2106]: 2025-02-13 19:01:46 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 19:01:46.521858 amazon-ssm-agent[2106]: 2025-02-13 19:01:46 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 19:01:46.587809 amazon-ssm-agent[2106]: 2025-02-13 19:01:46 INFO [CredentialRefresher] Next credential rotation will be in 31.058311624033333 minutes
Feb 13 19:01:46.644338 sshd[2201]: Connection closed by 147.75.109.163 port 46266
Feb 13 19:01:46.645398 sshd-session[2198]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:46.650985 systemd[1]: sshd@3-172.31.26.138:22-147.75.109.163:46266.service: Deactivated successfully.
Feb 13 19:01:46.654934 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 19:01:46.660073 systemd-logind[1924]: Session 4 logged out. Waiting for processes to exit.
Feb 13 19:01:46.662281 systemd-logind[1924]: Removed session 4.
Feb 13 19:01:46.688856 systemd[1]: Started sshd@4-172.31.26.138:22-147.75.109.163:46276.service - OpenSSH per-connection server daemon (147.75.109.163:46276).
Feb 13 19:01:46.886653 sshd[2206]: Accepted publickey for core from 147.75.109.163 port 46276 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:01:46.889449 sshd-session[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:46.897700 systemd-logind[1924]: New session 5 of user core.
Feb 13 19:01:46.907445 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 19:01:47.027890 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 19:01:47.029256 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:01:47.044996 sudo[2209]: pam_unix(sudo:session): session closed for user root
Feb 13 19:01:47.068082 sshd[2208]: Connection closed by 147.75.109.163 port 46276
Feb 13 19:01:47.069347 sshd-session[2206]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:47.075741 systemd[1]: sshd@4-172.31.26.138:22-147.75.109.163:46276.service: Deactivated successfully.
Feb 13 19:01:47.080259 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 19:01:47.083747 systemd-logind[1924]: Session 5 logged out. Waiting for processes to exit.
Feb 13 19:01:47.086509 systemd-logind[1924]: Removed session 5.
Feb 13 19:01:47.106798 systemd[1]: Started sshd@5-172.31.26.138:22-147.75.109.163:46286.service - OpenSSH per-connection server daemon (147.75.109.163:46286).
Feb 13 19:01:47.298260 sshd[2214]: Accepted publickey for core from 147.75.109.163 port 46286 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:01:47.301706 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:47.310696 systemd-logind[1924]: New session 6 of user core.
Feb 13 19:01:47.318513 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:01:47.425496 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 19:01:47.426313 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:01:47.433438 sudo[2218]: pam_unix(sudo:session): session closed for user root
Feb 13 19:01:47.445264 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 19:01:47.445956 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:01:47.479838 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:01:47.536244 augenrules[2241]: No rules
Feb 13 19:01:47.540798 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:01:47.541749 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:01:47.544481 sudo[2217]: pam_unix(sudo:session): session closed for user root
Feb 13 19:01:47.557383 amazon-ssm-agent[2106]: 2025-02-13 19:01:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 19:01:47.570200 sshd[2216]: Connection closed by 147.75.109.163 port 46286
Feb 13 19:01:47.571246 sshd-session[2214]: pam_unix(sshd:session): session closed for user core
Feb 13 19:01:47.580815 systemd[1]: sshd@5-172.31.26.138:22-147.75.109.163:46286.service: Deactivated successfully.
Feb 13 19:01:47.586686 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:01:47.589497 systemd-logind[1924]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:01:47.616498 systemd[1]: Started sshd@6-172.31.26.138:22-147.75.109.163:46294.service - OpenSSH per-connection server daemon (147.75.109.163:46294).
Feb 13 19:01:47.619793 systemd-logind[1924]: Removed session 6.
Feb 13 19:01:47.658571 amazon-ssm-agent[2106]: 2025-02-13 19:01:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2246) started
Feb 13 19:01:47.760445 amazon-ssm-agent[2106]: 2025-02-13 19:01:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 19:01:47.823981 sshd[2253]: Accepted publickey for core from 147.75.109.163 port 46294 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:01:47.828315 sshd-session[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:01:47.838432 systemd-logind[1924]: New session 7 of user core.
Feb 13 19:01:47.844498 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:01:47.950826 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 19:01:47.951565 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:01:48.524055 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 19:01:48.527516 (dockerd)[2279]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 19:01:48.904159 dockerd[2279]: time="2025-02-13T19:01:48.903921353Z" level=info msg="Starting up"
Feb 13 19:01:49.158094 dockerd[2279]: time="2025-02-13T19:01:49.157573094Z" level=info msg="Loading containers: start."
Feb 13 19:01:49.514647 systemd-resolved[1815]: Clock change detected. Flushing caches.
Feb 13 19:01:49.725287 kernel: Initializing XFRM netlink socket
Feb 13 19:01:49.759533 (udev-worker)[2307]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:01:49.867527 systemd-networkd[1814]: docker0: Link UP
Feb 13 19:01:49.916861 dockerd[2279]: time="2025-02-13T19:01:49.916807954Z" level=info msg="Loading containers: done."
Feb 13 19:01:49.941653 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1328175025-merged.mount: Deactivated successfully.
Feb 13 19:01:49.944576 dockerd[2279]: time="2025-02-13T19:01:49.944487466Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 19:01:49.944737 dockerd[2279]: time="2025-02-13T19:01:49.944642938Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 19:01:49.944905 dockerd[2279]: time="2025-02-13T19:01:49.944860294Z" level=info msg="Daemon has completed initialization"
Feb 13 19:01:50.001761 dockerd[2279]: time="2025-02-13T19:01:50.001637838Z" level=info msg="API listen on /run/docker.sock"
Feb 13 19:01:50.002244 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 19:01:50.928157 containerd[1946]: time="2025-02-13T19:01:50.927584111Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\""
Feb 13 19:01:51.573890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2524912937.mount: Deactivated successfully.
Feb 13 19:01:53.141212 containerd[1946]: time="2025-02-13T19:01:53.140525758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:01:53.143011 containerd[1946]: time="2025-02-13T19:01:53.142913158Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218236"
Feb 13 19:01:53.144050 containerd[1946]: time="2025-02-13T19:01:53.143456386Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:01:53.150855 containerd[1946]: time="2025-02-13T19:01:53.150767314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:01:53.153749 containerd[1946]: time="2025-02-13T19:01:53.153006382Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 2.225353367s"
Feb 13 19:01:53.153749 containerd[1946]: time="2025-02-13T19:01:53.153123862Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\""
Feb 13 19:01:53.154149 containerd[1946]: time="2025-02-13T19:01:53.154102738Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\""
Feb 13 19:01:54.883749 containerd[1946]: time="2025-02-13T19:01:54.883669790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:01:54.885834 containerd[1946]: time="2025-02-13T19:01:54.885745322Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528145"
Feb 13 19:01:54.887081 containerd[1946]: time="2025-02-13T19:01:54.886991678Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:01:54.898339 containerd[1946]: time="2025-02-13T19:01:54.898248530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:01:54.901062 containerd[1946]: time="2025-02-13T19:01:54.900461042Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.74629888s"
Feb 13 19:01:54.901062 containerd[1946]: time="2025-02-13T19:01:54.900515786Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\""
Feb 13 19:01:54.902005 containerd[1946]: time="2025-02-13T19:01:54.901889882Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\""
Feb 13 19:01:56.524148 containerd[1946]: time="2025-02-13T19:01:56.524073542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:01:56.526185 containerd[1946]: time="2025-02-13T19:01:56.526104962Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480800"
Feb 13 19:01:56.527092 containerd[1946]: time="2025-02-13T19:01:56.526570262Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:01:56.532061 containerd[1946]: time="2025-02-13T19:01:56.531941510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:01:56.534442 containerd[1946]: time="2025-02-13T19:01:56.534238550Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.632088472s"
Feb 13 19:01:56.534442 containerd[1946]: time="2025-02-13T19:01:56.534294110Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\""
Feb 13 19:01:56.535495 containerd[1946]: time="2025-02-13T19:01:56.535123910Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\""
Feb 13 19:01:57.065915 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:01:57.077375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:01:57.508239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:01:57.520880 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:01:57.624081 kubelet[2544]: E0213 19:01:57.623418 2544 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:01:57.632659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:01:57.633021 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:01:58.085180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3209912535.mount: Deactivated successfully.
Feb 13 19:01:58.679451 containerd[1946]: time="2025-02-13T19:01:58.679381541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:01:58.681706 containerd[1946]: time="2025-02-13T19:01:58.681620405Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363382"
Feb 13 19:01:58.683138 containerd[1946]: time="2025-02-13T19:01:58.683022377Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:01:58.687359 containerd[1946]: time="2025-02-13T19:01:58.687237929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:01:58.689055 containerd[1946]: time="2025-02-13T19:01:58.688778477Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 2.153602931s"
Feb 13 19:01:58.689055 containerd[1946]: time="2025-02-13T19:01:58.688837049Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\""
Feb 13 19:01:58.689600 containerd[1946]: time="2025-02-13T19:01:58.689555981Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Feb 13 19:01:59.295345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3335367015.mount: Deactivated successfully.
Feb 13 19:02:00.496747 containerd[1946]: time="2025-02-13T19:02:00.496497642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:00.498728 containerd[1946]: time="2025-02-13T19:02:00.498658974Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Feb 13 19:02:00.499425 containerd[1946]: time="2025-02-13T19:02:00.499133262Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:00.505319 containerd[1946]: time="2025-02-13T19:02:00.505257486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:00.507929 containerd[1946]: time="2025-02-13T19:02:00.507711918Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.818097689s"
Feb 13 19:02:00.507929 containerd[1946]: time="2025-02-13T19:02:00.507768486Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Feb 13 19:02:00.509007 containerd[1946]: time="2025-02-13T19:02:00.508745346Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 19:02:01.029806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2462109619.mount: Deactivated successfully.
Feb 13 19:02:01.037381 containerd[1946]: time="2025-02-13T19:02:01.037259873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:01.039012 containerd[1946]: time="2025-02-13T19:02:01.038931617Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Feb 13 19:02:01.041849 containerd[1946]: time="2025-02-13T19:02:01.041774933Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:01.047111 containerd[1946]: time="2025-02-13T19:02:01.047024429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:01.053362 containerd[1946]: time="2025-02-13T19:02:01.051406853Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 542.605359ms"
Feb 13 19:02:01.053362 containerd[1946]: time="2025-02-13T19:02:01.051479825Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Feb 13 19:02:01.055856 containerd[1946]: time="2025-02-13T19:02:01.055792925Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Feb 13 19:02:01.692709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2596364807.mount: Deactivated successfully.
Feb 13 19:02:04.667711 containerd[1946]: time="2025-02-13T19:02:04.667374587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:04.669732 containerd[1946]: time="2025-02-13T19:02:04.669661967Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429"
Feb 13 19:02:04.670671 containerd[1946]: time="2025-02-13T19:02:04.670106603Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:04.676626 containerd[1946]: time="2025-02-13T19:02:04.676540883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:04.679335 containerd[1946]: time="2025-02-13T19:02:04.679110479Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.623252826s"
Feb 13 19:02:04.679335 containerd[1946]: time="2025-02-13T19:02:04.679170587Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Feb 13 19:02:07.883482 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 19:02:07.897231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:02:08.246480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:02:08.251355 (kubelet)[2694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:02:08.335069 kubelet[2694]: E0213 19:02:08.328932 2694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:02:08.333512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:02:08.333835 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:02:10.972677 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:02:10.984524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:02:11.035140 systemd[1]: Reloading requested from client PID 2708 ('systemctl') (unit session-7.scope)...
Feb 13 19:02:11.035174 systemd[1]: Reloading...
Feb 13 19:02:11.265087 zram_generator::config[2752]: No configuration found.
Feb 13 19:02:11.501304 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:02:11.665921 systemd[1]: Reloading finished in 630 ms.
Feb 13 19:02:11.761463 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 19:02:11.761695 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 19:02:11.762267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:02:11.769599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:02:12.068802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:02:12.083589 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:02:12.155712 kubelet[2813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:02:12.155712 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:02:12.155712 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:02:12.156337 kubelet[2813]: I0213 19:02:12.155815 2813 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:02:13.138363 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 19:02:13.425456 kubelet[2813]: I0213 19:02:13.425291 2813 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 19:02:13.425456 kubelet[2813]: I0213 19:02:13.425347 2813 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:02:13.426100 kubelet[2813]: I0213 19:02:13.425831 2813 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 19:02:13.477089 kubelet[2813]: I0213 19:02:13.476595 2813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:02:13.477089 kubelet[2813]: E0213 19:02:13.476983 2813 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.26.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.138:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:13.489957 kubelet[2813]: E0213 19:02:13.489888 2813 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 19:02:13.489957 kubelet[2813]: I0213 19:02:13.489946 2813 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 19:02:13.499089 kubelet[2813]: I0213 19:02:13.498783 2813 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:02:13.500611 kubelet[2813]: I0213 19:02:13.500554 2813 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:02:13.501112 kubelet[2813]: I0213 19:02:13.500746 2813 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-138","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 19:02:13.501891 kubelet[2813]: I0213 19:02:13.501362 2813 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:02:13.501891 kubelet[2813]: I0213 19:02:13.501390 2813 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 19:02:13.501891 kubelet[2813]: I0213 19:02:13.501610 2813 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:02:13.507631 kubelet[2813]: I0213 19:02:13.507575 2813 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 19:02:13.508221 kubelet[2813]: I0213 19:02:13.508020 2813 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:02:13.508221 kubelet[2813]: I0213 19:02:13.508095 2813 kubelet.go:352] "Adding apiserver pod source"
Feb 13 19:02:13.508221 kubelet[2813]: I0213 19:02:13.508117 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:02:13.512897 kubelet[2813]: W0213 19:02:13.511655 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-138&limit=500&resourceVersion=0": dial tcp 172.31.26.138:6443: connect: connection refused
Feb 13 19:02:13.512897 kubelet[2813]: E0213 19:02:13.511781 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-138&limit=500&resourceVersion=0\": dial tcp 172.31.26.138:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:13.516303 kubelet[2813]: W0213 19:02:13.516221 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.138:6443: connect: connection refused
Feb 13 19:02:13.516428 kubelet[2813]: E0213 19:02:13.516344 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.26.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.138:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:13.517243 kubelet[2813]: I0213 19:02:13.517193 2813 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 19:02:13.518146 kubelet[2813]: I0213 19:02:13.518104 2813 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:02:13.518249 kubelet[2813]: W0213 19:02:13.518235 2813 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 19:02:13.519991 kubelet[2813]: I0213 19:02:13.519936 2813 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:02:13.520132 kubelet[2813]: I0213 19:02:13.520003 2813 server.go:1287] "Started kubelet" Feb 13 19:02:13.522922 kubelet[2813]: I0213 19:02:13.522855 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:02:13.529027 kubelet[2813]: I0213 19:02:13.528953 2813 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:02:13.530866 kubelet[2813]: I0213 19:02:13.530823 2813 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:02:13.532698 kubelet[2813]: I0213 19:02:13.532612 2813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:02:13.533568 kubelet[2813]: I0213 19:02:13.533125 2813 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:02:13.533568 kubelet[2813]: I0213 19:02:13.533530 2813 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:02:13.533974 kubelet[2813]: E0213 19:02:13.533913 2813 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-26-138\" not found" Feb 13 19:02:13.539197 kubelet[2813]: I0213 19:02:13.539078 2813 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:02:13.539197 kubelet[2813]: I0213 19:02:13.539186 2813 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:02:13.540111 kubelet[2813]: I0213 19:02:13.540076 2813 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:02:13.542412 kubelet[2813]: E0213 19:02:13.542245 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-138?timeout=10s\": dial tcp 172.31.26.138:6443: connect: connection refused" interval="200ms" Feb 13 19:02:13.544162 kubelet[2813]: E0213 19:02:13.543069 2813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.138:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.138:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-138.1823d9c7dcb8f1df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-138,UID:ip-172-31-26-138,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-138,},FirstTimestamp:2025-02-13 19:02:13.519970783 +0000 UTC m=+1.430047880,LastTimestamp:2025-02-13 19:02:13.519970783 +0000 UTC m=+1.430047880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-138,}" Feb 13 19:02:13.544162 kubelet[2813]: I0213 19:02:13.543580 2813 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:02:13.544162 kubelet[2813]: I0213 19:02:13.543730 2813 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:02:13.548571 kubelet[2813]: E0213 19:02:13.548515 2813 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:02:13.549241 kubelet[2813]: I0213 19:02:13.549095 2813 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:02:13.557135 kubelet[2813]: I0213 19:02:13.556155 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:02:13.559584 kubelet[2813]: I0213 19:02:13.559529 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:02:13.559584 kubelet[2813]: I0213 19:02:13.559576 2813 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:02:13.559759 kubelet[2813]: I0213 19:02:13.559609 2813 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:02:13.559759 kubelet[2813]: I0213 19:02:13.559625 2813 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:02:13.559759 kubelet[2813]: E0213 19:02:13.559691 2813 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:02:13.572796 kubelet[2813]: W0213 19:02:13.572696 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.138:6443: connect: connection refused Feb 13 19:02:13.572796 kubelet[2813]: E0213 19:02:13.572790 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:13.573015 kubelet[2813]: W0213 19:02:13.572956 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.138:6443: connect: connection refused Feb 13 19:02:13.573098 kubelet[2813]: E0213 19:02:13.573013 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:13.596779 kubelet[2813]: I0213 19:02:13.596732 2813 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:02:13.596779 kubelet[2813]: I0213 19:02:13.596769 2813 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:02:13.596989 kubelet[2813]: I0213 19:02:13.596805 2813 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:02:13.599546 kubelet[2813]: I0213 19:02:13.599499 2813 policy_none.go:49] "None policy: Start" Feb 13 19:02:13.599546 kubelet[2813]: I0213 19:02:13.599541 2813 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:02:13.599698 kubelet[2813]: I0213 19:02:13.599565 2813 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:02:13.609812 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:02:13.625669 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Feb 13 19:02:13.633182 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:02:13.634600 kubelet[2813]: E0213 19:02:13.634358 2813 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-26-138\" not found" Feb 13 19:02:13.646719 kubelet[2813]: I0213 19:02:13.646650 2813 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:02:13.647082 kubelet[2813]: I0213 19:02:13.646940 2813 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:02:13.647082 kubelet[2813]: I0213 19:02:13.646973 2813 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:02:13.647553 kubelet[2813]: I0213 19:02:13.647399 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:02:13.649376 kubelet[2813]: E0213 19:02:13.649312 2813 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:02:13.650427 kubelet[2813]: E0213 19:02:13.649384 2813 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-138\" not found" Feb 13 19:02:13.678903 systemd[1]: Created slice kubepods-burstable-pod9a3fb1807e8f6fe91036e804ced0ccb8.slice - libcontainer container kubepods-burstable-pod9a3fb1807e8f6fe91036e804ced0ccb8.slice. Feb 13 19:02:13.690669 kubelet[2813]: E0213 19:02:13.690630 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-138\" not found" node="ip-172-31-26-138" Feb 13 19:02:13.695784 systemd[1]: Created slice kubepods-burstable-pod8b9669a04341a14bb5ace281f648be3e.slice - libcontainer container kubepods-burstable-pod8b9669a04341a14bb5ace281f648be3e.slice. Feb 13 19:02:13.701428 kubelet[2813]: E0213 19:02:13.701378 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-138\" not found" node="ip-172-31-26-138" Feb 13 19:02:13.707305 systemd[1]: Created slice kubepods-burstable-pode9d8da5231b4b0fe12ade31472dfd822.slice - libcontainer container kubepods-burstable-pode9d8da5231b4b0fe12ade31472dfd822.slice. 
Feb 13 19:02:13.711382 kubelet[2813]: E0213 19:02:13.711336 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-138\" not found" node="ip-172-31-26-138" Feb 13 19:02:13.741001 kubelet[2813]: I0213 19:02:13.740556 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a3fb1807e8f6fe91036e804ced0ccb8-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-138\" (UID: \"9a3fb1807e8f6fe91036e804ced0ccb8\") " pod="kube-system/kube-scheduler-ip-172-31-26-138" Feb 13 19:02:13.741001 kubelet[2813]: I0213 19:02:13.740614 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b9669a04341a14bb5ace281f648be3e-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-138\" (UID: \"8b9669a04341a14bb5ace281f648be3e\") " pod="kube-system/kube-apiserver-ip-172-31-26-138" Feb 13 19:02:13.741001 kubelet[2813]: I0213 19:02:13.740653 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9d8da5231b4b0fe12ade31472dfd822-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-138\" (UID: \"e9d8da5231b4b0fe12ade31472dfd822\") " pod="kube-system/kube-controller-manager-ip-172-31-26-138" Feb 13 19:02:13.741001 kubelet[2813]: I0213 19:02:13.740691 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9d8da5231b4b0fe12ade31472dfd822-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-138\" (UID: \"e9d8da5231b4b0fe12ade31472dfd822\") " pod="kube-system/kube-controller-manager-ip-172-31-26-138" Feb 13 19:02:13.741001 kubelet[2813]: I0213 19:02:13.740731 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b9669a04341a14bb5ace281f648be3e-ca-certs\") pod \"kube-apiserver-ip-172-31-26-138\" (UID: \"8b9669a04341a14bb5ace281f648be3e\") " pod="kube-system/kube-apiserver-ip-172-31-26-138" Feb 13 19:02:13.741384 kubelet[2813]: I0213 19:02:13.740765 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b9669a04341a14bb5ace281f648be3e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-138\" (UID: \"8b9669a04341a14bb5ace281f648be3e\") " pod="kube-system/kube-apiserver-ip-172-31-26-138" Feb 13 19:02:13.741384 kubelet[2813]: I0213 19:02:13.740799 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9d8da5231b4b0fe12ade31472dfd822-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-138\" (UID: \"e9d8da5231b4b0fe12ade31472dfd822\") " pod="kube-system/kube-controller-manager-ip-172-31-26-138" Feb 13 19:02:13.741384 kubelet[2813]: I0213 19:02:13.740837 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9d8da5231b4b0fe12ade31472dfd822-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-138\" (UID: \"e9d8da5231b4b0fe12ade31472dfd822\") " pod="kube-system/kube-controller-manager-ip-172-31-26-138" Feb 13 19:02:13.741384 kubelet[2813]: I0213 
19:02:13.740875 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9d8da5231b4b0fe12ade31472dfd822-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-138\" (UID: \"e9d8da5231b4b0fe12ade31472dfd822\") " pod="kube-system/kube-controller-manager-ip-172-31-26-138" Feb 13 19:02:13.743049 kubelet[2813]: E0213 19:02:13.742980 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-138?timeout=10s\": dial tcp 172.31.26.138:6443: connect: connection refused" interval="400ms" Feb 13 19:02:13.751064 kubelet[2813]: I0213 19:02:13.751002 2813 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-26-138" Feb 13 19:02:13.751619 kubelet[2813]: E0213 19:02:13.751564 2813 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.26.138:6443/api/v1/nodes\": dial tcp 172.31.26.138:6443: connect: connection refused" node="ip-172-31-26-138" Feb 13 19:02:13.954491 kubelet[2813]: I0213 19:02:13.953932 2813 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-26-138" Feb 13 19:02:13.954491 kubelet[2813]: E0213 19:02:13.954393 2813 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.26.138:6443/api/v1/nodes\": dial tcp 172.31.26.138:6443: connect: connection refused" node="ip-172-31-26-138" Feb 13 19:02:13.993075 containerd[1946]: time="2025-02-13T19:02:13.992846361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-138,Uid:9a3fb1807e8f6fe91036e804ced0ccb8,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:14.002890 containerd[1946]: time="2025-02-13T19:02:14.002826929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-138,Uid:8b9669a04341a14bb5ace281f648be3e,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:14.013232 containerd[1946]: time="2025-02-13T19:02:14.012815705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-138,Uid:e9d8da5231b4b0fe12ade31472dfd822,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:14.144131 kubelet[2813]: E0213 19:02:14.144076 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-138?timeout=10s\": dial tcp 172.31.26.138:6443: connect: connection refused" interval="800ms" Feb 13 19:02:14.356944 kubelet[2813]: I0213 19:02:14.356804 2813 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-26-138" Feb 13 19:02:14.357347 kubelet[2813]: E0213 19:02:14.357297 2813 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.26.138:6443/api/v1/nodes\": dial tcp 172.31.26.138:6443: connect: connection refused" node="ip-172-31-26-138" Feb 13 19:02:14.415431 kubelet[2813]: W0213 19:02:14.415317 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.26.138:6443: connect: connection refused Feb 13 19:02:14.415564 kubelet[2813]: E0213 19:02:14.415448 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://172.31.26.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:14.500086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276565287.mount: Deactivated successfully. Feb 13 19:02:14.507113 containerd[1946]: time="2025-02-13T19:02:14.506697920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:14.508838 containerd[1946]: time="2025-02-13T19:02:14.508736240Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:02:14.511342 containerd[1946]: time="2025-02-13T19:02:14.511280492Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:14.513916 containerd[1946]: time="2025-02-13T19:02:14.513854660Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:14.518064 containerd[1946]: time="2025-02-13T19:02:14.517655600Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:14.518064 containerd[1946]: time="2025-02-13T19:02:14.517726400Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:02:14.518256 containerd[1946]: time="2025-02-13T19:02:14.518090228Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:02:14.522343 containerd[1946]: time="2025-02-13T19:02:14.522277292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.327035ms" Feb 13 19:02:14.524345 containerd[1946]: time="2025-02-13T19:02:14.524294600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:02:14.528820 containerd[1946]: time="2025-02-13T19:02:14.528391220Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 515.453235ms" Feb 13 19:02:14.543474 containerd[1946]: time="2025-02-13T19:02:14.543396920Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", 
size \"268403\" in 540.457839ms" Feb 13 19:02:14.666965 kubelet[2813]: W0213 19:02:14.666110 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-138&limit=500&resourceVersion=0": dial tcp 172.31.26.138:6443: connect: connection refused Feb 13 19:02:14.666965 kubelet[2813]: E0213 19:02:14.666812 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.26.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-138&limit=500&resourceVersion=0\": dial tcp 172.31.26.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:14.718066 kubelet[2813]: W0213 19:02:14.716925 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.138:6443: connect: connection refused Feb 13 19:02:14.718066 kubelet[2813]: E0213 19:02:14.717023 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.26.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:14.759751 containerd[1946]: time="2025-02-13T19:02:14.759200289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:14.759751 containerd[1946]: time="2025-02-13T19:02:14.759402513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:14.759751 containerd[1946]: time="2025-02-13T19:02:14.759440217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:14.759751 containerd[1946]: time="2025-02-13T19:02:14.759591921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:14.760659 containerd[1946]: time="2025-02-13T19:02:14.760307121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:14.760659 containerd[1946]: time="2025-02-13T19:02:14.760388085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:14.760659 containerd[1946]: time="2025-02-13T19:02:14.760412385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:14.760659 containerd[1946]: time="2025-02-13T19:02:14.760556517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:14.762438 containerd[1946]: time="2025-02-13T19:02:14.762221157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:14.762438 containerd[1946]: time="2025-02-13T19:02:14.762336177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:14.762817 containerd[1946]: time="2025-02-13T19:02:14.762710745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:14.763420 containerd[1946]: time="2025-02-13T19:02:14.763330041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:14.766869 kubelet[2813]: W0213 19:02:14.766706 2813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.138:6443: connect: connection refused Feb 13 19:02:14.766869 kubelet[2813]: E0213 19:02:14.766808 2813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.26.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.138:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:14.812346 systemd[1]: Started cri-containerd-666eb92eaa4fd03ca144a3f4595c0ce5d8ea17b4e72f90796965b099435d8619.scope - libcontainer container 666eb92eaa4fd03ca144a3f4595c0ce5d8ea17b4e72f90796965b099435d8619. Feb 13 19:02:14.823988 systemd[1]: Started cri-containerd-76cf6e767c32b1a4afdb63f8f756a175a2ea5084e85a4cea5e87204d027f3664.scope - libcontainer container 76cf6e767c32b1a4afdb63f8f756a175a2ea5084e85a4cea5e87204d027f3664. Feb 13 19:02:14.836078 systemd[1]: Started cri-containerd-ea780ec821ac59eeff41e606ce1d59e519fbd678c0eed947a7826ed8d67ecfdd.scope - libcontainer container ea780ec821ac59eeff41e606ce1d59e519fbd678c0eed947a7826ed8d67ecfdd. 
Feb 13 19:02:14.937336 containerd[1946]: time="2025-02-13T19:02:14.936691966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-138,Uid:e9d8da5231b4b0fe12ade31472dfd822,Namespace:kube-system,Attempt:0,} returns sandbox id \"76cf6e767c32b1a4afdb63f8f756a175a2ea5084e85a4cea5e87204d027f3664\"" Feb 13 19:02:14.945567 kubelet[2813]: E0213 19:02:14.945240 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-138?timeout=10s\": dial tcp 172.31.26.138:6443: connect: connection refused" interval="1.6s" Feb 13 19:02:14.952370 containerd[1946]: time="2025-02-13T19:02:14.952203238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-138,Uid:8b9669a04341a14bb5ace281f648be3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"666eb92eaa4fd03ca144a3f4595c0ce5d8ea17b4e72f90796965b099435d8619\"" Feb 13 19:02:14.953648 containerd[1946]: time="2025-02-13T19:02:14.953176066Z" level=info msg="CreateContainer within sandbox \"76cf6e767c32b1a4afdb63f8f756a175a2ea5084e85a4cea5e87204d027f3664\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:02:14.974529 containerd[1946]: time="2025-02-13T19:02:14.974363062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-138,Uid:9a3fb1807e8f6fe91036e804ced0ccb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea780ec821ac59eeff41e606ce1d59e519fbd678c0eed947a7826ed8d67ecfdd\"" Feb 13 19:02:14.977415 containerd[1946]: time="2025-02-13T19:02:14.977299090Z" level=info msg="CreateContainer within sandbox \"666eb92eaa4fd03ca144a3f4595c0ce5d8ea17b4e72f90796965b099435d8619\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:02:14.981718 containerd[1946]: time="2025-02-13T19:02:14.981630154Z" level=info msg="CreateContainer within sandbox \"ea780ec821ac59eeff41e606ce1d59e519fbd678c0eed947a7826ed8d67ecfdd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:02:14.988135 containerd[1946]: time="2025-02-13T19:02:14.987391078Z" level=info msg="CreateContainer within sandbox \"76cf6e767c32b1a4afdb63f8f756a175a2ea5084e85a4cea5e87204d027f3664\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eaad26c32586ec665d6578768540ffa131a3e9afaa77adde1111c3ed20edefb3\"" Feb 13 19:02:14.989142 containerd[1946]: time="2025-02-13T19:02:14.988967902Z" level=info msg="StartContainer for \"eaad26c32586ec665d6578768540ffa131a3e9afaa77adde1111c3ed20edefb3\"" Feb 13 19:02:15.023933 containerd[1946]: time="2025-02-13T19:02:15.023862774Z" level=info msg="CreateContainer within sandbox \"ea780ec821ac59eeff41e606ce1d59e519fbd678c0eed947a7826ed8d67ecfdd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4194e08aa0a0ea72ec3bb66d05945673861c520033cbcfbbef007319393e2359\"" Feb 13 19:02:15.026067 containerd[1946]: time="2025-02-13T19:02:15.025373538Z" level=info msg="StartContainer for \"4194e08aa0a0ea72ec3bb66d05945673861c520033cbcfbbef007319393e2359\"" Feb 13 19:02:15.031663 containerd[1946]: time="2025-02-13T19:02:15.031417302Z" level=info msg="CreateContainer within sandbox \"666eb92eaa4fd03ca144a3f4595c0ce5d8ea17b4e72f90796965b099435d8619\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d45829d1f731357d37ea029b437780ad4e1e63313c9a8dc04899c2bbae77ad41\"" Feb 13 19:02:15.036161 
containerd[1946]: time="2025-02-13T19:02:15.036086418Z" level=info msg="StartContainer for \"d45829d1f731357d37ea029b437780ad4e1e63313c9a8dc04899c2bbae77ad41\"" Feb 13 19:02:15.045358 systemd[1]: Started cri-containerd-eaad26c32586ec665d6578768540ffa131a3e9afaa77adde1111c3ed20edefb3.scope - libcontainer container eaad26c32586ec665d6578768540ffa131a3e9afaa77adde1111c3ed20edefb3. Feb 13 19:02:15.107383 systemd[1]: Started cri-containerd-d45829d1f731357d37ea029b437780ad4e1e63313c9a8dc04899c2bbae77ad41.scope - libcontainer container d45829d1f731357d37ea029b437780ad4e1e63313c9a8dc04899c2bbae77ad41. Feb 13 19:02:15.133348 systemd[1]: Started cri-containerd-4194e08aa0a0ea72ec3bb66d05945673861c520033cbcfbbef007319393e2359.scope - libcontainer container 4194e08aa0a0ea72ec3bb66d05945673861c520033cbcfbbef007319393e2359. Feb 13 19:02:15.168069 kubelet[2813]: I0213 19:02:15.166663 2813 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-26-138" Feb 13 19:02:15.168069 kubelet[2813]: E0213 19:02:15.167159 2813 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.26.138:6443/api/v1/nodes\": dial tcp 172.31.26.138:6443: connect: connection refused" node="ip-172-31-26-138" Feb 13 19:02:15.173149 containerd[1946]: time="2025-02-13T19:02:15.173077303Z" level=info msg="StartContainer for \"eaad26c32586ec665d6578768540ffa131a3e9afaa77adde1111c3ed20edefb3\" returns successfully" Feb 13 19:02:15.240641 containerd[1946]: time="2025-02-13T19:02:15.240298831Z" level=info msg="StartContainer for \"d45829d1f731357d37ea029b437780ad4e1e63313c9a8dc04899c2bbae77ad41\" returns successfully" Feb 13 19:02:15.303424 containerd[1946]: time="2025-02-13T19:02:15.303350420Z" level=info msg="StartContainer for \"4194e08aa0a0ea72ec3bb66d05945673861c520033cbcfbbef007319393e2359\" returns successfully" Feb 13 19:02:15.608573 kubelet[2813]: E0213 19:02:15.608413 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-138\" not found" node="ip-172-31-26-138" Feb 13 19:02:15.627289 kubelet[2813]: E0213 19:02:15.627229 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-138\" not found" node="ip-172-31-26-138" Feb 13 19:02:15.635312 kubelet[2813]: E0213 19:02:15.635060 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-138\" not found" node="ip-172-31-26-138" Feb 13 19:02:16.637594 kubelet[2813]: E0213 19:02:16.637537 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-138\" not found" node="ip-172-31-26-138" Feb 13 19:02:16.638445 kubelet[2813]: E0213 19:02:16.638299 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-138\" not found" node="ip-172-31-26-138" Feb 13 19:02:16.770645 kubelet[2813]: I0213 19:02:16.770594 2813 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-26-138" Feb 13 19:02:19.056102 kubelet[2813]: E0213 19:02:19.056052 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-138\" not found" node="ip-172-31-26-138" Feb 13 19:02:19.516150 kubelet[2813]: I0213 19:02:19.515393 2813 apiserver.go:52] "Watching apiserver" Feb 13 19:02:19.639378 kubelet[2813]: I0213 19:02:19.639322 2813 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:02:19.667280 kubelet[2813]: E0213 19:02:19.667205 2813 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-138\" not found" node="ip-172-31-26-138" Feb 13 19:02:19.680986 kubelet[2813]: E0213 19:02:19.680939 2813 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-138\" not found" node="ip-172-31-26-138" Feb 13 19:02:19.796653 kubelet[2813]: I0213 19:02:19.796489 2813 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-26-138" Feb 13 19:02:19.796653 kubelet[2813]: E0213 19:02:19.796550 2813 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ip-172-31-26-138\": node \"ip-172-31-26-138\" not found" Feb 13 19:02:19.908239 kubelet[2813]: E0213 19:02:19.908076 2813 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-26-138.1823d9c7dcb8f1df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-138,UID:ip-172-31-26-138,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-138,},FirstTimestamp:2025-02-13 19:02:13.519970783 +0000 UTC m=+1.430047880,LastTimestamp:2025-02-13 19:02:13.519970783 +0000 UTC m=+1.430047880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-138,}" Feb 13 19:02:19.934810 kubelet[2813]: I0213 19:02:19.934740 2813 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-138" Feb 13 19:02:19.990504 kubelet[2813]: E0213 19:02:19.990340 2813 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-26-138.1823d9c7de6c27df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-138,UID:ip-172-31-26-138,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-26-138,},FirstTimestamp:2025-02-13 19:02:13.548492767 +0000 UTC m=+1.458569852,LastTimestamp:2025-02-13 19:02:13.548492767 +0000 UTC m=+1.458569852,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-138,}" Feb 13 19:02:20.024556 kubelet[2813]: E0213 19:02:20.024478 2813 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-138\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-138" Feb 13 19:02:20.024556 kubelet[2813]: I0213 19:02:20.024537 2813 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-138" Feb 13 19:02:20.047594 kubelet[2813]: E0213 19:02:20.047433 2813 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-138\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-138" Feb 13 19:02:20.047594 kubelet[2813]: I0213 19:02:20.047486 2813 
kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-138" Feb 13 19:02:20.076165 kubelet[2813]: E0213 19:02:20.075989 2813 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-26-138.1823d9c7e12af067 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-138,UID:ip-172-31-26-138,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-26-138 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-26-138,},FirstTimestamp:2025-02-13 19:02:13.594550375 +0000 UTC m=+1.504627448,LastTimestamp:2025-02-13 19:02:13.594550375 +0000 UTC m=+1.504627448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-138,}" Feb 13 19:02:20.081483 kubelet[2813]: E0213 19:02:20.081421 2813 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-138\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-26-138" Feb 13 19:02:22.062965 systemd[1]: Reloading requested from client PID 3092 ('systemctl') (unit session-7.scope)... Feb 13 19:02:22.063530 systemd[1]: Reloading... Feb 13 19:02:22.251094 zram_generator::config[3138]: No configuration found. Feb 13 19:02:22.463265 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:02:22.654409 systemd[1]: Reloading finished in 590 ms. Feb 13 19:02:22.723652 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:22.738775 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:02:22.739243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:22.739323 systemd[1]: kubelet.service: Consumed 2.128s CPU time, 125.8M memory peak, 0B memory swap peak. Feb 13 19:02:22.747621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:23.063077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:23.078684 (kubelet)[3192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:02:23.204179 kubelet[3192]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:02:23.204179 kubelet[3192]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:02:23.204179 kubelet[3192]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:02:23.204737 kubelet[3192]: I0213 19:02:23.204366 3192 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:02:23.224094 kubelet[3192]: I0213 19:02:23.223475 3192 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:02:23.225168 kubelet[3192]: I0213 19:02:23.225112 3192 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:02:23.225885 kubelet[3192]: I0213 19:02:23.225846 3192 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:02:23.230499 kubelet[3192]: I0213 19:02:23.230459 3192 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:02:23.235322 kubelet[3192]: I0213 19:02:23.235281 3192 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:02:23.243936 kubelet[3192]: E0213 19:02:23.243774 3192 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:02:23.243936 kubelet[3192]: I0213 19:02:23.243838 3192 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:02:23.253500 kubelet[3192]: I0213 19:02:23.253184 3192 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:02:23.255755 kubelet[3192]: I0213 19:02:23.255381 3192 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:02:23.255755 kubelet[3192]: I0213 19:02:23.255444 3192 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-138","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:02:23.256924 kubelet[3192]: I0213 19:02:23.256867 3192 topology_manager.go:138] 
"Creating topology manager with none policy" Feb 13 19:02:23.256924 kubelet[3192]: I0213 19:02:23.256914 3192 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:02:23.257127 kubelet[3192]: I0213 19:02:23.257018 3192 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:02:23.260344 kubelet[3192]: I0213 19:02:23.257318 3192 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:02:23.260344 kubelet[3192]: I0213 19:02:23.257351 3192 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:02:23.260344 kubelet[3192]: I0213 19:02:23.257388 3192 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:02:23.260344 kubelet[3192]: I0213 19:02:23.257409 3192 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:02:23.274159 kubelet[3192]: I0213 19:02:23.272709 3192 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:02:23.277348 kubelet[3192]: I0213 19:02:23.276152 3192 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:02:23.283095 kubelet[3192]: I0213 19:02:23.282102 3192 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:02:23.283095 kubelet[3192]: I0213 19:02:23.282752 3192 server.go:1287] "Started kubelet" Feb 13 19:02:23.291329 sudo[3207]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:02:23.292006 sudo[3207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:02:23.294944 kubelet[3192]: I0213 19:02:23.293588 3192 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:02:23.303345 kubelet[3192]: I0213 19:02:23.302972 3192 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:02:23.331344 kubelet[3192]: I0213 19:02:23.331208 3192 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:02:23.334677 kubelet[3192]: I0213 19:02:23.318808 3192 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:02:23.339681 kubelet[3192]: I0213 19:02:23.306964 3192 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:02:23.341284 kubelet[3192]: I0213 19:02:23.318828 3192 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:02:23.342494 kubelet[3192]: I0213 19:02:23.342341 3192 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:02:23.344582 kubelet[3192]: I0213 19:02:23.344054 3192 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:02:23.344582 kubelet[3192]: I0213 19:02:23.344234 3192 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:02:23.347389 kubelet[3192]: E0213 19:02:23.318889 3192 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-26-138\" not found" Feb 13 19:02:23.354176 kubelet[3192]: I0213 19:02:23.354027 3192 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:02:23.371953 kubelet[3192]: I0213 19:02:23.371663 3192 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:02:23.397262 kubelet[3192]: I0213 19:02:23.396621 3192 
server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:02:23.453518 kubelet[3192]: E0213 19:02:23.453386 3192 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-26-138\" not found" Feb 13 19:02:23.477968 kubelet[3192]: I0213 19:02:23.477721 3192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:02:23.481090 kubelet[3192]: I0213 19:02:23.480817 3192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:02:23.481090 kubelet[3192]: I0213 19:02:23.480861 3192 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:02:23.481090 kubelet[3192]: I0213 19:02:23.480893 3192 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:02:23.481090 kubelet[3192]: I0213 19:02:23.480907 3192 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:02:23.481090 kubelet[3192]: E0213 19:02:23.480982 3192 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:02:23.583189 kubelet[3192]: E0213 19:02:23.581993 3192 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:02:23.629232 kubelet[3192]: I0213 19:02:23.629198 3192 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:02:23.629618 kubelet[3192]: I0213 19:02:23.629593 3192 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:02:23.629740 kubelet[3192]: I0213 19:02:23.629721 3192 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:02:23.630511 kubelet[3192]: I0213 19:02:23.630100 3192 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:02:23.630511 kubelet[3192]: I0213 19:02:23.630129 3192 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:02:23.630511 kubelet[3192]: I0213 19:02:23.630166 3192 policy_none.go:49] "None policy: Start" Feb 13 19:02:23.630511 kubelet[3192]: I0213 19:02:23.630185 3192 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:02:23.630511 kubelet[3192]: I0213 19:02:23.630206 3192 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:02:23.630511 kubelet[3192]: I0213 19:02:23.630400 3192 state_mem.go:75] "Updated machine memory state" Feb 13 19:02:23.639089 kubelet[3192]: I0213 19:02:23.638962 3192 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:02:23.642002 kubelet[3192]: I0213 19:02:23.641363 3192 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:02:23.642002 kubelet[3192]: I0213 19:02:23.641408 3192 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:02:23.642002 kubelet[3192]: I0213 19:02:23.641847 3192 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:02:23.648463 kubelet[3192]: E0213 19:02:23.648334 3192 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 19:02:23.758625 kubelet[3192]: I0213 19:02:23.757782 3192 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-26-138" Feb 13 19:02:23.773634 kubelet[3192]: I0213 19:02:23.772999 3192 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-26-138" Feb 13 19:02:23.773634 kubelet[3192]: I0213 19:02:23.773129 3192 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-26-138" Feb 13 19:02:23.783106 kubelet[3192]: I0213 19:02:23.783067 3192 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-138" Feb 13 19:02:23.786480 kubelet[3192]: I0213 19:02:23.783503 3192 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-138" Feb 13 19:02:23.786973 kubelet[3192]: I0213 19:02:23.783704 3192 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-138" Feb 13 19:02:23.869012 kubelet[3192]: I0213 19:02:23.868293 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a3fb1807e8f6fe91036e804ced0ccb8-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-138\" (UID: \"9a3fb1807e8f6fe91036e804ced0ccb8\") " pod="kube-system/kube-scheduler-ip-172-31-26-138" Feb 13 19:02:23.869012 kubelet[3192]: I0213 19:02:23.868376 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b9669a04341a14bb5ace281f648be3e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-138\" (UID: \"8b9669a04341a14bb5ace281f648be3e\") " pod="kube-system/kube-apiserver-ip-172-31-26-138" Feb 13 19:02:23.869012 kubelet[3192]: I0213 19:02:23.868418 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9d8da5231b4b0fe12ade31472dfd822-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-138\" (UID: \"e9d8da5231b4b0fe12ade31472dfd822\") " pod="kube-system/kube-controller-manager-ip-172-31-26-138" Feb 13 19:02:23.869012 kubelet[3192]: I0213 19:02:23.868459 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b9669a04341a14bb5ace281f648be3e-ca-certs\") pod \"kube-apiserver-ip-172-31-26-138\" (UID: \"8b9669a04341a14bb5ace281f648be3e\") " pod="kube-system/kube-apiserver-ip-172-31-26-138" Feb 13 19:02:23.869012 kubelet[3192]: I0213 19:02:23.868498 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b9669a04341a14bb5ace281f648be3e-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-138\" (UID: \"8b9669a04341a14bb5ace281f648be3e\") " pod="kube-system/kube-apiserver-ip-172-31-26-138" Feb 13 19:02:23.870308 kubelet[3192]: I0213 19:02:23.868537 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9d8da5231b4b0fe12ade31472dfd822-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-138\" (UID: \"e9d8da5231b4b0fe12ade31472dfd822\") " pod="kube-system/kube-controller-manager-ip-172-31-26-138" Feb 13 19:02:23.870308 kubelet[3192]: I0213 19:02:23.868573 3192 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9d8da5231b4b0fe12ade31472dfd822-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-138\" (UID: \"e9d8da5231b4b0fe12ade31472dfd822\") " pod="kube-system/kube-controller-manager-ip-172-31-26-138" Feb 13 19:02:23.870308 kubelet[3192]: I0213 19:02:23.868612 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9d8da5231b4b0fe12ade31472dfd822-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-138\" (UID: \"e9d8da5231b4b0fe12ade31472dfd822\") " pod="kube-system/kube-controller-manager-ip-172-31-26-138" Feb 13 19:02:23.870308 kubelet[3192]: I0213 19:02:23.868649 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9d8da5231b4b0fe12ade31472dfd822-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-138\" (UID: \"e9d8da5231b4b0fe12ade31472dfd822\") " pod="kube-system/kube-controller-manager-ip-172-31-26-138" Feb 13 19:02:24.243551 sudo[3207]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:24.258976 kubelet[3192]: I0213 19:02:24.258885 3192 apiserver.go:52] "Watching apiserver" Feb 13 19:02:24.342470 kubelet[3192]: I0213 19:02:24.342356 3192 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:02:24.348065 kubelet[3192]: I0213 19:02:24.346987 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-138" podStartSLOduration=1.346964501 podStartE2EDuration="1.346964501s" podCreationTimestamp="2025-02-13 19:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:24.345995117 +0000 UTC m=+1.259739572" watchObservedRunningTime="2025-02-13 19:02:24.346964501 +0000 UTC m=+1.260708956" Feb 13 19:02:24.377518 kubelet[3192]: I0213 19:02:24.377320 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-138" podStartSLOduration=1.377295257 podStartE2EDuration="1.377295257s" podCreationTimestamp="2025-02-13 19:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:24.362363573 +0000 UTC m=+1.276108028" watchObservedRunningTime="2025-02-13 19:02:24.377295257 +0000 UTC m=+1.291039700" Feb 13 19:02:24.403058 kubelet[3192]: I0213 19:02:24.402394 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-138" podStartSLOduration=1.402332477 podStartE2EDuration="1.402332477s" podCreationTimestamp="2025-02-13 19:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:24.378485549 +0000 UTC m=+1.292230028" watchObservedRunningTime="2025-02-13 19:02:24.402332477 +0000 UTC m=+1.316076920" Feb 13 19:02:26.855582 sudo[2261]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:26.878001 sshd[2260]: Connection closed by 147.75.109.163 port 46294 Feb 13 19:02:26.878874 sshd-session[2253]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:26.884518 systemd[1]: 
sshd@6-172.31.26.138:22-147.75.109.163:46294.service: Deactivated successfully. Feb 13 19:02:26.888587 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:02:26.889706 systemd[1]: session-7.scope: Consumed 10.058s CPU time, 153.8M memory peak, 0B memory swap peak. Feb 13 19:02:26.893817 systemd-logind[1924]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:02:26.896081 systemd-logind[1924]: Removed session 7. Feb 13 19:02:27.788599 kubelet[3192]: I0213 19:02:27.788506 3192 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:02:27.790724 containerd[1946]: time="2025-02-13T19:02:27.790201378Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:02:27.792878 kubelet[3192]: I0213 19:02:27.791026 3192 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:02:27.949642 update_engine[1925]: I20250213 19:02:27.948658 1925 update_attempter.cc:509] Updating boot flags... Feb 13 19:02:28.028192 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3277) Feb 13 19:02:28.311198 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3277) Feb 13 19:02:28.717831 kubelet[3192]: W0213 19:02:28.717677 3192 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-26-138" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-138' and this object Feb 13 19:02:28.717831 kubelet[3192]: E0213 19:02:28.717750 3192 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-26-138\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-138' and this object" logger="UnhandledError" Feb 13 19:02:28.721304 kubelet[3192]: I0213 19:02:28.717839 3192 status_manager.go:890] "Failed to get status for pod" podUID="331ab2d9-7059-4e82-88e1-a3b678c15b12" pod="kube-system/kube-proxy-tx2hd" err="pods \"kube-proxy-tx2hd\" is forbidden: User \"system:node:ip-172-31-26-138\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-138' and this object" Feb 13 19:02:28.721304 kubelet[3192]: W0213 19:02:28.720365 3192 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-26-138" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-138' and this object Feb 13 19:02:28.721304 kubelet[3192]: E0213 19:02:28.720420 3192 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-26-138\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-138' and this object" logger="UnhandledError" Feb 13 19:02:28.723638 systemd[1]: Created slice 
kubepods-besteffort-pod331ab2d9_7059_4e82_88e1_a3b678c15b12.slice - libcontainer container kubepods-besteffort-pod331ab2d9_7059_4e82_88e1_a3b678c15b12.slice. Feb 13 19:02:28.766233 systemd[1]: Created slice kubepods-burstable-podd2e242cf_07e3_4cfc_8cfc_9154b8e9bf95.slice - libcontainer container kubepods-burstable-podd2e242cf_07e3_4cfc_8cfc_9154b8e9bf95.slice. Feb 13 19:02:28.806071 kubelet[3192]: I0213 19:02:28.804394 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/331ab2d9-7059-4e82-88e1-a3b678c15b12-kube-proxy\") pod \"kube-proxy-tx2hd\" (UID: \"331ab2d9-7059-4e82-88e1-a3b678c15b12\") " pod="kube-system/kube-proxy-tx2hd" Feb 13 19:02:28.806972 kubelet[3192]: I0213 19:02:28.806755 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cni-path\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.806972 kubelet[3192]: I0213 19:02:28.806823 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-clustermesh-secrets\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.806972 kubelet[3192]: I0213 19:02:28.806861 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-hubble-tls\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.806972 kubelet[3192]: I0213 19:02:28.806898 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/331ab2d9-7059-4e82-88e1-a3b678c15b12-xtables-lock\") pod \"kube-proxy-tx2hd\" (UID: \"331ab2d9-7059-4e82-88e1-a3b678c15b12\") " pod="kube-system/kube-proxy-tx2hd" Feb 13 19:02:28.806972 kubelet[3192]: I0213 19:02:28.806935 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-etc-cni-netd\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.809255 kubelet[3192]: I0213 19:02:28.806986 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-lib-modules\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.809255 kubelet[3192]: I0213 19:02:28.807086 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-host-proc-sys-kernel\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.809255 kubelet[3192]: I0213 19:02:28.807176 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-host-proc-sys-net\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.809255 kubelet[3192]: I0213 19:02:28.807217 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwtn\" (UniqueName: \"kubernetes.io/projected/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-kube-api-access-gfwtn\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.809255 kubelet[3192]: I0213 19:02:28.807310 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z25nm\" (UniqueName: \"kubernetes.io/projected/331ab2d9-7059-4e82-88e1-a3b678c15b12-kube-api-access-z25nm\") pod \"kube-proxy-tx2hd\" (UID: \"331ab2d9-7059-4e82-88e1-a3b678c15b12\") " pod="kube-system/kube-proxy-tx2hd" Feb 13 19:02:28.810244 kubelet[3192]: I0213 19:02:28.807364 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-hostproc\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.810244 kubelet[3192]: I0213 19:02:28.807405 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cilium-cgroup\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.810244 kubelet[3192]: I0213 19:02:28.807446 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/331ab2d9-7059-4e82-88e1-a3b678c15b12-lib-modules\") pod \"kube-proxy-tx2hd\" (UID: \"331ab2d9-7059-4e82-88e1-a3b678c15b12\") " pod="kube-system/kube-proxy-tx2hd" Feb 13 19:02:28.810244 kubelet[3192]: I0213 19:02:28.807479 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-xtables-lock\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.810244 kubelet[3192]: I0213 19:02:28.807539 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cilium-config-path\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.810244 kubelet[3192]: I0213 19:02:28.807580 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cilium-run\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.810618 kubelet[3192]: I0213 19:02:28.807620 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-bpf-maps\") pod \"cilium-xbfs9\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " 
pod="kube-system/cilium-xbfs9" Feb 13 19:02:28.830073 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3277) Feb 13 19:02:29.029146 systemd[1]: Created slice kubepods-besteffort-pod3e4a1f35_e21f_44d5_b89a_aa4d6c7db800.slice - libcontainer container kubepods-besteffort-pod3e4a1f35_e21f_44d5_b89a_aa4d6c7db800.slice. Feb 13 19:02:29.110906 kubelet[3192]: I0213 19:02:29.110281 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlg6t\" (UniqueName: \"kubernetes.io/projected/3e4a1f35-e21f-44d5-b89a-aa4d6c7db800-kube-api-access-mlg6t\") pod \"cilium-operator-6c4d7847fc-js9c9\" (UID: \"3e4a1f35-e21f-44d5-b89a-aa4d6c7db800\") " pod="kube-system/cilium-operator-6c4d7847fc-js9c9" Feb 13 19:02:29.110906 kubelet[3192]: I0213 19:02:29.110387 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e4a1f35-e21f-44d5-b89a-aa4d6c7db800-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-js9c9\" (UID: \"3e4a1f35-e21f-44d5-b89a-aa4d6c7db800\") " pod="kube-system/cilium-operator-6c4d7847fc-js9c9" Feb 13 19:02:29.644866 containerd[1946]: time="2025-02-13T19:02:29.644799419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-js9c9,Uid:3e4a1f35-e21f-44d5-b89a-aa4d6c7db800,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:29.692427 containerd[1946]: time="2025-02-13T19:02:29.691567379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xbfs9,Uid:d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:29.694981 containerd[1946]: time="2025-02-13T19:02:29.694835015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:29.694981 containerd[1946]: time="2025-02-13T19:02:29.694941467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:29.696164 containerd[1946]: time="2025-02-13T19:02:29.696068279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:29.696547 containerd[1946]: time="2025-02-13T19:02:29.696455231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:29.731471 systemd[1]: Started cri-containerd-6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468.scope - libcontainer container 6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468. Feb 13 19:02:29.760220 containerd[1946]: time="2025-02-13T19:02:29.759615371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:29.760220 containerd[1946]: time="2025-02-13T19:02:29.759735419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:29.760220 containerd[1946]: time="2025-02-13T19:02:29.759771695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:29.760220 containerd[1946]: time="2025-02-13T19:02:29.759908915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:29.806400 systemd[1]: Started cri-containerd-1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c.scope - libcontainer container 1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c. Feb 13 19:02:29.820748 containerd[1946]: time="2025-02-13T19:02:29.820671504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-js9c9,Uid:3e4a1f35-e21f-44d5-b89a-aa4d6c7db800,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\"" Feb 13 19:02:29.827167 containerd[1946]: time="2025-02-13T19:02:29.826295244Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:02:29.865188 containerd[1946]: time="2025-02-13T19:02:29.865108656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xbfs9,Uid:d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\"" Feb 13 19:02:29.917299 kubelet[3192]: E0213 19:02:29.917085 3192 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:02:29.917299 kubelet[3192]: E0213 19:02:29.917217 3192 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/331ab2d9-7059-4e82-88e1-a3b678c15b12-kube-proxy podName:331ab2d9-7059-4e82-88e1-a3b678c15b12 nodeName:}" failed. No retries permitted until 2025-02-13 19:02:30.417184732 +0000 UTC m=+7.330929163 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/331ab2d9-7059-4e82-88e1-a3b678c15b12-kube-proxy") pod "kube-proxy-tx2hd" (UID: "331ab2d9-7059-4e82-88e1-a3b678c15b12") : failed to sync configmap cache: timed out waiting for the condition Feb 13 19:02:30.541345 containerd[1946]: time="2025-02-13T19:02:30.541259915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tx2hd,Uid:331ab2d9-7059-4e82-88e1-a3b678c15b12,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:30.601959 containerd[1946]: time="2025-02-13T19:02:30.601690584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:30.602362 containerd[1946]: time="2025-02-13T19:02:30.601863792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:30.602850 containerd[1946]: time="2025-02-13T19:02:30.602762244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:30.603339 containerd[1946]: time="2025-02-13T19:02:30.603067020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:30.651433 systemd[1]: Started cri-containerd-67ae99fafd8248202535c30bc717719812188e8aebf524c46e732d34013cacb1.scope - libcontainer container 67ae99fafd8248202535c30bc717719812188e8aebf524c46e732d34013cacb1. 
Feb 13 19:02:30.692729 containerd[1946]: time="2025-02-13T19:02:30.692669244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tx2hd,Uid:331ab2d9-7059-4e82-88e1-a3b678c15b12,Namespace:kube-system,Attempt:0,} returns sandbox id \"67ae99fafd8248202535c30bc717719812188e8aebf524c46e732d34013cacb1\"" Feb 13 19:02:30.701976 containerd[1946]: time="2025-02-13T19:02:30.701913096Z" level=info msg="CreateContainer within sandbox \"67ae99fafd8248202535c30bc717719812188e8aebf524c46e732d34013cacb1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:02:30.735928 containerd[1946]: time="2025-02-13T19:02:30.735784920Z" level=info msg="CreateContainer within sandbox \"67ae99fafd8248202535c30bc717719812188e8aebf524c46e732d34013cacb1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf0c841c14db173a7c156de0fd655d474c4b55a32a09c3b5598ad175b6ea5d80\"" Feb 13 19:02:30.737422 containerd[1946]: time="2025-02-13T19:02:30.736552668Z" level=info msg="StartContainer for \"cf0c841c14db173a7c156de0fd655d474c4b55a32a09c3b5598ad175b6ea5d80\"" Feb 13 19:02:30.782357 systemd[1]: Started cri-containerd-cf0c841c14db173a7c156de0fd655d474c4b55a32a09c3b5598ad175b6ea5d80.scope - libcontainer container cf0c841c14db173a7c156de0fd655d474c4b55a32a09c3b5598ad175b6ea5d80. Feb 13 19:02:30.840744 containerd[1946]: time="2025-02-13T19:02:30.840431533Z" level=info msg="StartContainer for \"cf0c841c14db173a7c156de0fd655d474c4b55a32a09c3b5598ad175b6ea5d80\" returns successfully" Feb 13 19:02:31.662419 kubelet[3192]: I0213 19:02:31.661722 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tx2hd" podStartSLOduration=3.661700329 podStartE2EDuration="3.661700329s" podCreationTimestamp="2025-02-13 19:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:31.661680961 +0000 UTC m=+8.575425428" watchObservedRunningTime="2025-02-13 19:02:31.661700329 +0000 UTC m=+8.575444772" Feb 13 19:02:33.180854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1880623837.mount: Deactivated successfully. 
Feb 13 19:02:33.847158 containerd[1946]: time="2025-02-13T19:02:33.846191608Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:33.848356 containerd[1946]: time="2025-02-13T19:02:33.848233936Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 19:02:33.850763 containerd[1946]: time="2025-02-13T19:02:33.850686088Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:33.853663 containerd[1946]: time="2025-02-13T19:02:33.853483336Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.027057448s" Feb 13 19:02:33.853663 containerd[1946]: time="2025-02-13T19:02:33.853538968Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 19:02:33.856205 containerd[1946]: time="2025-02-13T19:02:33.856153828Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:02:33.860068 containerd[1946]: time="2025-02-13T19:02:33.859711816Z" level=info msg="CreateContainer within sandbox \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:02:33.894608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3726271844.mount: Deactivated successfully. Feb 13 19:02:33.898882 containerd[1946]: time="2025-02-13T19:02:33.898747468Z" level=info msg="CreateContainer within sandbox \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\"" Feb 13 19:02:33.899572 containerd[1946]: time="2025-02-13T19:02:33.899513176Z" level=info msg="StartContainer for \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\"" Feb 13 19:02:33.948349 systemd[1]: Started cri-containerd-1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a.scope - libcontainer container 1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a. 
Feb 13 19:02:34.004971 containerd[1946]: time="2025-02-13T19:02:34.004842192Z" level=info msg="StartContainer for \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\" returns successfully" Feb 13 19:02:34.694671 kubelet[3192]: I0213 19:02:34.694566 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-js9c9" podStartSLOduration=2.664438788 podStartE2EDuration="6.69454156s" podCreationTimestamp="2025-02-13 19:02:28 +0000 UTC" firstStartedPulling="2025-02-13 19:02:29.825281304 +0000 UTC m=+6.739025747" lastFinishedPulling="2025-02-13 19:02:33.855384076 +0000 UTC m=+10.769128519" observedRunningTime="2025-02-13 19:02:34.694422376 +0000 UTC m=+11.608166855" watchObservedRunningTime="2025-02-13 19:02:34.69454156 +0000 UTC m=+11.608286015" Feb 13 19:02:39.849908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634615431.mount: Deactivated successfully. Feb 13 19:02:42.336301 containerd[1946]: time="2025-02-13T19:02:42.336240994Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:42.338459 containerd[1946]: time="2025-02-13T19:02:42.338377774Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 19:02:42.340411 containerd[1946]: time="2025-02-13T19:02:42.340365226Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:42.344338 containerd[1946]: time="2025-02-13T19:02:42.344274910Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.48784519s" Feb 13 19:02:42.344445 containerd[1946]: time="2025-02-13T19:02:42.344335054Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 19:02:42.351284 containerd[1946]: time="2025-02-13T19:02:42.351215038Z" level=info msg="CreateContainer within sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:02:42.393990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3352001876.mount: Deactivated successfully. 
Feb 13 19:02:42.407732 containerd[1946]: time="2025-02-13T19:02:42.407664166Z" level=info msg="CreateContainer within sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab\"" Feb 13 19:02:42.409894 containerd[1946]: time="2025-02-13T19:02:42.408483034Z" level=info msg="StartContainer for \"342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab\"" Feb 13 19:02:42.478240 systemd[1]: Started cri-containerd-342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab.scope - libcontainer container 342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab. Feb 13 19:02:42.529618 containerd[1946]: time="2025-02-13T19:02:42.529553771Z" level=info msg="StartContainer for \"342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab\" returns successfully" Feb 13 19:02:42.557045 systemd[1]: cri-containerd-342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab.scope: Deactivated successfully. Feb 13 19:02:43.385600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab-rootfs.mount: Deactivated successfully. Feb 13 19:02:43.598941 containerd[1946]: time="2025-02-13T19:02:43.598855824Z" level=info msg="shim disconnected" id=342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab namespace=k8s.io Feb 13 19:02:43.598941 containerd[1946]: time="2025-02-13T19:02:43.598931664Z" level=warning msg="cleaning up after shim disconnected" id=342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab namespace=k8s.io Feb 13 19:02:43.599591 containerd[1946]: time="2025-02-13T19:02:43.598953000Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:43.678469 containerd[1946]: time="2025-02-13T19:02:43.678282121Z" level=info msg="CreateContainer within sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:02:43.707916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1739499515.mount: Deactivated successfully. Feb 13 19:02:43.712408 containerd[1946]: time="2025-02-13T19:02:43.708065173Z" level=info msg="CreateContainer within sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3\"" Feb 13 19:02:43.712408 containerd[1946]: time="2025-02-13T19:02:43.708907405Z" level=info msg="StartContainer for \"f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3\"" Feb 13 19:02:43.789337 systemd[1]: Started cri-containerd-f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3.scope - libcontainer container f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3. Feb 13 19:02:43.846958 containerd[1946]: time="2025-02-13T19:02:43.846790657Z" level=info msg="StartContainer for \"f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3\" returns successfully" Feb 13 19:02:43.871622 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:02:43.873231 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:02:43.873769 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Feb 13 19:02:43.884645 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:02:43.885882 systemd[1]: cri-containerd-f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3.scope: Deactivated successfully. Feb 13 19:02:43.931943 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:02:43.945214 containerd[1946]: time="2025-02-13T19:02:43.944837114Z" level=info msg="shim disconnected" id=f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3 namespace=k8s.io Feb 13 19:02:43.945214 containerd[1946]: time="2025-02-13T19:02:43.944911538Z" level=warning msg="cleaning up after shim disconnected" id=f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3 namespace=k8s.io Feb 13 19:02:43.945214 containerd[1946]: time="2025-02-13T19:02:43.944930522Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:44.388296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3-rootfs.mount: Deactivated successfully. Feb 13 19:02:44.687166 containerd[1946]: time="2025-02-13T19:02:44.686986454Z" level=info msg="CreateContainer within sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:02:44.709710 containerd[1946]: time="2025-02-13T19:02:44.709631918Z" level=info msg="CreateContainer within sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a\"" Feb 13 19:02:44.716067 containerd[1946]: time="2025-02-13T19:02:44.712676570Z" level=info msg="StartContainer for \"335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a\"" Feb 13 19:02:44.718563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677351320.mount: Deactivated successfully. Feb 13 19:02:44.785359 systemd[1]: Started cri-containerd-335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a.scope - libcontainer container 335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a. Feb 13 19:02:44.838426 containerd[1946]: time="2025-02-13T19:02:44.838228982Z" level=info msg="StartContainer for \"335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a\" returns successfully" Feb 13 19:02:44.845390 systemd[1]: cri-containerd-335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a.scope: Deactivated successfully. Feb 13 19:02:44.894119 containerd[1946]: time="2025-02-13T19:02:44.893968659Z" level=info msg="shim disconnected" id=335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a namespace=k8s.io Feb 13 19:02:44.894498 containerd[1946]: time="2025-02-13T19:02:44.894174123Z" level=warning msg="cleaning up after shim disconnected" id=335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a namespace=k8s.io Feb 13 19:02:44.894498 containerd[1946]: time="2025-02-13T19:02:44.894195111Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:45.387280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a-rootfs.mount: Deactivated successfully. 
Feb 13 19:02:45.694760 containerd[1946]: time="2025-02-13T19:02:45.693752691Z" level=info msg="CreateContainer within sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:02:45.721372 containerd[1946]: time="2025-02-13T19:02:45.719863839Z" level=info msg="CreateContainer within sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c\"" Feb 13 19:02:45.721377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037926199.mount: Deactivated successfully. Feb 13 19:02:45.730000 containerd[1946]: time="2025-02-13T19:02:45.728733183Z" level=info msg="StartContainer for \"e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c\"" Feb 13 19:02:45.786363 systemd[1]: Started cri-containerd-e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c.scope - libcontainer container e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c. Feb 13 19:02:45.831875 systemd[1]: cri-containerd-e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c.scope: Deactivated successfully. Feb 13 19:02:45.839646 containerd[1946]: time="2025-02-13T19:02:45.838195923Z" level=info msg="StartContainer for \"e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c\" returns successfully" Feb 13 19:02:45.876392 containerd[1946]: time="2025-02-13T19:02:45.876276147Z" level=info msg="shim disconnected" id=e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c namespace=k8s.io Feb 13 19:02:45.876656 containerd[1946]: time="2025-02-13T19:02:45.876379995Z" level=warning msg="cleaning up after shim disconnected" id=e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c namespace=k8s.io Feb 13 19:02:45.876656 containerd[1946]: time="2025-02-13T19:02:45.876429747Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:46.385886 systemd[1]: run-containerd-runc-k8s.io-e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c-runc.4QTJIx.mount: Deactivated successfully. Feb 13 19:02:46.386091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c-rootfs.mount: Deactivated successfully. Feb 13 19:02:46.697845 containerd[1946]: time="2025-02-13T19:02:46.697640212Z" level=info msg="CreateContainer within sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:02:46.730872 containerd[1946]: time="2025-02-13T19:02:46.730467148Z" level=info msg="CreateContainer within sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\"" Feb 13 19:02:46.732335 containerd[1946]: time="2025-02-13T19:02:46.731207200Z" level=info msg="StartContainer for \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\"" Feb 13 19:02:46.793357 systemd[1]: Started cri-containerd-7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f.scope - libcontainer container 7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f. 
Feb 13 19:02:46.843600 containerd[1946]: time="2025-02-13T19:02:46.843516292Z" level=info msg="StartContainer for \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\" returns successfully" Feb 13 19:02:47.064004 kubelet[3192]: I0213 19:02:47.063727 3192 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:02:47.137808 systemd[1]: Created slice kubepods-burstable-pod90ba1247_41b0_497a_8a32_f981595b8bf0.slice - libcontainer container kubepods-burstable-pod90ba1247_41b0_497a_8a32_f981595b8bf0.slice. Feb 13 19:02:47.156884 systemd[1]: Created slice kubepods-burstable-pod37b675fc_3a9f_4821_82e6_4e8e0f466636.slice - libcontainer container kubepods-burstable-pod37b675fc_3a9f_4821_82e6_4e8e0f466636.slice. Feb 13 19:02:47.249717 kubelet[3192]: I0213 19:02:47.249657 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9fdb\" (UniqueName: \"kubernetes.io/projected/37b675fc-3a9f-4821-82e6-4e8e0f466636-kube-api-access-p9fdb\") pod \"coredns-668d6bf9bc-9pk6w\" (UID: \"37b675fc-3a9f-4821-82e6-4e8e0f466636\") " pod="kube-system/coredns-668d6bf9bc-9pk6w" Feb 13 19:02:47.249877 kubelet[3192]: I0213 19:02:47.249759 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9xh7\" (UniqueName: \"kubernetes.io/projected/90ba1247-41b0-497a-8a32-f981595b8bf0-kube-api-access-p9xh7\") pod \"coredns-668d6bf9bc-zjpg2\" (UID: \"90ba1247-41b0-497a-8a32-f981595b8bf0\") " pod="kube-system/coredns-668d6bf9bc-zjpg2" Feb 13 19:02:47.249941 kubelet[3192]: I0213 19:02:47.249878 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37b675fc-3a9f-4821-82e6-4e8e0f466636-config-volume\") pod \"coredns-668d6bf9bc-9pk6w\" (UID: \"37b675fc-3a9f-4821-82e6-4e8e0f466636\") " pod="kube-system/coredns-668d6bf9bc-9pk6w" Feb 13 19:02:47.250101 kubelet[3192]: I0213 19:02:47.250026 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ba1247-41b0-497a-8a32-f981595b8bf0-config-volume\") pod \"coredns-668d6bf9bc-zjpg2\" (UID: \"90ba1247-41b0-497a-8a32-f981595b8bf0\") " pod="kube-system/coredns-668d6bf9bc-zjpg2" Feb 13 19:02:47.448262 containerd[1946]: time="2025-02-13T19:02:47.447103239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zjpg2,Uid:90ba1247-41b0-497a-8a32-f981595b8bf0,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:47.470497 containerd[1946]: time="2025-02-13T19:02:47.470429595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9pk6w,Uid:37b675fc-3a9f-4821-82e6-4e8e0f466636,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:49.677716 systemd-networkd[1814]: cilium_host: Link UP Feb 13 19:02:49.678863 (udev-worker)[4250]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:02:49.679971 (udev-worker)[4291]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:02:49.684431 systemd-networkd[1814]: cilium_net: Link UP Feb 13 19:02:49.685149 systemd-networkd[1814]: cilium_net: Gained carrier Feb 13 19:02:49.686352 systemd-networkd[1814]: cilium_host: Gained carrier Feb 13 19:02:49.696369 systemd-networkd[1814]: cilium_host: Gained IPv6LL Feb 13 19:02:49.866312 systemd-networkd[1814]: cilium_vxlan: Link UP Feb 13 19:02:49.866927 systemd-networkd[1814]: cilium_vxlan: Gained carrier Feb 13 19:02:50.347092 kernel: NET: Registered PF_ALG protocol family Feb 13 19:02:50.474434 systemd-networkd[1814]: cilium_net: Gained IPv6LL Feb 13 19:02:51.647947 systemd-networkd[1814]: lxc_health: Link UP Feb 13 19:02:51.658084 systemd-networkd[1814]: lxc_health: Gained carrier Feb 13 19:02:51.736534 kubelet[3192]: I0213 19:02:51.736431 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xbfs9" podStartSLOduration=11.258025843 podStartE2EDuration="23.736407333s" podCreationTimestamp="2025-02-13 19:02:28 +0000 UTC" firstStartedPulling="2025-02-13 19:02:29.868155492 +0000 UTC m=+6.781899923" lastFinishedPulling="2025-02-13 19:02:42.346536982 +0000 UTC m=+19.260281413" observedRunningTime="2025-02-13 19:02:47.739300709 +0000 UTC m=+24.653045152" watchObservedRunningTime="2025-02-13 19:02:51.736407333 +0000 UTC m=+28.650151776" Feb 13 19:02:51.818215 systemd-networkd[1814]: cilium_vxlan: Gained IPv6LL Feb 13 19:02:52.064943 systemd-networkd[1814]: lxc6655236d77a2: Link UP Feb 13 19:02:52.073100 kernel: eth0: renamed from tmp58582 Feb 13 19:02:52.082134 systemd-networkd[1814]: lxc6655236d77a2: Gained carrier Feb 13 19:02:52.110625 systemd-networkd[1814]: lxc6f5cd99b0ae9: Link UP Feb 13 19:02:52.122181 kernel: eth0: renamed from tmp8c2da Feb 13 19:02:52.133310 (udev-worker)[4297]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:02:52.134894 systemd-networkd[1814]: lxc6f5cd99b0ae9: Gained carrier Feb 13 19:02:53.674389 systemd-networkd[1814]: lxc_health: Gained IPv6LL Feb 13 19:02:53.931320 systemd-networkd[1814]: lxc6655236d77a2: Gained IPv6LL Feb 13 19:02:54.186346 systemd-networkd[1814]: lxc6f5cd99b0ae9: Gained IPv6LL Feb 13 19:02:56.513273 ntpd[1917]: Listen normally on 7 cilium_host 192.168.0.183:123 Feb 13 19:02:56.513402 ntpd[1917]: Listen normally on 8 cilium_net [fe80::6c93:e7ff:fe67:55d2%4]:123 Feb 13 19:02:56.513484 ntpd[1917]: Listen normally on 9 cilium_host [fe80::2c14:55ff:fe5c:3efa%5]:123 Feb 13 19:02:56.513553 ntpd[1917]: Listen normally on 10 cilium_vxlan [fe80::1025:7cff:fed5:f59f%6]:123 Feb 13 19:02:56.513626 ntpd[1917]: Listen normally on 11 lxc_health [fe80::2016:39ff:fe65:1f2c%8]:123 Feb 13 19:02:56.513694 ntpd[1917]: Listen normally on 12 lxc6655236d77a2 [fe80::4821:52ff:fed7:cffa%10]:123 Feb 13 19:02:56.513763 ntpd[1917]: Listen normally on 13 lxc6f5cd99b0ae9 [fe80::3c9e:ecff:fe11:25ad%12]:123 Feb 13 19:03:00.371497 containerd[1946]: time="2025-02-13T19:03:00.371347695Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:00.373486 containerd[1946]: time="2025-02-13T19:03:00.371529339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:00.373486 containerd[1946]: time="2025-02-13T19:03:00.371571771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:00.373486 containerd[1946]: time="2025-02-13T19:03:00.371727591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:00.422525 systemd[1]: Started cri-containerd-5858238afaf76dd73c7b7ea0d2953aee42ade514bbc5fce543def00f76723b73.scope - libcontainer container 5858238afaf76dd73c7b7ea0d2953aee42ade514bbc5fce543def00f76723b73. Feb 13 19:03:00.503836 containerd[1946]: time="2025-02-13T19:03:00.502755664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:00.503836 containerd[1946]: time="2025-02-13T19:03:00.502853812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:00.503836 containerd[1946]: time="2025-02-13T19:03:00.502891192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:00.511879 containerd[1946]: time="2025-02-13T19:03:00.506604148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:00.542964 containerd[1946]: time="2025-02-13T19:03:00.542908048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zjpg2,Uid:90ba1247-41b0-497a-8a32-f981595b8bf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5858238afaf76dd73c7b7ea0d2953aee42ade514bbc5fce543def00f76723b73\"" Feb 13 19:03:00.551922 containerd[1946]: time="2025-02-13T19:03:00.551839480Z" level=info msg="CreateContainer within sandbox \"5858238afaf76dd73c7b7ea0d2953aee42ade514bbc5fce543def00f76723b73\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:03:00.592528 systemd[1]: Started cri-containerd-8c2da844ca5a7e761241eaa47ee8d1a1b1929d8398b01c17d77e00f696d16ec4.scope - libcontainer container 8c2da844ca5a7e761241eaa47ee8d1a1b1929d8398b01c17d77e00f696d16ec4. Feb 13 19:03:00.601082 containerd[1946]: time="2025-02-13T19:03:00.599590733Z" level=info msg="CreateContainer within sandbox \"5858238afaf76dd73c7b7ea0d2953aee42ade514bbc5fce543def00f76723b73\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"67e05259fa60bedf3e9f43a3fe7e6604de2f35ffd8f1ba0fdbefdbc78907885a\"" Feb 13 19:03:00.602319 containerd[1946]: time="2025-02-13T19:03:00.602151017Z" level=info msg="StartContainer for \"67e05259fa60bedf3e9f43a3fe7e6604de2f35ffd8f1ba0fdbefdbc78907885a\"" Feb 13 19:03:00.682158 systemd[1]: Started cri-containerd-67e05259fa60bedf3e9f43a3fe7e6604de2f35ffd8f1ba0fdbefdbc78907885a.scope - libcontainer container 67e05259fa60bedf3e9f43a3fe7e6604de2f35ffd8f1ba0fdbefdbc78907885a. Feb 13 19:03:00.736296 containerd[1946]: time="2025-02-13T19:03:00.735448901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9pk6w,Uid:37b675fc-3a9f-4821-82e6-4e8e0f466636,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c2da844ca5a7e761241eaa47ee8d1a1b1929d8398b01c17d77e00f696d16ec4\"" Feb 13 19:03:00.743796 containerd[1946]: time="2025-02-13T19:03:00.743166029Z" level=info msg="CreateContainer within sandbox \"8c2da844ca5a7e761241eaa47ee8d1a1b1929d8398b01c17d77e00f696d16ec4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:03:00.802894 containerd[1946]: time="2025-02-13T19:03:00.802581306Z" level=info msg="CreateContainer within sandbox \"8c2da844ca5a7e761241eaa47ee8d1a1b1929d8398b01c17d77e00f696d16ec4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ee68100e87a2cf11edb1720154485afcad14609df721ac6409ceafcc8463308\"" Feb 13 19:03:00.805076 containerd[1946]: time="2025-02-13T19:03:00.804733230Z" level=info msg="StartContainer for \"67e05259fa60bedf3e9f43a3fe7e6604de2f35ffd8f1ba0fdbefdbc78907885a\" returns successfully" Feb 13 19:03:00.806351 containerd[1946]: time="2025-02-13T19:03:00.804781962Z" level=info msg="StartContainer for \"8ee68100e87a2cf11edb1720154485afcad14609df721ac6409ceafcc8463308\"" Feb 13 19:03:00.883379 systemd[1]: Started cri-containerd-8ee68100e87a2cf11edb1720154485afcad14609df721ac6409ceafcc8463308.scope - libcontainer container 8ee68100e87a2cf11edb1720154485afcad14609df721ac6409ceafcc8463308. 
Feb 13 19:03:00.985617 containerd[1946]: time="2025-02-13T19:03:00.984635803Z" level=info msg="StartContainer for \"8ee68100e87a2cf11edb1720154485afcad14609df721ac6409ceafcc8463308\" returns successfully" Feb 13 19:03:01.813644 kubelet[3192]: I0213 19:03:01.813541 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9pk6w" podStartSLOduration=32.813514363 podStartE2EDuration="32.813514363s" podCreationTimestamp="2025-02-13 19:02:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:01.810280147 +0000 UTC m=+38.724024614" watchObservedRunningTime="2025-02-13 19:03:01.813514363 +0000 UTC m=+38.727258806" Feb 13 19:03:01.839255 kubelet[3192]: I0213 19:03:01.838482 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zjpg2" podStartSLOduration=33.838455391 podStartE2EDuration="33.838455391s" podCreationTimestamp="2025-02-13 19:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:01.834842071 +0000 UTC m=+38.748586526" watchObservedRunningTime="2025-02-13 19:03:01.838455391 +0000 UTC m=+38.752199846" Feb 13 19:03:03.221572 systemd[1]: Started sshd@7-172.31.26.138:22-147.75.109.163:48568.service - OpenSSH per-connection server daemon (147.75.109.163:48568). Feb 13 19:03:03.414110 sshd[4832]: Accepted publickey for core from 147.75.109.163 port 48568 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:03.415523 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:03.423330 systemd-logind[1924]: New session 8 of user core. Feb 13 19:03:03.431338 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:03:03.718933 sshd[4834]: Connection closed by 147.75.109.163 port 48568 Feb 13 19:03:03.719791 sshd-session[4832]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:03.727160 systemd[1]: sshd@7-172.31.26.138:22-147.75.109.163:48568.service: Deactivated successfully. Feb 13 19:03:03.730677 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:03:03.733231 systemd-logind[1924]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:03:03.736614 systemd-logind[1924]: Removed session 8. Feb 13 19:03:08.759586 systemd[1]: Started sshd@8-172.31.26.138:22-147.75.109.163:48572.service - OpenSSH per-connection server daemon (147.75.109.163:48572). Feb 13 19:03:08.939844 sshd[4848]: Accepted publickey for core from 147.75.109.163 port 48572 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:08.942292 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:08.950182 systemd-logind[1924]: New session 9 of user core. Feb 13 19:03:08.960286 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:03:09.202890 sshd[4850]: Connection closed by 147.75.109.163 port 48572 Feb 13 19:03:09.203543 sshd-session[4848]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:09.209553 systemd[1]: sshd@8-172.31.26.138:22-147.75.109.163:48572.service: Deactivated successfully. Feb 13 19:03:09.214792 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:03:09.216913 systemd-logind[1924]: Session 9 logged out. Waiting for processes to exit. 
Feb 13 19:03:09.218901 systemd-logind[1924]: Removed session 9. Feb 13 19:03:14.245660 systemd[1]: Started sshd@9-172.31.26.138:22-147.75.109.163:36090.service - OpenSSH per-connection server daemon (147.75.109.163:36090). Feb 13 19:03:14.440937 sshd[4862]: Accepted publickey for core from 147.75.109.163 port 36090 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:14.443451 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:14.450825 systemd-logind[1924]: New session 10 of user core. Feb 13 19:03:14.458373 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:03:14.699547 sshd[4864]: Connection closed by 147.75.109.163 port 36090 Feb 13 19:03:14.700402 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:14.706438 systemd[1]: sshd@9-172.31.26.138:22-147.75.109.163:36090.service: Deactivated successfully. Feb 13 19:03:14.711115 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:03:14.714145 systemd-logind[1924]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:03:14.717024 systemd-logind[1924]: Removed session 10. Feb 13 19:03:19.747531 systemd[1]: Started sshd@10-172.31.26.138:22-147.75.109.163:38540.service - OpenSSH per-connection server daemon (147.75.109.163:38540). Feb 13 19:03:19.927651 sshd[4876]: Accepted publickey for core from 147.75.109.163 port 38540 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:19.930058 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:19.937553 systemd-logind[1924]: New session 11 of user core. Feb 13 19:03:19.946310 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:03:20.202680 sshd[4878]: Connection closed by 147.75.109.163 port 38540 Feb 13 19:03:20.202738 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:20.211364 systemd[1]: sshd@10-172.31.26.138:22-147.75.109.163:38540.service: Deactivated successfully. Feb 13 19:03:20.216560 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:03:20.218376 systemd-logind[1924]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:03:20.220225 systemd-logind[1924]: Removed session 11. Feb 13 19:03:25.242623 systemd[1]: Started sshd@11-172.31.26.138:22-147.75.109.163:38548.service - OpenSSH per-connection server daemon (147.75.109.163:38548). Feb 13 19:03:25.433790 sshd[4893]: Accepted publickey for core from 147.75.109.163 port 38548 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:25.436388 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:25.444484 systemd-logind[1924]: New session 12 of user core. Feb 13 19:03:25.451745 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:03:25.698108 sshd[4895]: Connection closed by 147.75.109.163 port 38548 Feb 13 19:03:25.698924 sshd-session[4893]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:25.705024 systemd[1]: sshd@11-172.31.26.138:22-147.75.109.163:38548.service: Deactivated successfully. Feb 13 19:03:25.709803 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:03:25.712467 systemd-logind[1924]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:03:25.715134 systemd-logind[1924]: Removed session 12. 
Feb 13 19:03:25.737595 systemd[1]: Started sshd@12-172.31.26.138:22-147.75.109.163:38560.service - OpenSSH per-connection server daemon (147.75.109.163:38560). Feb 13 19:03:25.918181 sshd[4908]: Accepted publickey for core from 147.75.109.163 port 38560 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:25.920696 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:25.928482 systemd-logind[1924]: New session 13 of user core. Feb 13 19:03:25.938312 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:03:26.250293 sshd[4910]: Connection closed by 147.75.109.163 port 38560 Feb 13 19:03:26.252994 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:26.261355 systemd[1]: sshd@12-172.31.26.138:22-147.75.109.163:38560.service: Deactivated successfully. Feb 13 19:03:26.269519 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:03:26.275429 systemd-logind[1924]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:03:26.300537 systemd[1]: Started sshd@13-172.31.26.138:22-147.75.109.163:38570.service - OpenSSH per-connection server daemon (147.75.109.163:38570). Feb 13 19:03:26.303218 systemd-logind[1924]: Removed session 13. Feb 13 19:03:26.491211 sshd[4919]: Accepted publickey for core from 147.75.109.163 port 38570 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:26.493359 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:26.501701 systemd-logind[1924]: New session 14 of user core. Feb 13 19:03:26.509293 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:03:26.757122 sshd[4921]: Connection closed by 147.75.109.163 port 38570 Feb 13 19:03:26.758150 sshd-session[4919]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:26.764369 systemd[1]: sshd@13-172.31.26.138:22-147.75.109.163:38570.service: Deactivated successfully. Feb 13 19:03:26.769208 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:03:26.770854 systemd-logind[1924]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:03:26.773192 systemd-logind[1924]: Removed session 14. Feb 13 19:03:31.796551 systemd[1]: Started sshd@14-172.31.26.138:22-147.75.109.163:38906.service - OpenSSH per-connection server daemon (147.75.109.163:38906). Feb 13 19:03:31.989792 sshd[4936]: Accepted publickey for core from 147.75.109.163 port 38906 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:31.992387 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:32.002143 systemd-logind[1924]: New session 15 of user core. Feb 13 19:03:32.011419 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:03:32.257893 sshd[4938]: Connection closed by 147.75.109.163 port 38906 Feb 13 19:03:32.259463 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:32.264850 systemd-logind[1924]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:03:32.266055 systemd[1]: sshd@14-172.31.26.138:22-147.75.109.163:38906.service: Deactivated successfully. Feb 13 19:03:32.269725 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:03:32.275993 systemd-logind[1924]: Removed session 15. 
Feb 13 19:03:37.299706 systemd[1]: Started sshd@15-172.31.26.138:22-147.75.109.163:38920.service - OpenSSH per-connection server daemon (147.75.109.163:38920). Feb 13 19:03:37.486571 sshd[4950]: Accepted publickey for core from 147.75.109.163 port 38920 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:37.490467 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:37.499123 systemd-logind[1924]: New session 16 of user core. Feb 13 19:03:37.506353 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:03:37.748414 sshd[4952]: Connection closed by 147.75.109.163 port 38920 Feb 13 19:03:37.749640 sshd-session[4950]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:37.756446 systemd[1]: sshd@15-172.31.26.138:22-147.75.109.163:38920.service: Deactivated successfully. Feb 13 19:03:37.760294 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:03:37.761659 systemd-logind[1924]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:03:37.764461 systemd-logind[1924]: Removed session 16. Feb 13 19:03:42.795561 systemd[1]: Started sshd@16-172.31.26.138:22-147.75.109.163:54276.service - OpenSSH per-connection server daemon (147.75.109.163:54276). Feb 13 19:03:42.990156 sshd[4963]: Accepted publickey for core from 147.75.109.163 port 54276 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:42.992586 sshd-session[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:43.001044 systemd-logind[1924]: New session 17 of user core. Feb 13 19:03:43.011352 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:03:43.267911 sshd[4965]: Connection closed by 147.75.109.163 port 54276 Feb 13 19:03:43.269000 sshd-session[4963]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:43.273965 systemd[1]: sshd@16-172.31.26.138:22-147.75.109.163:54276.service: Deactivated successfully. Feb 13 19:03:43.278160 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:03:43.282566 systemd-logind[1924]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:03:43.284967 systemd-logind[1924]: Removed session 17. Feb 13 19:03:43.306610 systemd[1]: Started sshd@17-172.31.26.138:22-147.75.109.163:54284.service - OpenSSH per-connection server daemon (147.75.109.163:54284). Feb 13 19:03:43.510208 sshd[4976]: Accepted publickey for core from 147.75.109.163 port 54284 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:43.510845 sshd-session[4976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:43.519571 systemd-logind[1924]: New session 18 of user core. Feb 13 19:03:43.525331 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:03:43.828741 sshd[4978]: Connection closed by 147.75.109.163 port 54284 Feb 13 19:03:43.829631 sshd-session[4976]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:43.836406 systemd[1]: sshd@17-172.31.26.138:22-147.75.109.163:54284.service: Deactivated successfully. Feb 13 19:03:43.840572 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:03:43.842701 systemd-logind[1924]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:03:43.844648 systemd-logind[1924]: Removed session 18. 
Feb 13 19:03:43.871545 systemd[1]: Started sshd@18-172.31.26.138:22-147.75.109.163:54294.service - OpenSSH per-connection server daemon (147.75.109.163:54294). Feb 13 19:03:44.055310 sshd[4986]: Accepted publickey for core from 147.75.109.163 port 54294 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:44.059735 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:44.073782 systemd-logind[1924]: New session 19 of user core. Feb 13 19:03:44.080605 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:03:45.401661 sshd[4988]: Connection closed by 147.75.109.163 port 54294 Feb 13 19:03:45.402874 sshd-session[4986]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:45.410737 systemd[1]: sshd@18-172.31.26.138:22-147.75.109.163:54294.service: Deactivated successfully. Feb 13 19:03:45.419627 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:03:45.428954 systemd-logind[1924]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:03:45.452574 systemd[1]: Started sshd@19-172.31.26.138:22-147.75.109.163:54300.service - OpenSSH per-connection server daemon (147.75.109.163:54300). Feb 13 19:03:45.454493 systemd-logind[1924]: Removed session 19. Feb 13 19:03:45.670991 sshd[5004]: Accepted publickey for core from 147.75.109.163 port 54300 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:45.673127 sshd-session[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:45.681377 systemd-logind[1924]: New session 20 of user core. Feb 13 19:03:45.690367 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:03:46.182970 sshd[5006]: Connection closed by 147.75.109.163 port 54300 Feb 13 19:03:46.183852 sshd-session[5004]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:46.190578 systemd[1]: sshd@19-172.31.26.138:22-147.75.109.163:54300.service: Deactivated successfully. Feb 13 19:03:46.195973 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:03:46.198310 systemd-logind[1924]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:03:46.200665 systemd-logind[1924]: Removed session 20. Feb 13 19:03:46.225579 systemd[1]: Started sshd@20-172.31.26.138:22-147.75.109.163:54316.service - OpenSSH per-connection server daemon (147.75.109.163:54316). Feb 13 19:03:46.418903 sshd[5015]: Accepted publickey for core from 147.75.109.163 port 54316 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:46.421594 sshd-session[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:46.430369 systemd-logind[1924]: New session 21 of user core. Feb 13 19:03:46.441333 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:03:46.689462 sshd[5017]: Connection closed by 147.75.109.163 port 54316 Feb 13 19:03:46.688518 sshd-session[5015]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:46.693712 systemd[1]: sshd@20-172.31.26.138:22-147.75.109.163:54316.service: Deactivated successfully. Feb 13 19:03:46.699346 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:03:46.701417 systemd-logind[1924]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:03:46.704613 systemd-logind[1924]: Removed session 21. 
Feb 13 19:03:51.732748 systemd[1]: Started sshd@21-172.31.26.138:22-147.75.109.163:51324.service - OpenSSH per-connection server daemon (147.75.109.163:51324). Feb 13 19:03:51.915280 sshd[5029]: Accepted publickey for core from 147.75.109.163 port 51324 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:51.917619 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:51.924767 systemd-logind[1924]: New session 22 of user core. Feb 13 19:03:51.937321 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:03:52.177131 sshd[5031]: Connection closed by 147.75.109.163 port 51324 Feb 13 19:03:52.178398 sshd-session[5029]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:52.184411 systemd[1]: sshd@21-172.31.26.138:22-147.75.109.163:51324.service: Deactivated successfully. Feb 13 19:03:52.187964 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:03:52.189987 systemd-logind[1924]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:03:52.191819 systemd-logind[1924]: Removed session 22. Feb 13 19:03:57.216602 systemd[1]: Started sshd@22-172.31.26.138:22-147.75.109.163:51328.service - OpenSSH per-connection server daemon (147.75.109.163:51328). Feb 13 19:03:57.403435 sshd[5044]: Accepted publickey for core from 147.75.109.163 port 51328 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:57.406500 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:57.413999 systemd-logind[1924]: New session 23 of user core. Feb 13 19:03:57.425325 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:03:57.671366 sshd[5046]: Connection closed by 147.75.109.163 port 51328 Feb 13 19:03:57.673352 sshd-session[5044]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:57.680193 systemd[1]: sshd@22-172.31.26.138:22-147.75.109.163:51328.service: Deactivated successfully. Feb 13 19:03:57.683570 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:03:57.684795 systemd-logind[1924]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:03:57.687139 systemd-logind[1924]: Removed session 23. Feb 13 19:04:02.716501 systemd[1]: Started sshd@23-172.31.26.138:22-147.75.109.163:41320.service - OpenSSH per-connection server daemon (147.75.109.163:41320). Feb 13 19:04:02.900406 sshd[5060]: Accepted publickey for core from 147.75.109.163 port 41320 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:04:02.902664 sshd-session[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:02.911675 systemd-logind[1924]: New session 24 of user core. Feb 13 19:04:02.924492 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 19:04:03.170619 sshd[5062]: Connection closed by 147.75.109.163 port 41320 Feb 13 19:04:03.171504 sshd-session[5060]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:03.177752 systemd[1]: sshd@23-172.31.26.138:22-147.75.109.163:41320.service: Deactivated successfully. Feb 13 19:04:03.181400 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:04:03.183787 systemd-logind[1924]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:04:03.185742 systemd-logind[1924]: Removed session 24. 
Feb 13 19:04:08.212769 systemd[1]: Started sshd@24-172.31.26.138:22-147.75.109.163:41326.service - OpenSSH per-connection server daemon (147.75.109.163:41326). Feb 13 19:04:08.395150 sshd[5073]: Accepted publickey for core from 147.75.109.163 port 41326 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:04:08.397569 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:08.406356 systemd-logind[1924]: New session 25 of user core. Feb 13 19:04:08.414336 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:04:08.654578 sshd[5075]: Connection closed by 147.75.109.163 port 41326 Feb 13 19:04:08.653646 sshd-session[5073]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:08.659747 systemd[1]: sshd@24-172.31.26.138:22-147.75.109.163:41326.service: Deactivated successfully. Feb 13 19:04:08.663851 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:04:08.665616 systemd-logind[1924]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:04:08.668805 systemd-logind[1924]: Removed session 25. Feb 13 19:04:08.697832 systemd[1]: Started sshd@25-172.31.26.138:22-147.75.109.163:41342.service - OpenSSH per-connection server daemon (147.75.109.163:41342). Feb 13 19:04:08.879796 sshd[5086]: Accepted publickey for core from 147.75.109.163 port 41342 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:04:08.882347 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:08.891013 systemd-logind[1924]: New session 26 of user core. Feb 13 19:04:08.897310 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:04:12.300956 containerd[1946]: time="2025-02-13T19:04:12.290841973Z" level=info msg="StopContainer for \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\" with timeout 30 (s)" Feb 13 19:04:12.307280 containerd[1946]: time="2025-02-13T19:04:12.304255765Z" level=info msg="Stop container \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\" with signal terminated" Feb 13 19:04:12.344180 systemd[1]: cri-containerd-1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a.scope: Deactivated successfully. Feb 13 19:04:12.353804 containerd[1946]: time="2025-02-13T19:04:12.353690293Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:04:12.371833 containerd[1946]: time="2025-02-13T19:04:12.371615173Z" level=info msg="StopContainer for \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\" with timeout 2 (s)" Feb 13 19:04:12.372716 containerd[1946]: time="2025-02-13T19:04:12.372591241Z" level=info msg="Stop container \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\" with signal terminated" Feb 13 19:04:12.409957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a-rootfs.mount: Deactivated successfully. 
Feb 13 19:04:12.410201 systemd-networkd[1814]: lxc_health: Link DOWN Feb 13 19:04:12.410212 systemd-networkd[1814]: lxc_health: Lost carrier Feb 13 19:04:12.436215 containerd[1946]: time="2025-02-13T19:04:12.436002445Z" level=info msg="shim disconnected" id=1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a namespace=k8s.io Feb 13 19:04:12.436643 containerd[1946]: time="2025-02-13T19:04:12.436186537Z" level=warning msg="cleaning up after shim disconnected" id=1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a namespace=k8s.io Feb 13 19:04:12.436643 containerd[1946]: time="2025-02-13T19:04:12.436494193Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:12.442158 systemd[1]: cri-containerd-7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f.scope: Deactivated successfully. Feb 13 19:04:12.442728 systemd[1]: cri-containerd-7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f.scope: Consumed 14.166s CPU time. Feb 13 19:04:12.474588 containerd[1946]: time="2025-02-13T19:04:12.474298262Z" level=info msg="StopContainer for \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\" returns successfully" Feb 13 19:04:12.475558 containerd[1946]: time="2025-02-13T19:04:12.475285358Z" level=info msg="StopPodSandbox for \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\"" Feb 13 19:04:12.475558 containerd[1946]: time="2025-02-13T19:04:12.475359206Z" level=info msg="Container to stop \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:12.478981 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468-shm.mount: Deactivated successfully. Feb 13 19:04:12.496835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f-rootfs.mount: Deactivated successfully. Feb 13 19:04:12.503203 systemd[1]: cri-containerd-6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468.scope: Deactivated successfully. 
Feb 13 19:04:12.514176 containerd[1946]: time="2025-02-13T19:04:12.514012982Z" level=info msg="shim disconnected" id=7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f namespace=k8s.io Feb 13 19:04:12.514176 containerd[1946]: time="2025-02-13T19:04:12.514110914Z" level=warning msg="cleaning up after shim disconnected" id=7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f namespace=k8s.io Feb 13 19:04:12.514176 containerd[1946]: time="2025-02-13T19:04:12.514130306Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:12.548564 containerd[1946]: time="2025-02-13T19:04:12.548198894Z" level=info msg="shim disconnected" id=6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468 namespace=k8s.io Feb 13 19:04:12.548564 containerd[1946]: time="2025-02-13T19:04:12.548275394Z" level=warning msg="cleaning up after shim disconnected" id=6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468 namespace=k8s.io Feb 13 19:04:12.548564 containerd[1946]: time="2025-02-13T19:04:12.548298110Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:12.550810 containerd[1946]: time="2025-02-13T19:04:12.550677170Z" level=info msg="StopContainer for \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\" returns successfully" Feb 13 19:04:12.552582 containerd[1946]: time="2025-02-13T19:04:12.552324998Z" level=info msg="StopPodSandbox for \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\"" Feb 13 19:04:12.552582 containerd[1946]: time="2025-02-13T19:04:12.552446630Z" level=info msg="Container to stop \"335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:12.553541 containerd[1946]: time="2025-02-13T19:04:12.552801110Z" level=info msg="Container to stop \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:12.553541 containerd[1946]: time="2025-02-13T19:04:12.552853406Z" level=info msg="Container to stop \"342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:12.553541 containerd[1946]: time="2025-02-13T19:04:12.553295426Z" level=info msg="Container to stop \"f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:12.554387 containerd[1946]: time="2025-02-13T19:04:12.553319282Z" level=info msg="Container to stop \"e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:04:12.565327 systemd[1]: cri-containerd-1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c.scope: Deactivated successfully. 
Feb 13 19:04:12.583826 containerd[1946]: time="2025-02-13T19:04:12.583743098Z" level=info msg="TearDown network for sandbox \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\" successfully" Feb 13 19:04:12.583826 containerd[1946]: time="2025-02-13T19:04:12.583795250Z" level=info msg="StopPodSandbox for \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\" returns successfully" Feb 13 19:04:12.628859 containerd[1946]: time="2025-02-13T19:04:12.628688210Z" level=info msg="shim disconnected" id=1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c namespace=k8s.io Feb 13 19:04:12.629704 containerd[1946]: time="2025-02-13T19:04:12.629331890Z" level=warning msg="cleaning up after shim disconnected" id=1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c namespace=k8s.io Feb 13 19:04:12.629704 containerd[1946]: time="2025-02-13T19:04:12.629369018Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:12.653418 containerd[1946]: time="2025-02-13T19:04:12.653352986Z" level=info msg="TearDown network for sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" successfully" Feb 13 19:04:12.653418 containerd[1946]: time="2025-02-13T19:04:12.653406818Z" level=info msg="StopPodSandbox for \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" returns successfully" Feb 13 19:04:12.726067 kubelet[3192]: I0213 19:04:12.724130 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e4a1f35-e21f-44d5-b89a-aa4d6c7db800-cilium-config-path\") pod \"3e4a1f35-e21f-44d5-b89a-aa4d6c7db800\" (UID: \"3e4a1f35-e21f-44d5-b89a-aa4d6c7db800\") " Feb 13 19:04:12.726067 kubelet[3192]: I0213 19:04:12.724210 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlg6t\" (UniqueName: \"kubernetes.io/projected/3e4a1f35-e21f-44d5-b89a-aa4d6c7db800-kube-api-access-mlg6t\") pod \"3e4a1f35-e21f-44d5-b89a-aa4d6c7db800\" (UID: \"3e4a1f35-e21f-44d5-b89a-aa4d6c7db800\") " Feb 13 19:04:12.728652 kubelet[3192]: I0213 19:04:12.728587 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4a1f35-e21f-44d5-b89a-aa4d6c7db800-kube-api-access-mlg6t" (OuterVolumeSpecName: "kube-api-access-mlg6t") pod "3e4a1f35-e21f-44d5-b89a-aa4d6c7db800" (UID: "3e4a1f35-e21f-44d5-b89a-aa4d6c7db800"). InnerVolumeSpecName "kube-api-access-mlg6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:04:12.732351 kubelet[3192]: I0213 19:04:12.732199 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4a1f35-e21f-44d5-b89a-aa4d6c7db800-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e4a1f35-e21f-44d5-b89a-aa4d6c7db800" (UID: "3e4a1f35-e21f-44d5-b89a-aa4d6c7db800"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:04:12.825253 kubelet[3192]: I0213 19:04:12.825115 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-lib-modules\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.825557 kubelet[3192]: I0213 19:04:12.825532 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-bpf-maps\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.825761 kubelet[3192]: I0213 19:04:12.825737 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cilium-cgroup\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.826315 kubelet[3192]: I0213 19:04:12.826276 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-xtables-lock\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.826534 kubelet[3192]: I0213 19:04:12.826512 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-etc-cni-netd\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.826733 kubelet[3192]: I0213 19:04:12.826711 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-host-proc-sys-net\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.826914 kubelet[3192]: I0213 19:04:12.826892 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cilium-config-path\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.828180 kubelet[3192]: I0213 19:04:12.828132 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cni-path\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.828266 kubelet[3192]: I0213 19:04:12.828192 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-host-proc-sys-kernel\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.828266 kubelet[3192]: I0213 19:04:12.828245 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-clustermesh-secrets\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: 
\"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.828399 kubelet[3192]: I0213 19:04:12.828280 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cilium-run\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.828399 kubelet[3192]: I0213 19:04:12.828321 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfwtn\" (UniqueName: \"kubernetes.io/projected/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-kube-api-access-gfwtn\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.828399 kubelet[3192]: I0213 19:04:12.828356 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-hostproc\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.828542 kubelet[3192]: I0213 19:04:12.828397 3192 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-hubble-tls\") pod \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\" (UID: \"d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95\") " Feb 13 19:04:12.828542 kubelet[3192]: I0213 19:04:12.828476 3192 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e4a1f35-e21f-44d5-b89a-aa4d6c7db800-cilium-config-path\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.828542 kubelet[3192]: I0213 19:04:12.828503 3192 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mlg6t\" (UniqueName: \"kubernetes.io/projected/3e4a1f35-e21f-44d5-b89a-aa4d6c7db800-kube-api-access-mlg6t\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.837626 kubelet[3192]: I0213 19:04:12.825451 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:12.837626 kubelet[3192]: I0213 19:04:12.837460 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-kube-api-access-gfwtn" (OuterVolumeSpecName: "kube-api-access-gfwtn") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "kube-api-access-gfwtn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:04:12.837626 kubelet[3192]: I0213 19:04:12.837523 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-hostproc" (OuterVolumeSpecName: "hostproc") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:12.837626 kubelet[3192]: I0213 19:04:12.837532 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cni-path" (OuterVolumeSpecName: "cni-path") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:12.837626 kubelet[3192]: I0213 19:04:12.825680 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:12.838025 kubelet[3192]: I0213 19:04:12.825868 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:12.838025 kubelet[3192]: I0213 19:04:12.826456 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:12.838025 kubelet[3192]: I0213 19:04:12.837579 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:12.838025 kubelet[3192]: I0213 19:04:12.826835 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:12.838025 kubelet[3192]: I0213 19:04:12.832753 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 13 19:04:12.838442 kubelet[3192]: I0213 19:04:12.832782 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 13 19:04:12.838442 kubelet[3192]: I0213 19:04:12.832826 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:12.838442 kubelet[3192]: I0213 19:04:12.837423 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 13 19:04:12.838442 kubelet[3192]: I0213 19:04:12.826654 3192 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" (UID: "d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 13 19:04:12.929740 kubelet[3192]: I0213 19:04:12.929398 3192 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-clustermesh-secrets\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.929740 kubelet[3192]: I0213 19:04:12.929447 3192 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cilium-run\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.929740 kubelet[3192]: I0213 19:04:12.929470 3192 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gfwtn\" (UniqueName: \"kubernetes.io/projected/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-kube-api-access-gfwtn\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.929740 kubelet[3192]: I0213 19:04:12.929492 3192 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-hostproc\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.929740 kubelet[3192]: I0213 19:04:12.929514 3192 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-hubble-tls\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.929740 kubelet[3192]: I0213 19:04:12.929535 3192 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-lib-modules\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.929740 kubelet[3192]: I0213 19:04:12.929556 3192 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cilium-cgroup\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.929740 kubelet[3192]: I0213 19:04:12.929583 3192 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-xtables-lock\") on node \"ip-172-31-26-138\" 
DevicePath \"\"" Feb 13 19:04:12.930301 kubelet[3192]: I0213 19:04:12.929603 3192 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-bpf-maps\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.930301 kubelet[3192]: I0213 19:04:12.929626 3192 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-etc-cni-netd\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.930301 kubelet[3192]: I0213 19:04:12.929646 3192 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cni-path\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.930301 kubelet[3192]: I0213 19:04:12.929668 3192 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-host-proc-sys-kernel\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.930301 kubelet[3192]: I0213 19:04:12.929687 3192 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-host-proc-sys-net\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.930301 kubelet[3192]: I0213 19:04:12.929707 3192 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95-cilium-config-path\") on node \"ip-172-31-26-138\" DevicePath \"\"" Feb 13 19:04:12.972587 kubelet[3192]: I0213 19:04:12.972446 3192 scope.go:117] "RemoveContainer" containerID="7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f" Feb 13 19:04:12.984540 containerd[1946]: time="2025-02-13T19:04:12.984364972Z" level=info msg="RemoveContainer for \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\"" Feb 13 19:04:12.990298 systemd[1]: Removed slice kubepods-burstable-podd2e242cf_07e3_4cfc_8cfc_9154b8e9bf95.slice - libcontainer container kubepods-burstable-podd2e242cf_07e3_4cfc_8cfc_9154b8e9bf95.slice. Feb 13 19:04:12.990977 systemd[1]: kubepods-burstable-podd2e242cf_07e3_4cfc_8cfc_9154b8e9bf95.slice: Consumed 14.310s CPU time. Feb 13 19:04:13.001748 containerd[1946]: time="2025-02-13T19:04:13.001269612Z" level=info msg="RemoveContainer for \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\" returns successfully" Feb 13 19:04:13.001906 kubelet[3192]: I0213 19:04:13.001749 3192 scope.go:117] "RemoveContainer" containerID="e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c" Feb 13 19:04:13.004883 containerd[1946]: time="2025-02-13T19:04:13.004824528Z" level=info msg="RemoveContainer for \"e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c\"" Feb 13 19:04:13.005563 systemd[1]: Removed slice kubepods-besteffort-pod3e4a1f35_e21f_44d5_b89a_aa4d6c7db800.slice - libcontainer container kubepods-besteffort-pod3e4a1f35_e21f_44d5_b89a_aa4d6c7db800.slice. 
Feb 13 19:04:13.011980 containerd[1946]: time="2025-02-13T19:04:13.011917932Z" level=info msg="RemoveContainer for \"e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c\" returns successfully" Feb 13 19:04:13.012519 kubelet[3192]: I0213 19:04:13.012469 3192 scope.go:117] "RemoveContainer" containerID="335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a" Feb 13 19:04:13.015720 containerd[1946]: time="2025-02-13T19:04:13.015665880Z" level=info msg="RemoveContainer for \"335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a\"" Feb 13 19:04:13.024407 containerd[1946]: time="2025-02-13T19:04:13.024345720Z" level=info msg="RemoveContainer for \"335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a\" returns successfully" Feb 13 19:04:13.024715 kubelet[3192]: I0213 19:04:13.024694 3192 scope.go:117] "RemoveContainer" containerID="f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3" Feb 13 19:04:13.029353 containerd[1946]: time="2025-02-13T19:04:13.027170328Z" level=info msg="RemoveContainer for \"f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3\"" Feb 13 19:04:13.039664 containerd[1946]: time="2025-02-13T19:04:13.039582336Z" level=info msg="RemoveContainer for \"f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3\" returns successfully" Feb 13 19:04:13.044802 kubelet[3192]: I0213 19:04:13.044715 3192 scope.go:117] "RemoveContainer" containerID="342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab" Feb 13 19:04:13.050741 containerd[1946]: time="2025-02-13T19:04:13.050266416Z" level=info msg="RemoveContainer for \"342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab\"" Feb 13 19:04:13.059367 containerd[1946]: time="2025-02-13T19:04:13.058924980Z" level=info msg="RemoveContainer for \"342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab\" returns successfully" Feb 13 19:04:13.060012 kubelet[3192]: I0213 19:04:13.059975 3192 scope.go:117] "RemoveContainer" containerID="7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f" Feb 13 19:04:13.060515 containerd[1946]: time="2025-02-13T19:04:13.060453577Z" level=error msg="ContainerStatus for \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\": not found" Feb 13 19:04:13.060751 kubelet[3192]: E0213 19:04:13.060697 3192 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\": not found" containerID="7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f" Feb 13 19:04:13.060890 kubelet[3192]: I0213 19:04:13.060755 3192 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f"} err="failed to get container status \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a46dc543a17695f52a5e390e240d800c1c1f473fbe63360a29848679447392f\": not found" Feb 13 19:04:13.060967 kubelet[3192]: I0213 19:04:13.060889 3192 scope.go:117] "RemoveContainer" containerID="e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c" Feb 13 19:04:13.061322 containerd[1946]: 
time="2025-02-13T19:04:13.061228873Z" level=error msg="ContainerStatus for \"e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c\": not found" Feb 13 19:04:13.062082 kubelet[3192]: E0213 19:04:13.061675 3192 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c\": not found" containerID="e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c" Feb 13 19:04:13.062082 kubelet[3192]: I0213 19:04:13.061772 3192 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c"} err="failed to get container status \"e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e685215d91499365860e9adda3d9fb5b37b2b99bb311cd67031ff9ed48de2d4c\": not found" Feb 13 19:04:13.062082 kubelet[3192]: I0213 19:04:13.061808 3192 scope.go:117] "RemoveContainer" containerID="335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a" Feb 13 19:04:13.062941 containerd[1946]: time="2025-02-13T19:04:13.062885233Z" level=error msg="ContainerStatus for \"335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a\": not found" Feb 13 19:04:13.063918 kubelet[3192]: E0213 19:04:13.063638 3192 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a\": not found" containerID="335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a" Feb 13 19:04:13.063918 kubelet[3192]: I0213 19:04:13.063690 3192 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a"} err="failed to get container status \"335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a\": rpc error: code = NotFound desc = an error occurred when try to find container \"335a559277a29af30e822f093c064116ebb29cd4d2a3a68369912e6fa578f40a\": not found" Feb 13 19:04:13.063918 kubelet[3192]: I0213 19:04:13.063727 3192 scope.go:117] "RemoveContainer" containerID="f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3" Feb 13 19:04:13.065086 containerd[1946]: time="2025-02-13T19:04:13.064465633Z" level=error msg="ContainerStatus for \"f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3\": not found" Feb 13 19:04:13.065261 kubelet[3192]: E0213 19:04:13.064940 3192 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3\": not found" containerID="f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3" Feb 13 19:04:13.065261 kubelet[3192]: I0213 19:04:13.065233 3192 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3"} err="failed to get container status \"f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"f32aae0af410d972ff626418d02f9e28b13345500a23c62234538a13c2d474d3\": not found" Feb 13 19:04:13.065843 kubelet[3192]: I0213 19:04:13.065278 3192 scope.go:117] "RemoveContainer" containerID="342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab" Feb 13 19:04:13.065922 containerd[1946]: time="2025-02-13T19:04:13.065655637Z" level=error msg="ContainerStatus for \"342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab\": not found" Feb 13 19:04:13.065981 kubelet[3192]: E0213 19:04:13.065868 3192 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab\": not found" containerID="342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab" Feb 13 19:04:13.065981 kubelet[3192]: I0213 19:04:13.065906 3192 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab"} err="failed to get container status \"342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"342f4a8cdff3c205be06208a5fdd16217df11a5090987ec58cd91c12290e02ab\": not found" Feb 13 19:04:13.065981 kubelet[3192]: I0213 19:04:13.065937 3192 scope.go:117] "RemoveContainer" containerID="1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a" Feb 13 19:04:13.067765 containerd[1946]: time="2025-02-13T19:04:13.067712545Z" level=info msg="RemoveContainer for \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\"" Feb 13 19:04:13.073772 containerd[1946]: time="2025-02-13T19:04:13.073700257Z" level=info msg="RemoveContainer for \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\" returns successfully" Feb 13 19:04:13.074148 kubelet[3192]: I0213 19:04:13.074023 3192 scope.go:117] "RemoveContainer" containerID="1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a" Feb 13 19:04:13.074639 containerd[1946]: time="2025-02-13T19:04:13.074442841Z" level=error msg="ContainerStatus for \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\": not found" Feb 13 19:04:13.074960 kubelet[3192]: E0213 19:04:13.074853 3192 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\": not found" containerID="1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a" Feb 13 19:04:13.074960 kubelet[3192]: I0213 19:04:13.074899 3192 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a"} err="failed to 
get container status \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"1949886f09283be510da2ed8264e1be6092f18a27326cff6e4aac998b4c7fa7a\": not found" Feb 13 19:04:13.296512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c-rootfs.mount: Deactivated successfully. Feb 13 19:04:13.296912 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c-shm.mount: Deactivated successfully. Feb 13 19:04:13.297374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468-rootfs.mount: Deactivated successfully. Feb 13 19:04:13.297652 systemd[1]: var-lib-kubelet-pods-d2e242cf\x2d07e3\x2d4cfc\x2d8cfc\x2d9154b8e9bf95-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgfwtn.mount: Deactivated successfully. Feb 13 19:04:13.297907 systemd[1]: var-lib-kubelet-pods-3e4a1f35\x2de21f\x2d44d5\x2db89a\x2daa4d6c7db800-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmlg6t.mount: Deactivated successfully. Feb 13 19:04:13.298208 systemd[1]: var-lib-kubelet-pods-d2e242cf\x2d07e3\x2d4cfc\x2d8cfc\x2d9154b8e9bf95-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:04:13.298460 systemd[1]: var-lib-kubelet-pods-d2e242cf\x2d07e3\x2d4cfc\x2d8cfc\x2d9154b8e9bf95-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:04:13.485991 kubelet[3192]: I0213 19:04:13.485938 3192 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4a1f35-e21f-44d5-b89a-aa4d6c7db800" path="/var/lib/kubelet/pods/3e4a1f35-e21f-44d5-b89a-aa4d6c7db800/volumes" Feb 13 19:04:13.487007 kubelet[3192]: I0213 19:04:13.486954 3192 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" path="/var/lib/kubelet/pods/d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95/volumes" Feb 13 19:04:13.688357 kubelet[3192]: E0213 19:04:13.688145 3192 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:04:14.225091 sshd[5088]: Connection closed by 147.75.109.163 port 41342 Feb 13 19:04:14.225991 sshd-session[5086]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:14.233350 systemd[1]: sshd@25-172.31.26.138:22-147.75.109.163:41342.service: Deactivated successfully. Feb 13 19:04:14.239299 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:04:14.239802 systemd[1]: session-26.scope: Consumed 2.636s CPU time. Feb 13 19:04:14.240704 systemd-logind[1924]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:04:14.243142 systemd-logind[1924]: Removed session 26. Feb 13 19:04:14.263522 systemd[1]: Started sshd@26-172.31.26.138:22-147.75.109.163:33226.service - OpenSSH per-connection server daemon (147.75.109.163:33226). Feb 13 19:04:14.452077 sshd[5247]: Accepted publickey for core from 147.75.109.163 port 33226 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:04:14.454425 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:14.461729 systemd-logind[1924]: New session 27 of user core. 
Feb 13 19:04:14.471292 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:04:14.513199 ntpd[1917]: Deleting interface #11 lxc_health, fe80::2016:39ff:fe65:1f2c%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs Feb 13 19:04:14.513818 ntpd[1917]: 13 Feb 19:04:14 ntpd[1917]: Deleting interface #11 lxc_health, fe80::2016:39ff:fe65:1f2c%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs Feb 13 19:04:15.758290 sshd[5249]: Connection closed by 147.75.109.163 port 33226 Feb 13 19:04:15.758152 sshd-session[5247]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:15.763212 kubelet[3192]: I0213 19:04:15.762937 3192 memory_manager.go:355] "RemoveStaleState removing state" podUID="3e4a1f35-e21f-44d5-b89a-aa4d6c7db800" containerName="cilium-operator" Feb 13 19:04:15.763212 kubelet[3192]: I0213 19:04:15.763064 3192 memory_manager.go:355] "RemoveStaleState removing state" podUID="d2e242cf-07e3-4cfc-8cfc-9154b8e9bf95" containerName="cilium-agent" Feb 13 19:04:15.774736 systemd[1]: sshd@26-172.31.26.138:22-147.75.109.163:33226.service: Deactivated successfully. Feb 13 19:04:15.780671 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:04:15.782728 systemd[1]: session-27.scope: Consumed 1.078s CPU time. Feb 13 19:04:15.793380 kubelet[3192]: W0213 19:04:15.793258 3192 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-26-138" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-138' and this object Feb 13 19:04:15.793380 kubelet[3192]: E0213 19:04:15.793318 3192 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-26-138\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-138' and this object" logger="UnhandledError" Feb 13 19:04:15.793380 kubelet[3192]: I0213 19:04:15.793318 3192 status_manager.go:890] "Failed to get status for pod" podUID="b1586a59-6f08-419f-8632-372a4b894f60" pod="kube-system/cilium-sz5jg" err="pods \"cilium-sz5jg\" is forbidden: User \"system:node:ip-172-31-26-138\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-138' and this object" Feb 13 19:04:15.793914 kubelet[3192]: W0213 19:04:15.793580 3192 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-26-138" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-138' and this object Feb 13 19:04:15.793914 kubelet[3192]: E0213 19:04:15.793612 3192 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-26-138\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-138' and this object" logger="UnhandledError" Feb 13 19:04:15.796076 kubelet[3192]: W0213 19:04:15.795993 3192 reflector.go:569] object-"kube-system"/"cilium-config": 
failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-26-138" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-138' and this object Feb 13 19:04:15.796653 kubelet[3192]: E0213 19:04:15.796437 3192 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-26-138\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-138' and this object" logger="UnhandledError" Feb 13 19:04:15.796653 kubelet[3192]: W0213 19:04:15.795993 3192 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-26-138" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-26-138' and this object Feb 13 19:04:15.796653 kubelet[3192]: E0213 19:04:15.796520 3192 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-26-138\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-26-138' and this object" logger="UnhandledError" Feb 13 19:04:15.812619 systemd-logind[1924]: Session 27 logged out. Waiting for processes to exit. Feb 13 19:04:15.821941 systemd-logind[1924]: Removed session 27. Feb 13 19:04:15.825020 systemd[1]: Created slice kubepods-burstable-podb1586a59_6f08_419f_8632_372a4b894f60.slice - libcontainer container kubepods-burstable-podb1586a59_6f08_419f_8632_372a4b894f60.slice. Feb 13 19:04:15.838587 systemd[1]: Started sshd@27-172.31.26.138:22-147.75.109.163:33230.service - OpenSSH per-connection server daemon (147.75.109.163:33230). 
Feb 13 19:04:15.848096 kubelet[3192]: I0213 19:04:15.847657 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1586a59-6f08-419f-8632-372a4b894f60-etc-cni-netd\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848096 kubelet[3192]: I0213 19:04:15.847729 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1586a59-6f08-419f-8632-372a4b894f60-hubble-tls\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848096 kubelet[3192]: I0213 19:04:15.847768 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1586a59-6f08-419f-8632-372a4b894f60-cilium-run\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848096 kubelet[3192]: I0213 19:04:15.847805 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1586a59-6f08-419f-8632-372a4b894f60-cni-path\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848096 kubelet[3192]: I0213 19:04:15.847846 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b1586a59-6f08-419f-8632-372a4b894f60-cilium-ipsec-secrets\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848096 kubelet[3192]: I0213 19:04:15.847894 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv762\" (UniqueName: \"kubernetes.io/projected/b1586a59-6f08-419f-8632-372a4b894f60-kube-api-access-jv762\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848504 kubelet[3192]: I0213 19:04:15.847943 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1586a59-6f08-419f-8632-372a4b894f60-lib-modules\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848504 kubelet[3192]: I0213 19:04:15.847980 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1586a59-6f08-419f-8632-372a4b894f60-hostproc\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848504 kubelet[3192]: I0213 19:04:15.848061 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1586a59-6f08-419f-8632-372a4b894f60-bpf-maps\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848504 kubelet[3192]: I0213 19:04:15.848159 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1586a59-6f08-419f-8632-372a4b894f60-cilium-config-path\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848504 kubelet[3192]: I0213 19:04:15.848219 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1586a59-6f08-419f-8632-372a4b894f60-host-proc-sys-kernel\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848504 kubelet[3192]: I0213 19:04:15.848275 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1586a59-6f08-419f-8632-372a4b894f60-cilium-cgroup\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848802 kubelet[3192]: I0213 19:04:15.848321 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1586a59-6f08-419f-8632-372a4b894f60-xtables-lock\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848802 kubelet[3192]: I0213 19:04:15.848368 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1586a59-6f08-419f-8632-372a4b894f60-clustermesh-secrets\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:15.848802 kubelet[3192]: I0213 19:04:15.848421 3192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1586a59-6f08-419f-8632-372a4b894f60-host-proc-sys-net\") pod \"cilium-sz5jg\" (UID: \"b1586a59-6f08-419f-8632-372a4b894f60\") " pod="kube-system/cilium-sz5jg"
Feb 13 19:04:16.057019 sshd[5260]: Accepted publickey for core from 147.75.109.163 port 33230 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:04:16.058282 sshd-session[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:04:16.066866 systemd-logind[1924]: New session 28 of user core.
Feb 13 19:04:16.075336 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:04:16.194612 sshd[5263]: Connection closed by 147.75.109.163 port 33230
Feb 13 19:04:16.195472 sshd-session[5260]: pam_unix(sshd:session): session closed for user core
Feb 13 19:04:16.204910 kubelet[3192]: I0213 19:04:16.204384 3192 setters.go:602] "Node became not ready" node="ip-172-31-26-138" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:04:16Z","lastTransitionTime":"2025-02-13T19:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:04:16.205394 systemd[1]: sshd@27-172.31.26.138:22-147.75.109.163:33230.service: Deactivated successfully.
Feb 13 19:04:16.212415 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:04:16.217681 systemd-logind[1924]: Session 28 logged out. Waiting for processes to exit.
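The reconciler entries above enumerate the full volume set of the cilium-sz5jg pod: hostPath mounts (bpf-maps, cni-path, lib-modules, and so on), two Secrets (cilium-ipsec-secrets, clustermesh-secrets), one ConfigMap (cilium-config-path), and projected volumes (hubble-tls, kube-api-access-jv762). An illustrative Go sketch that prints the same set from the pod spec; it assumes a reachable cluster and a default kubeconfig, neither of which comes from this log:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// clientcmd.RecommendedHomeFile is ~/.kube/config; an assumption here.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "cilium-sz5jg", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, v := range pod.Spec.Volumes {
    		switch {
    		case v.HostPath != nil:
    			fmt.Printf("%-25s hostPath   %s\n", v.Name, v.HostPath.Path)
    		case v.Secret != nil:
    			fmt.Printf("%-25s secret     %s\n", v.Name, v.Secret.SecretName)
    		case v.ConfigMap != nil:
    			fmt.Printf("%-25s configMap  %s\n", v.Name, v.ConfigMap.Name)
    		case v.Projected != nil:
    			fmt.Printf("%-25s projected  (%d sources)\n", v.Name, len(v.Projected.Sources))
    		default:
    			fmt.Printf("%-25s other\n", v.Name)
    		}
    	}
    }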
Feb 13 19:04:16.246790 systemd[1]: Started sshd@28-172.31.26.138:22-147.75.109.163:33242.service - OpenSSH per-connection server daemon (147.75.109.163:33242).
Feb 13 19:04:16.248823 systemd-logind[1924]: Removed session 28.
Feb 13 19:04:16.448075 sshd[5269]: Accepted publickey for core from 147.75.109.163 port 33242 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:04:16.449979 sshd-session[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:04:16.457252 systemd-logind[1924]: New session 29 of user core.
Feb 13 19:04:16.464306 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 19:04:16.949316 kubelet[3192]: E0213 19:04:16.949255 3192 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Feb 13 19:04:16.949996 kubelet[3192]: E0213 19:04:16.949381 3192 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b1586a59-6f08-419f-8632-372a4b894f60-cilium-config-path podName:b1586a59-6f08-419f-8632-372a4b894f60 nodeName:}" failed. No retries permitted until 2025-02-13 19:04:17.449346516 +0000 UTC m=+114.363090959 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/b1586a59-6f08-419f-8632-372a4b894f60-cilium-config-path") pod "cilium-sz5jg" (UID: "b1586a59-6f08-419f-8632-372a4b894f60") : failed to sync configmap cache: timed out waiting for the condition
Feb 13 19:04:16.949996 kubelet[3192]: E0213 19:04:16.949267 3192 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Feb 13 19:04:16.949996 kubelet[3192]: E0213 19:04:16.949736 3192 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1586a59-6f08-419f-8632-372a4b894f60-cilium-ipsec-secrets podName:b1586a59-6f08-419f-8632-372a4b894f60 nodeName:}" failed. No retries permitted until 2025-02-13 19:04:17.449715852 +0000 UTC m=+114.363460295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/b1586a59-6f08-419f-8632-372a4b894f60-cilium-ipsec-secrets") pod "cilium-sz5jg" (UID: "b1586a59-6f08-419f-8632-372a4b894f60") : failed to sync secret cache: timed out waiting for the condition
Feb 13 19:04:17.647906 containerd[1946]: time="2025-02-13T19:04:17.647807599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sz5jg,Uid:b1586a59-6f08-419f-8632-372a4b894f60,Namespace:kube-system,Attempt:0,}"
Feb 13 19:04:17.691546 containerd[1946]: time="2025-02-13T19:04:17.691166168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:04:17.691546 containerd[1946]: time="2025-02-13T19:04:17.691281620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:04:17.691546 containerd[1946]: time="2025-02-13T19:04:17.691311968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:17.692819 containerd[1946]: time="2025-02-13T19:04:17.692695436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:17.734367 systemd[1]: Started cri-containerd-3eb928739c5ed674a0e946a7ca4234b3c272472c7325afc1daddab2dedd4e16a.scope - libcontainer container 3eb928739c5ed674a0e946a7ca4234b3c272472c7325afc1daddab2dedd4e16a.
Feb 13 19:04:17.775275 containerd[1946]: time="2025-02-13T19:04:17.775207688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sz5jg,Uid:b1586a59-6f08-419f-8632-372a4b894f60,Namespace:kube-system,Attempt:0,} returns sandbox id \"3eb928739c5ed674a0e946a7ca4234b3c272472c7325afc1daddab2dedd4e16a\""
Feb 13 19:04:17.784102 containerd[1946]: time="2025-02-13T19:04:17.783464732Z" level=info msg="CreateContainer within sandbox \"3eb928739c5ed674a0e946a7ca4234b3c272472c7325afc1daddab2dedd4e16a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:04:17.813210 containerd[1946]: time="2025-02-13T19:04:17.813140756Z" level=info msg="CreateContainer within sandbox \"3eb928739c5ed674a0e946a7ca4234b3c272472c7325afc1daddab2dedd4e16a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd2ebba9432c80ddee08582ba82a8c0a371c885266a29d7be11dc3988be11c93\""
Feb 13 19:04:17.814639 containerd[1946]: time="2025-02-13T19:04:17.814532792Z" level=info msg="StartContainer for \"cd2ebba9432c80ddee08582ba82a8c0a371c885266a29d7be11dc3988be11c93\""
Feb 13 19:04:17.855436 systemd[1]: Started cri-containerd-cd2ebba9432c80ddee08582ba82a8c0a371c885266a29d7be11dc3988be11c93.scope - libcontainer container cd2ebba9432c80ddee08582ba82a8c0a371c885266a29d7be11dc3988be11c93.
Feb 13 19:04:17.906398 containerd[1946]: time="2025-02-13T19:04:17.904646733Z" level=info msg="StartContainer for \"cd2ebba9432c80ddee08582ba82a8c0a371c885266a29d7be11dc3988be11c93\" returns successfully"
Feb 13 19:04:17.919662 systemd[1]: cri-containerd-cd2ebba9432c80ddee08582ba82a8c0a371c885266a29d7be11dc3988be11c93.scope: Deactivated successfully.
Feb 13 19:04:17.972979 containerd[1946]: time="2025-02-13T19:04:17.972889893Z" level=info msg="shim disconnected" id=cd2ebba9432c80ddee08582ba82a8c0a371c885266a29d7be11dc3988be11c93 namespace=k8s.io
Feb 13 19:04:17.972979 containerd[1946]: time="2025-02-13T19:04:17.972968877Z" level=warning msg="cleaning up after shim disconnected" id=cd2ebba9432c80ddee08582ba82a8c0a371c885266a29d7be11dc3988be11c93 namespace=k8s.io
Feb 13 19:04:17.972979 containerd[1946]: time="2025-02-13T19:04:17.972990441Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:18.015521 containerd[1946]: time="2025-02-13T19:04:18.015317669Z" level=info msg="CreateContainer within sandbox \"3eb928739c5ed674a0e946a7ca4234b3c272472c7325afc1daddab2dedd4e16a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:04:18.042740 containerd[1946]: time="2025-02-13T19:04:18.042588893Z" level=info msg="CreateContainer within sandbox \"3eb928739c5ed674a0e946a7ca4234b3c272472c7325afc1daddab2dedd4e16a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"016cf04786787e9fe27ac88cc70a551b6562a190c029cbcbbfed9dd7b83c6806\""
Feb 13 19:04:18.044723 containerd[1946]: time="2025-02-13T19:04:18.044587697Z" level=info msg="StartContainer for \"016cf04786787e9fe27ac88cc70a551b6562a190c029cbcbbfed9dd7b83c6806\""
Feb 13 19:04:18.086356 systemd[1]: Started cri-containerd-016cf04786787e9fe27ac88cc70a551b6562a190c029cbcbbfed9dd7b83c6806.scope - libcontainer container 016cf04786787e9fe27ac88cc70a551b6562a190c029cbcbbfed9dd7b83c6806.
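The two nestedpendingoperations errors at 19:04:16.949 above show kubelet's volume retry policy: after a MountVolume.SetUp failure, no retry is permitted before a delay ("durationBeforeRetry 500ms"), and the delay grows exponentially on repeated failures. A sketch of that retry shape using the apimachinery wait helpers; the doubling factor and step count are assumptions of this sketch, not values read from kubelet:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // mountVolume stands in for MountVolume.SetUp; it fails until caches sync.
    func mountVolume() error {
    	return errors.New("failed to sync configmap cache: timed out waiting for the condition")
    }

    func main() {
    	backoff := wait.Backoff{
    		Duration: 500 * time.Millisecond, // matches "durationBeforeRetry 500ms" above
    		Factor:   2.0,                    // doubling is an assumption in this sketch
    		Steps:    8,
    	}
    	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
    		if merrI := mountVolume(); merrI != nil {
    			fmt.Println("retrying after failure:", mErrString(mErrI(mountVolume())))
    			return false, nil // not done; back off and try again
    		}
    		return true, nil
    	})
    	fmt.Println("final result:", err)
    }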
Feb 13 19:04:18.134346 containerd[1946]: time="2025-02-13T19:04:18.134271558Z" level=info msg="StartContainer for \"016cf04786787e9fe27ac88cc70a551b6562a190c029cbcbbfed9dd7b83c6806\" returns successfully"
Feb 13 19:04:18.147732 systemd[1]: cri-containerd-016cf04786787e9fe27ac88cc70a551b6562a190c029cbcbbfed9dd7b83c6806.scope: Deactivated successfully.
Feb 13 19:04:18.192353 containerd[1946]: time="2025-02-13T19:04:18.191625690Z" level=info msg="shim disconnected" id=016cf04786787e9fe27ac88cc70a551b6562a190c029cbcbbfed9dd7b83c6806 namespace=k8s.io
Feb 13 19:04:18.192353 containerd[1946]: time="2025-02-13T19:04:18.191704206Z" level=warning msg="cleaning up after shim disconnected" id=016cf04786787e9fe27ac88cc70a551b6562a190c029cbcbbfed9dd7b83c6806 namespace=k8s.io
Feb 13 19:04:18.192353 containerd[1946]: time="2025-02-13T19:04:18.191725410Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:18.468890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435813372.mount: Deactivated successfully.
Feb 13 19:04:18.689121 kubelet[3192]: E0213 19:04:18.689067 3192 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:04:19.016722 containerd[1946]: time="2025-02-13T19:04:19.016655046Z" level=info msg="CreateContainer within sandbox \"3eb928739c5ed674a0e946a7ca4234b3c272472c7325afc1daddab2dedd4e16a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:04:19.049471 containerd[1946]: time="2025-02-13T19:04:19.049288086Z" level=info msg="CreateContainer within sandbox \"3eb928739c5ed674a0e946a7ca4234b3c272472c7325afc1daddab2dedd4e16a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"408f5e17018c85b8e035331081d7dc8d09cf547acede9378235759691d2e4f20\""
Feb 13 19:04:19.059064 containerd[1946]: time="2025-02-13T19:04:19.057582954Z" level=info msg="StartContainer for \"408f5e17018c85b8e035331081d7dc8d09cf547acede9378235759691d2e4f20\""
Feb 13 19:04:19.123357 systemd[1]: Started cri-containerd-408f5e17018c85b8e035331081d7dc8d09cf547acede9378235759691d2e4f20.scope - libcontainer container 408f5e17018c85b8e035331081d7dc8d09cf547acede9378235759691d2e4f20.
Feb 13 19:04:19.184541 containerd[1946]: time="2025-02-13T19:04:19.184483819Z" level=info msg="StartContainer for \"408f5e17018c85b8e035331081d7dc8d09cf547acede9378235759691d2e4f20\" returns successfully"
Feb 13 19:04:19.189359 systemd[1]: cri-containerd-408f5e17018c85b8e035331081d7dc8d09cf547acede9378235759691d2e4f20.scope: Deactivated successfully.
Feb 13 19:04:19.234988 containerd[1946]: time="2025-02-13T19:04:19.234659479Z" level=info msg="shim disconnected" id=408f5e17018c85b8e035331081d7dc8d09cf547acede9378235759691d2e4f20 namespace=k8s.io
Feb 13 19:04:19.234988 containerd[1946]: time="2025-02-13T19:04:19.234755995Z" level=warning msg="cleaning up after shim disconnected" id=408f5e17018c85b8e035331081d7dc8d09cf547acede9378235759691d2e4f20 namespace=k8s.io
Feb 13 19:04:19.234988 containerd[1946]: time="2025-02-13T19:04:19.234777007Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:19.468899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-408f5e17018c85b8e035331081d7dc8d09cf547acede9378235759691d2e4f20-rootfs.mount: Deactivated successfully.
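mount-cgroup, apply-sysctl-overwrites, and mount-bpf-fs above (and clean-cilium-state next) are Cilium's init containers. Each cri-containerd-*.scope unit starting and then reporting "Deactivated successfully" within a second, followed by "shim disconnected" and "cleaning up dead shim", is a normal zero-exit, not a crash. A minimal sketch with the containerd Go client to inspect what those scope units wrap; the socket path and the "k8s.io" namespace are the usual defaults on such a host, assumed here:

    package main

    import (
    	"context"
    	"fmt"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock") // assumed socket path
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// Kubernetes-managed containers live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	containers, err := client.Containers(ctx)
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range containers {
    		task, err := c.Task(ctx, nil)
    		if err != nil {
    			// Typical for completed init containers such as mount-cgroup:
    			// the container record remains, the task (and its shim) is gone.
    			fmt.Println(c.ID(), "no running task")
    			continue
    		}
    		status, err := task.Status(ctx)
    		if err != nil {
    			fmt.Println(c.ID(), "status error:", err)
    			continue
    		}
    		fmt.Println(c.ID(), status.Status)
    	}
    }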
Feb 13 19:04:20.026379 containerd[1946]: time="2025-02-13T19:04:20.026321803Z" level=info msg="CreateContainer within sandbox \"3eb928739c5ed674a0e946a7ca4234b3c272472c7325afc1daddab2dedd4e16a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:04:20.059490 containerd[1946]: time="2025-02-13T19:04:20.057289147Z" level=info msg="CreateContainer within sandbox \"3eb928739c5ed674a0e946a7ca4234b3c272472c7325afc1daddab2dedd4e16a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c2c13cf679d06d562b37da51ce627b25b2fe4c80e5bea6475f1cda9a4eb3428b\""
Feb 13 19:04:20.059490 containerd[1946]: time="2025-02-13T19:04:20.059179207Z" level=info msg="StartContainer for \"c2c13cf679d06d562b37da51ce627b25b2fe4c80e5bea6475f1cda9a4eb3428b\""
Feb 13 19:04:20.120356 systemd[1]: Started cri-containerd-c2c13cf679d06d562b37da51ce627b25b2fe4c80e5bea6475f1cda9a4eb3428b.scope - libcontainer container c2c13cf679d06d562b37da51ce627b25b2fe4c80e5bea6475f1cda9a4eb3428b.
Feb 13 19:04:20.163411 systemd[1]: cri-containerd-c2c13cf679d06d562b37da51ce627b25b2fe4c80e5bea6475f1cda9a4eb3428b.scope: Deactivated successfully.
Feb 13 19:04:20.169844 containerd[1946]: time="2025-02-13T19:04:20.169780304Z" level=info msg="StartContainer for \"c2c13cf679d06d562b37da51ce627b25b2fe4c80e5bea6475f1cda9a4eb3428b\" returns successfully"
Feb 13 19:04:20.217865 containerd[1946]: time="2025-02-13T19:04:20.217732100Z" level=info msg="shim disconnected" id=c2c13cf679d06d562b37da51ce627b25b2fe4c80e5bea6475f1cda9a4eb3428b namespace=k8s.io
Feb 13 19:04:20.217865 containerd[1946]: time="2025-02-13T19:04:20.217811972Z" level=warning msg="cleaning up after shim disconnected" id=c2c13cf679d06d562b37da51ce627b25b2fe4c80e5bea6475f1cda9a4eb3428b namespace=k8s.io
Feb 13 19:04:20.217865 containerd[1946]: time="2025-02-13T19:04:20.217834532Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:20.468961 systemd[1]: run-containerd-runc-k8s.io-c2c13cf679d06d562b37da51ce627b25b2fe4c80e5bea6475f1cda9a4eb3428b-runc.a3SK2H.mount: Deactivated successfully.
Feb 13 19:04:20.469171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2c13cf679d06d562b37da51ce627b25b2fe4c80e5bea6475f1cda9a4eb3428b-rootfs.mount: Deactivated successfully.
Feb 13 19:04:21.031019 containerd[1946]: time="2025-02-13T19:04:21.030514688Z" level=info msg="CreateContainer within sandbox \"3eb928739c5ed674a0e946a7ca4234b3c272472c7325afc1daddab2dedd4e16a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:04:21.062990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3690157746.mount: Deactivated successfully.
Feb 13 19:04:21.068918 containerd[1946]: time="2025-02-13T19:04:21.068734784Z" level=info msg="CreateContainer within sandbox \"3eb928739c5ed674a0e946a7ca4234b3c272472c7325afc1daddab2dedd4e16a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ec9cc1b4a38d95dc96eb9b990cf47c2c74aac199a46b5de548474d2a2326e796\""
Feb 13 19:04:21.070205 containerd[1946]: time="2025-02-13T19:04:21.069738992Z" level=info msg="StartContainer for \"ec9cc1b4a38d95dc96eb9b990cf47c2c74aac199a46b5de548474d2a2326e796\""
Feb 13 19:04:21.123350 systemd[1]: Started cri-containerd-ec9cc1b4a38d95dc96eb9b990cf47c2c74aac199a46b5de548474d2a2326e796.scope - libcontainer container ec9cc1b4a38d95dc96eb9b990cf47c2c74aac199a46b5de548474d2a2326e796.
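The transient units var-lib-containerd-tmpmounts-containerd\x2dmount3435813372.mount and var-lib-containerd-tmpmounts-containerd\x2dmount3690157746.mount above illustrate systemd's unit-name escaping: "/" in a mount path becomes "-", and a literal "-" in the path is escaped as \x2d. A toy Go decoder for the pattern; it handles only the \x2d case seen in this log, whereas real systemd escaping covers arbitrary bytes as \xNN:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // unitToPath converts a systemd mount unit name back to its path.
    func unitToPath(unit string) string {
    	name := strings.TrimSuffix(unit, ".mount")
    	// Order matters: map '-' separators to '/' first; the escape
    	// sequence `\x2d` contains no '-', so it survives this step.
    	path := strings.ReplaceAll(name, "-", "/")
    	// Then restore the escaped literal dashes.
    	path = strings.ReplaceAll(path, `\x2d`, "-")
    	return "/" + path
    }

    func main() {
    	fmt.Println(unitToPath(`var-lib-containerd-tmpmounts-containerd\x2dmount3690157746.mount`))
    	// Output: /var/lib/containerd/tmpmounts/containerd-mount3690157746
    }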
Feb 13 19:04:21.185453 containerd[1946]: time="2025-02-13T19:04:21.184870509Z" level=info msg="StartContainer for \"ec9cc1b4a38d95dc96eb9b990cf47c2c74aac199a46b5de548474d2a2326e796\" returns successfully"
Feb 13 19:04:21.985886 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 19:04:23.461922 containerd[1946]: time="2025-02-13T19:04:23.461817660Z" level=info msg="StopPodSandbox for \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\""
Feb 13 19:04:23.462938 containerd[1946]: time="2025-02-13T19:04:23.462107100Z" level=info msg="TearDown network for sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" successfully"
Feb 13 19:04:23.462938 containerd[1946]: time="2025-02-13T19:04:23.462185088Z" level=info msg="StopPodSandbox for \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" returns successfully"
Feb 13 19:04:23.463509 containerd[1946]: time="2025-02-13T19:04:23.463436616Z" level=info msg="RemovePodSandbox for \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\""
Feb 13 19:04:23.463594 containerd[1946]: time="2025-02-13T19:04:23.463519140Z" level=info msg="Forcibly stopping sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\""
Feb 13 19:04:23.463668 containerd[1946]: time="2025-02-13T19:04:23.463631208Z" level=info msg="TearDown network for sandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" successfully"
Feb 13 19:04:23.470701 containerd[1946]: time="2025-02-13T19:04:23.470604852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:04:23.471164 containerd[1946]: time="2025-02-13T19:04:23.470706780Z" level=info msg="RemovePodSandbox \"1d6095002ebb7377e8de6d3d0079f41e5ec0750c9f8849346c479495d1368d7c\" returns successfully"
Feb 13 19:04:23.471886 containerd[1946]: time="2025-02-13T19:04:23.471530940Z" level=info msg="StopPodSandbox for \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\""
Feb 13 19:04:23.471886 containerd[1946]: time="2025-02-13T19:04:23.471658260Z" level=info msg="TearDown network for sandbox \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\" successfully"
Feb 13 19:04:23.471886 containerd[1946]: time="2025-02-13T19:04:23.471681816Z" level=info msg="StopPodSandbox for \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\" returns successfully"
Feb 13 19:04:23.472264 containerd[1946]: time="2025-02-13T19:04:23.472187064Z" level=info msg="RemovePodSandbox for \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\""
Feb 13 19:04:23.472264 containerd[1946]: time="2025-02-13T19:04:23.472230084Z" level=info msg="Forcibly stopping sandbox \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\""
Feb 13 19:04:23.472373 containerd[1946]: time="2025-02-13T19:04:23.472319460Z" level=info msg="TearDown network for sandbox \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\" successfully"
Feb 13 19:04:23.478421 containerd[1946]: time="2025-02-13T19:04:23.478348632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:04:23.478580 containerd[1946]: time="2025-02-13T19:04:23.478441320Z" level=info msg="RemovePodSandbox \"6a864d7b2425eb1fa9ad5152621e926666f5f91f1a73dbbfb8b8928a1d5a9468\" returns successfully"
Feb 13 19:04:25.152989 systemd[1]: run-containerd-runc-k8s.io-ec9cc1b4a38d95dc96eb9b990cf47c2c74aac199a46b5de548474d2a2326e796-runc.UPqIO3.mount: Deactivated successfully.
Feb 13 19:04:26.230648 systemd-networkd[1814]: lxc_health: Link UP
Feb 13 19:04:26.248436 (udev-worker)[6105]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:04:26.248519 systemd-networkd[1814]: lxc_health: Gained carrier
Feb 13 19:04:27.370290 systemd-networkd[1814]: lxc_health: Gained IPv6LL
Feb 13 19:04:27.702515 kubelet[3192]: I0213 19:04:27.701095 3192 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sz5jg" podStartSLOduration=12.701072429 podStartE2EDuration="12.701072429s" podCreationTimestamp="2025-02-13 19:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:22.074800941 +0000 UTC m=+118.988545396" watchObservedRunningTime="2025-02-13 19:04:27.701072429 +0000 UTC m=+124.614816908"
Feb 13 19:04:29.513274 ntpd[1917]: Listen normally on 14 lxc_health [fe80::4c0a:4ff:fe29:e4a8%14]:123
Feb 13 19:04:29.513804 ntpd[1917]: 13 Feb 19:04:29 ntpd[1917]: Listen normally on 14 lxc_health [fe80::4c0a:4ff:fe29:e4a8%14]:123
Feb 13 19:04:29.825669 systemd[1]: run-containerd-runc-k8s.io-ec9cc1b4a38d95dc96eb9b990cf47c2c74aac199a46b5de548474d2a2326e796-runc.vlfnZE.mount: Deactivated successfully.
Feb 13 19:04:32.141182 systemd[1]: run-containerd-runc-k8s.io-ec9cc1b4a38d95dc96eb9b990cf47c2c74aac199a46b5de548474d2a2326e796-runc.LBEL0a.mount: Deactivated successfully.
Feb 13 19:04:32.273196 sshd[5271]: Connection closed by 147.75.109.163 port 33242
Feb 13 19:04:32.273709 sshd-session[5269]: pam_unix(sshd:session): session closed for user core
Feb 13 19:04:32.281525 systemd[1]: sshd@28-172.31.26.138:22-147.75.109.163:33242.service: Deactivated successfully.
Feb 13 19:04:32.287284 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:04:32.291780 systemd-logind[1924]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:04:32.298975 systemd-logind[1924]: Removed session 29.
Feb 13 19:04:47.721789 systemd[1]: cri-containerd-eaad26c32586ec665d6578768540ffa131a3e9afaa77adde1111c3ed20edefb3.scope: Deactivated successfully.
Feb 13 19:04:47.723512 systemd[1]: cri-containerd-eaad26c32586ec665d6578768540ffa131a3e9afaa77adde1111c3ed20edefb3.scope: Consumed 6.245s CPU time, 19.7M memory peak, 0B memory swap peak.
Feb 13 19:04:47.767804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eaad26c32586ec665d6578768540ffa131a3e9afaa77adde1111c3ed20edefb3-rootfs.mount: Deactivated successfully.
Feb 13 19:04:47.777516 containerd[1946]: time="2025-02-13T19:04:47.777019273Z" level=info msg="shim disconnected" id=eaad26c32586ec665d6578768540ffa131a3e9afaa77adde1111c3ed20edefb3 namespace=k8s.io
Feb 13 19:04:47.777516 containerd[1946]: time="2025-02-13T19:04:47.777248521Z" level=warning msg="cleaning up after shim disconnected" id=eaad26c32586ec665d6578768540ffa131a3e9afaa77adde1111c3ed20edefb3 namespace=k8s.io
Feb 13 19:04:47.777516 containerd[1946]: time="2025-02-13T19:04:47.777274741Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:48.120012 kubelet[3192]: I0213 19:04:48.119882 3192 scope.go:117] "RemoveContainer" containerID="eaad26c32586ec665d6578768540ffa131a3e9afaa77adde1111c3ed20edefb3"
Feb 13 19:04:48.123584 containerd[1946]: time="2025-02-13T19:04:48.123514115Z" level=info msg="CreateContainer within sandbox \"76cf6e767c32b1a4afdb63f8f756a175a2ea5084e85a4cea5e87204d027f3664\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 19:04:48.154753 containerd[1946]: time="2025-02-13T19:04:48.154541027Z" level=info msg="CreateContainer within sandbox \"76cf6e767c32b1a4afdb63f8f756a175a2ea5084e85a4cea5e87204d027f3664\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"75e1ff86d877842fb9bdf67499183a1c06b922435e2d5ff8dc977b8ba3952d28\""
Feb 13 19:04:48.155978 containerd[1946]: time="2025-02-13T19:04:48.155858675Z" level=info msg="StartContainer for \"75e1ff86d877842fb9bdf67499183a1c06b922435e2d5ff8dc977b8ba3952d28\""
Feb 13 19:04:48.212330 systemd[1]: Started cri-containerd-75e1ff86d877842fb9bdf67499183a1c06b922435e2d5ff8dc977b8ba3952d28.scope - libcontainer container 75e1ff86d877842fb9bdf67499183a1c06b922435e2d5ff8dc977b8ba3952d28.
Feb 13 19:04:48.279470 containerd[1946]: time="2025-02-13T19:04:48.278707715Z" level=info msg="StartContainer for \"75e1ff86d877842fb9bdf67499183a1c06b922435e2d5ff8dc977b8ba3952d28\" returns successfully"
Feb 13 19:04:51.291558 systemd[1]: cri-containerd-4194e08aa0a0ea72ec3bb66d05945673861c520033cbcfbbef007319393e2359.scope: Deactivated successfully.
Feb 13 19:04:51.293669 systemd[1]: cri-containerd-4194e08aa0a0ea72ec3bb66d05945673861c520033cbcfbbef007319393e2359.scope: Consumed 4.589s CPU time, 16.2M memory peak, 0B memory swap peak.
Feb 13 19:04:51.332737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4194e08aa0a0ea72ec3bb66d05945673861c520033cbcfbbef007319393e2359-rootfs.mount: Deactivated successfully.
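At 19:04:47 the kube-controller-manager container exits (after 6.245s CPU over its lifetime), kubelet logs "RemoveContainer" for the dead ID, and containerd recreates it in the same sandbox with Attempt:1; the kube-scheduler follows the identical pattern just below. The Attempt counter surfaces in the API as the container's restart count, which a sketch like this can read back (assumes a default kubeconfig, which is not taken from this log):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, pod := range pods.Items {
    		for _, st := range pod.Status.ContainerStatuses {
    			if st.RestartCount == 0 {
    				continue
    			}
    			reason := "unknown"
    			// LastTerminationState records why the previous attempt died.
    			if t := st.LastTerminationState.Terminated; t != nil {
    				reason = fmt.Sprintf("%s (exit %d)", t.Reason, t.ExitCode)
    			}
    			fmt.Printf("%s/%s restarts=%d last=%s\n", pod.Name, st.Name, st.RestartCount, reason)
    		}
    	}
    }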
Feb 13 19:04:51.350262 containerd[1946]: time="2025-02-13T19:04:51.350178255Z" level=info msg="shim disconnected" id=4194e08aa0a0ea72ec3bb66d05945673861c520033cbcfbbef007319393e2359 namespace=k8s.io
Feb 13 19:04:51.350262 containerd[1946]: time="2025-02-13T19:04:51.350257791Z" level=warning msg="cleaning up after shim disconnected" id=4194e08aa0a0ea72ec3bb66d05945673861c520033cbcfbbef007319393e2359 namespace=k8s.io
Feb 13 19:04:51.351370 containerd[1946]: time="2025-02-13T19:04:51.350278839Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:52.135945 kubelet[3192]: I0213 19:04:52.135891 3192 scope.go:117] "RemoveContainer" containerID="4194e08aa0a0ea72ec3bb66d05945673861c520033cbcfbbef007319393e2359"
Feb 13 19:04:52.139454 containerd[1946]: time="2025-02-13T19:04:52.139162335Z" level=info msg="CreateContainer within sandbox \"ea780ec821ac59eeff41e606ce1d59e519fbd678c0eed947a7826ed8d67ecfdd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 19:04:52.170485 containerd[1946]: time="2025-02-13T19:04:52.170408319Z" level=info msg="CreateContainer within sandbox \"ea780ec821ac59eeff41e606ce1d59e519fbd678c0eed947a7826ed8d67ecfdd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6f782a98f516e47a9da26e749c240e0016d80c6027c11b1cae8b2db09bed9efd\""
Feb 13 19:04:52.171766 containerd[1946]: time="2025-02-13T19:04:52.171200811Z" level=info msg="StartContainer for \"6f782a98f516e47a9da26e749c240e0016d80c6027c11b1cae8b2db09bed9efd\""
Feb 13 19:04:52.230366 systemd[1]: Started cri-containerd-6f782a98f516e47a9da26e749c240e0016d80c6027c11b1cae8b2db09bed9efd.scope - libcontainer container 6f782a98f516e47a9da26e749c240e0016d80c6027c11b1cae8b2db09bed9efd.
Feb 13 19:04:52.295599 containerd[1946]: time="2025-02-13T19:04:52.295189503Z" level=info msg="StartContainer for \"6f782a98f516e47a9da26e749c240e0016d80c6027c11b1cae8b2db09bed9efd\" returns successfully"
Feb 13 19:04:56.345382 kubelet[3192]: E0213 19:04:56.344992 3192 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-138?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:05:06.346183 kubelet[3192]: E0213 19:05:06.345818 3192 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-138?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
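The closing "Failed to update lease" errors are kubelet's node heartbeat: a Put to the ip-172-31-26-138 Lease in the kube-node-lease namespace timing out after 10s, consistent with the API-server pressure implied by the controller-manager and scheduler restarts above. If renewals keep failing past the node-monitor grace period, the node controller marks the node NotReady. A sketch that reads the same Lease object (node name taken from the log; kubeconfig path is an assumption):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.Background(), "ip-172-31-26-138", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if lease.Spec.HolderIdentity != nil {
    		fmt.Println("holder:", *lease.Spec.HolderIdentity)
    	}
    	if lease.Spec.RenewTime != nil {
    		// A stale renewTime here matches the failed Puts in the log above.
    		fmt.Println("last renew:", lease.Spec.RenewTime.Time)
    	}
    }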