Feb 13 15:16:41.262740 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 15:16:41.262786 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:16:41.262818 kernel: KASLR disabled due to lack of seed
Feb 13 15:16:41.262835 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:16:41.262873 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98
Feb 13 15:16:41.263923 kernel: secureboot: Secure boot disabled
Feb 13 15:16:41.263961 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:16:41.263978 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 15:16:41.263995 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 15:16:41.264016 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:16:41.264040 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 15:16:41.264056 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:16:41.264072 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 15:16:41.264087 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 15:16:41.264106 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 15:16:41.264127 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:16:41.264143 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 15:16:41.264160 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 15:16:41.264177 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 15:16:41.264194 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 15:16:41.264210 kernel: printk: bootconsole [uart0] enabled
Feb 13 15:16:41.264226 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:16:41.264243 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:16:41.264259 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 15:16:41.264275 kernel: Zone ranges:
Feb 13 15:16:41.264291 kernel:   DMA    [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:16:41.264312 kernel:   DMA32  empty
Feb 13 15:16:41.264328 kernel:   Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 15:16:41.264345 kernel: Movable zone start for each node
Feb 13 15:16:41.264361 kernel: Early memory node ranges
Feb 13 15:16:41.264377 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 15:16:41.264393 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 15:16:41.264410 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 15:16:41.264426 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 15:16:41.264442 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 15:16:41.264458 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 15:16:41.264474 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 15:16:41.264490 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 15:16:41.264510 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:16:41.264527 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 15:16:41.264550 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:16:41.264567 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 15:16:41.264585 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:16:41.264605 kernel: psci: Trusted OS migration not required
Feb 13 15:16:41.264623 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:16:41.264640 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:16:41.264658 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:16:41.264675 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:16:41.264692 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:16:41.264709 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:16:41.264726 kernel: CPU features: detected: Spectre-v2
Feb 13 15:16:41.264743 kernel: CPU features: detected: Spectre-v3a
Feb 13 15:16:41.264759 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:16:41.264776 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 15:16:41.264793 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 15:16:41.264814 kernel: alternatives: applying boot alternatives
Feb 13 15:16:41.264833 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:16:41.265908 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:16:41.265972 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:16:41.265993 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:16:41.266011 kernel: Fallback order for Node 0: 0
Feb 13 15:16:41.266028 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 15:16:41.266045 kernel: Policy zone: Normal
Feb 13 15:16:41.266062 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:16:41.266079 kernel: software IO TLB: area num 2.
Feb 13 15:16:41.266106 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 15:16:41.266125 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved)
Feb 13 15:16:41.266142 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:16:41.266160 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:16:41.266178 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:16:41.266195 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:16:41.266213 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:16:41.266230 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:16:41.266248 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:16:41.266265 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:16:41.266282 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:16:41.266303 kernel: GICv3: 96 SPIs implemented
Feb 13 15:16:41.266321 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:16:41.266337 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:16:41.266354 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 15:16:41.266371 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 15:16:41.266388 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 15:16:41.266405 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:16:41.266423 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:16:41.266440 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 15:16:41.266457 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 15:16:41.266474 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 15:16:41.266491 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:16:41.266512 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 15:16:41.266529 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 15:16:41.266547 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 15:16:41.266565 kernel: Console: colour dummy device 80x25
Feb 13 15:16:41.266584 kernel: printk: console [tty1] enabled
Feb 13 15:16:41.266601 kernel: ACPI: Core revision 20230628
Feb 13 15:16:41.266619 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 15:16:41.266637 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:16:41.266688 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:16:41.266725 kernel: landlock: Up and running.
Feb 13 15:16:41.266745 kernel: SELinux: Initializing.
Feb 13 15:16:41.266763 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:16:41.266781 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:16:41.266799 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:16:41.266818 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:16:41.267976 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:16:41.268030 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:16:41.268049 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 15:16:41.268077 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 15:16:41.268095 kernel: Remapping and enabling EFI services.
Feb 13 15:16:41.268113 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:16:41.268131 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:16:41.268149 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 15:16:41.268166 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 15:16:41.268184 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 15:16:41.268202 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:16:41.268219 kernel: SMP: Total of 2 processors activated.
Feb 13 15:16:41.268241 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:16:41.268259 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 15:16:41.268277 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:16:41.268306 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:16:41.268328 kernel: alternatives: applying system-wide alternatives
Feb 13 15:16:41.268346 kernel: devtmpfs: initialized
Feb 13 15:16:41.268365 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:16:41.268383 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:16:41.268401 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:16:41.268420 kernel: SMBIOS 3.0.0 present.
Feb 13 15:16:41.268442 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 15:16:41.268461 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:16:41.268480 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:16:41.268498 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:16:41.268517 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:16:41.268535 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:16:41.268553 kernel: audit: type=2000 audit(0.232:1): state=initialized audit_enabled=0 res=1
Feb 13 15:16:41.268575 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:16:41.268593 kernel: cpuidle: using governor menu
Feb 13 15:16:41.268612 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:16:41.268630 kernel: ASID allocator initialised with 65536 entries
Feb 13 15:16:41.268648 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:16:41.268666 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:16:41.268684 kernel: Modules: 17440 pages in range for non-PLT usage
Feb 13 15:16:41.268702 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:16:41.268721 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:16:41.268743 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:16:41.268762 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:16:41.268780 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:16:41.268799 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:16:41.268817 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:16:41.268835 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:16:41.269899 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:16:41.269931 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:16:41.269970 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:16:41.269998 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:16:41.270017 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:16:41.270036 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:16:41.270054 kernel: ACPI: Interpreter enabled
Feb 13 15:16:41.270072 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:16:41.270090 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:16:41.270109 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 15:16:41.270412 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:16:41.270624 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:16:41.270818 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:16:41.271036 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 15:16:41.272746 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 15:16:41.272788 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 15:16:41.272808 kernel: acpiphp: Slot [1] registered
Feb 13 15:16:41.272827 kernel: acpiphp: Slot [2] registered
Feb 13 15:16:41.272845 kernel: acpiphp: Slot [3] registered
Feb 13 15:16:41.272929 kernel: acpiphp: Slot [4] registered
Feb 13 15:16:41.272949 kernel: acpiphp: Slot [5] registered
Feb 13 15:16:41.272968 kernel: acpiphp: Slot [6] registered
Feb 13 15:16:41.272986 kernel: acpiphp: Slot [7] registered
Feb 13 15:16:41.273004 kernel: acpiphp: Slot [8] registered
Feb 13 15:16:41.273023 kernel: acpiphp: Slot [9] registered
Feb 13 15:16:41.273041 kernel: acpiphp: Slot [10] registered
Feb 13 15:16:41.273059 kernel: acpiphp: Slot [11] registered
Feb 13 15:16:41.273077 kernel: acpiphp: Slot [12] registered
Feb 13 15:16:41.273095 kernel: acpiphp: Slot [13] registered
Feb 13 15:16:41.273117 kernel: acpiphp: Slot [14] registered
Feb 13 15:16:41.273135 kernel: acpiphp: Slot [15] registered
Feb 13 15:16:41.273153 kernel: acpiphp: Slot [16] registered
Feb 13 15:16:41.273171 kernel: acpiphp: Slot [17] registered
Feb 13 15:16:41.273189 kernel: acpiphp: Slot [18] registered
Feb 13 15:16:41.273208 kernel: acpiphp: Slot [19] registered
Feb 13 15:16:41.273226 kernel: acpiphp: Slot [20] registered
Feb 13 15:16:41.273244 kernel: acpiphp: Slot [21] registered
Feb 13 15:16:41.273262 kernel: acpiphp: Slot [22] registered
Feb 13 15:16:41.273284 kernel: acpiphp: Slot [23] registered
Feb 13 15:16:41.273303 kernel: acpiphp: Slot [24] registered
Feb 13 15:16:41.273321 kernel: acpiphp: Slot [25] registered
Feb 13 15:16:41.273339 kernel: acpiphp: Slot [26] registered
Feb 13 15:16:41.273356 kernel: acpiphp: Slot [27] registered
Feb 13 15:16:41.273375 kernel: acpiphp: Slot [28] registered
Feb 13 15:16:41.273393 kernel: acpiphp: Slot [29] registered
Feb 13 15:16:41.273411 kernel: acpiphp: Slot [30] registered
Feb 13 15:16:41.273429 kernel: acpiphp: Slot [31] registered
Feb 13 15:16:41.273447 kernel: PCI host bridge to bus 0000:00
Feb 13 15:16:41.273661 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 15:16:41.273840 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:16:41.274079 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:16:41.274271 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 15:16:41.274530 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 15:16:41.274790 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 15:16:41.278187 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 15:16:41.278435 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:16:41.278644 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 15:16:41.282019 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:16:41.282454 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:16:41.282655 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 15:16:41.283993 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 15:16:41.284329 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 15:16:41.284534 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:16:41.284746 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 15:16:41.284989 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 15:16:41.285200 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 15:16:41.285401 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 15:16:41.285604 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 15:16:41.285807 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 15:16:41.288616 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:16:41.288831 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:16:41.288923 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:16:41.288950 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:16:41.288972 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:16:41.288991 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:16:41.289010 kernel: iommu: Default domain type: Translated
Feb 13 15:16:41.289043 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:16:41.289061 kernel: efivars: Registered efivars operations
Feb 13 15:16:41.289079 kernel: vgaarb: loaded
Feb 13 15:16:41.289098 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:16:41.289116 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:16:41.289134 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:16:41.289152 kernel: pnp: PnP ACPI init
Feb 13 15:16:41.289368 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 15:16:41.289401 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:16:41.289421 kernel: NET: Registered PF_INET protocol family
Feb 13 15:16:41.289441 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:16:41.289461 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:16:41.289482 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:16:41.289503 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:16:41.289522 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:16:41.289541 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:16:41.289561 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:16:41.289585 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:16:41.289604 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:16:41.289623 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:16:41.289642 kernel: kvm [1]: HYP mode not available
Feb 13 15:16:41.289660 kernel: Initialise system trusted keyrings
Feb 13 15:16:41.289699 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:16:41.289724 kernel: Key type asymmetric registered
Feb 13 15:16:41.289744 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:16:41.289763 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:16:41.289788 kernel: io scheduler mq-deadline registered
Feb 13 15:16:41.289807 kernel: io scheduler kyber registered
Feb 13 15:16:41.289826 kernel: io scheduler bfq registered
Feb 13 15:16:41.291922 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 15:16:41.291985 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:16:41.292005 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:16:41.292024 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 15:16:41.292043 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 15:16:41.292076 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:16:41.292096 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:16:41.292329 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 15:16:41.292372 kernel: printk: console [ttyS0] disabled
Feb 13 15:16:41.292419 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 15:16:41.292474 kernel: printk: console [ttyS0] enabled
Feb 13 15:16:41.292522 kernel: printk: bootconsole [uart0] disabled
Feb 13 15:16:41.292544 kernel: thunder_xcv, ver 1.0
Feb 13 15:16:41.292564 kernel: thunder_bgx, ver 1.0
Feb 13 15:16:41.292592 kernel: nicpf, ver 1.0
Feb 13 15:16:41.292614 kernel: nicvf, ver 1.0
Feb 13 15:16:41.292921 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:16:41.293140 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:16:40 UTC (1739459800)
Feb 13 15:16:41.293167 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:16:41.293186 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 15:16:41.293205 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:16:41.293223 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:16:41.293252 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:16:41.293271 kernel: Segment Routing with IPv6
Feb 13 15:16:41.293290 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:16:41.293308 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:16:41.293326 kernel: Key type dns_resolver registered
Feb 13 15:16:41.293345 kernel: registered taskstats version 1
Feb 13 15:16:41.293365 kernel: Loading compiled-in X.509 certificates
Feb 13 15:16:41.293383 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51'
Feb 13 15:16:41.293402 kernel: Key type .fscrypt registered
Feb 13 15:16:41.293425 kernel: Key type fscrypt-provisioning registered
Feb 13 15:16:41.293443 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:16:41.293461 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:16:41.293480 kernel: ima: No architecture policies found
Feb 13 15:16:41.293498 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:16:41.293516 kernel: clk: Disabling unused clocks
Feb 13 15:16:41.293535 kernel: Freeing unused kernel memory: 39680K
Feb 13 15:16:41.293553 kernel: Run /init as init process
Feb 13 15:16:41.293571 kernel:   with arguments:
Feb 13 15:16:41.293589 kernel:     /init
Feb 13 15:16:41.293611 kernel:   with environment:
Feb 13 15:16:41.293628 kernel:     HOME=/
Feb 13 15:16:41.293647 kernel:     TERM=linux
Feb 13 15:16:41.293664 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:16:41.293686 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:16:41.293710 systemd[1]: Detected virtualization amazon.
Feb 13 15:16:41.293730 systemd[1]: Detected architecture arm64.
Feb 13 15:16:41.293755 systemd[1]: Running in initrd.
Feb 13 15:16:41.293775 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:16:41.293794 systemd[1]: Hostname set to .
Feb 13 15:16:41.293815 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:16:41.293835 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:16:41.293872 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:16:41.293897 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:16:41.293920 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:16:41.293963 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:16:41.293986 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:16:41.294007 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:16:41.294031 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:16:41.294052 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:16:41.294073 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:16:41.294093 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:16:41.294120 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:16:41.294140 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:16:41.294160 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:16:41.294181 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:16:41.294201 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:16:41.294221 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:16:41.294241 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:16:41.294261 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:16:41.294281 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:16:41.294306 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:16:41.294326 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:16:41.294346 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:16:41.294366 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:16:41.294386 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:16:41.294407 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:16:41.294426 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:16:41.294446 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:16:41.294471 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:16:41.294492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:16:41.294512 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:16:41.294532 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:16:41.294598 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 15:16:41.294648 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:16:41.294671 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:16:41.294691 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:16:41.294717 systemd-journald[251]: Journal started
Feb 13 15:16:41.294757 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2b5656d78b9e9e1f7f45de4fd57932) is 8.0M, max 75.3M, 67.3M free.
Feb 13 15:16:41.297486 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:41.263977 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 15:16:41.310901 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:16:41.311016 kernel: Bridge firewalling registered
Feb 13 15:16:41.311960 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 15:16:41.315591 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:16:41.324991 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:16:41.335202 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:16:41.349483 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:16:41.357536 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:16:41.366213 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:16:41.395995 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:16:41.406453 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:16:41.423283 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:16:41.427700 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:16:41.435432 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:16:41.458304 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:16:41.469582 dracut-cmdline[285]: dracut-dracut-053
Feb 13 15:16:41.475757 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:16:41.547456 systemd-resolved[289]: Positive Trust Anchors:
Feb 13 15:16:41.547526 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:16:41.547588 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:16:41.647896 kernel: SCSI subsystem initialized
Feb 13 15:16:41.657891 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:16:41.668895 kernel: iscsi: registered transport (tcp)
Feb 13 15:16:41.691940 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:16:41.692020 kernel: QLogic iSCSI HBA Driver
Feb 13 15:16:41.777970 kernel: random: crng init done
Feb 13 15:16:41.778594 systemd-resolved[289]: Defaulting to hostname 'linux'.
Feb 13 15:16:41.783015 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:16:41.789783 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:16:41.821989 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:16:41.830248 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:16:41.875920 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:16:41.876037 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:16:41.877648 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:16:41.947940 kernel: raid6: neonx8 gen() 6710 MB/s
Feb 13 15:16:41.964945 kernel: raid6: neonx4 gen() 6518 MB/s
Feb 13 15:16:41.981918 kernel: raid6: neonx2 gen() 5433 MB/s
Feb 13 15:16:41.998906 kernel: raid6: neonx1 gen() 3931 MB/s
Feb 13 15:16:42.015902 kernel: raid6: int64x8 gen() 3799 MB/s
Feb 13 15:16:42.032906 kernel: raid6: int64x4 gen() 3706 MB/s
Feb 13 15:16:42.049900 kernel: raid6: int64x2 gen() 3600 MB/s
Feb 13 15:16:42.067695 kernel: raid6: int64x1 gen() 2768 MB/s
Feb 13 15:16:42.067766 kernel: raid6: using algorithm neonx8 gen() 6710 MB/s
Feb 13 15:16:42.085706 kernel: raid6: .... xor() 4902 MB/s, rmw enabled
Feb 13 15:16:42.085807 kernel: raid6: using neon recovery algorithm
Feb 13 15:16:42.094743 kernel: xor: measuring software checksum speed
Feb 13 15:16:42.094894 kernel: 8regs : 10940 MB/sec
Feb 13 15:16:42.095886 kernel: 32regs : 11465 MB/sec
Feb 13 15:16:42.097890 kernel: arm64_neon : 8728 MB/sec
Feb 13 15:16:42.097923 kernel: xor: using function: 32regs (11465 MB/sec)
Feb 13 15:16:42.189917 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:16:42.215024 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:16:42.227368 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:16:42.277441 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Feb 13 15:16:42.287959 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:16:42.299417 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:16:42.340341 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Feb 13 15:16:42.407794 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:16:42.423182 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:16:42.542922 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:16:42.558204 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:16:42.611389 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:16:42.619764 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:16:42.622720 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:16:42.626142 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:16:42.640704 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:16:42.687146 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:16:42.760543 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:16:42.760654 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 15:16:42.802134 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:16:42.802428 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:16:42.802665 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 15:16:42.802693 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:16:42.803029 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:82:ac:df:44:49
Feb 13 15:16:42.777538 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:16:42.777801 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:16:42.781175 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:16:42.783739 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:16:42.784068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:42.786743 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:16:42.826183 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:16:42.807295 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:16:42.834543 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:16:42.834618 kernel: GPT:9289727 != 16777215
Feb 13 15:16:42.834644 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:16:42.834668 kernel: GPT:9289727 != 16777215
Feb 13 15:16:42.834707 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:16:42.835572 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:16:42.840704 (udev-worker)[530]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:16:42.866132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:42.885274 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:16:42.929019 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:16:42.993959 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (520)
Feb 13 15:16:43.001906 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (525)
Feb 13 15:16:43.009523 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:16:43.134457 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:16:43.150894 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:16:43.153440 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:16:43.171716 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:16:43.188252 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:16:43.202102 disk-uuid[663]: Primary Header is updated.
Feb 13 15:16:43.202102 disk-uuid[663]: Secondary Entries is updated.
Feb 13 15:16:43.202102 disk-uuid[663]: Secondary Header is updated.
Feb 13 15:16:43.211913 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:16:43.236884 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:16:44.248948 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:16:44.250627 disk-uuid[664]: The operation has completed successfully.
Feb 13 15:16:44.460235 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:16:44.460474 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:16:44.532174 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:16:44.541890 sh[923]: Success
Feb 13 15:16:44.574126 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:16:44.691181 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:16:44.711109 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:16:44.716945 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:16:44.743415 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06
Feb 13 15:16:44.743490 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:44.745297 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:16:44.746645 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:16:44.747786 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:16:44.873888 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:16:44.908284 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:16:44.912663 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:16:44.923257 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:16:44.931274 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:16:44.950473 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:16:44.950561 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:44.950593 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:16:44.957004 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:16:44.977318 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:16:44.980174 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:16:45.004541 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:16:45.016392 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:16:45.152608 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:16:45.166324 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:16:45.228295 systemd-networkd[1115]: lo: Link UP
Feb 13 15:16:45.228312 systemd-networkd[1115]: lo: Gained carrier
Feb 13 15:16:45.233866 systemd-networkd[1115]: Enumeration completed
Feb 13 15:16:45.235586 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:16:45.239588 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:16:45.239596 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:16:45.239627 systemd[1]: Reached target network.target - Network.
Feb 13 15:16:45.250101 systemd-networkd[1115]: eth0: Link UP
Feb 13 15:16:45.250109 systemd-networkd[1115]: eth0: Gained carrier
Feb 13 15:16:45.250129 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:16:45.278977 systemd-networkd[1115]: eth0: DHCPv4 address 172.31.21.146/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:16:45.590221 ignition[1024]: Ignition 2.20.0
Feb 13 15:16:45.590253 ignition[1024]: Stage: fetch-offline
Feb 13 15:16:45.590767 ignition[1024]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:45.591981 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:45.595360 ignition[1024]: Ignition finished successfully
Feb 13 15:16:45.601029 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:16:45.621379 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:16:45.648614 ignition[1124]: Ignition 2.20.0
Feb 13 15:16:45.649457 ignition[1124]: Stage: fetch
Feb 13 15:16:45.650295 ignition[1124]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:45.650323 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:45.650504 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:45.667112 ignition[1124]: PUT result: OK
Feb 13 15:16:45.670973 ignition[1124]: parsed url from cmdline: ""
Feb 13 15:16:45.670992 ignition[1124]: no config URL provided
Feb 13 15:16:45.671010 ignition[1124]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:16:45.671041 ignition[1124]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:16:45.671081 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:45.673418 ignition[1124]: PUT result: OK
Feb 13 15:16:45.673589 ignition[1124]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:16:45.676002 ignition[1124]: GET result: OK
Feb 13 15:16:45.676134 ignition[1124]: parsing config with SHA512: fc389f30183c47996050e4af01151a51c4bb6d2ccd2212873095b689281d1a2a1d252c3ac75c41c5bf0bb84e21bd4e5a80fe135c32be3d9f1e6cd90d7c6d9379
Feb 13 15:16:45.693209 unknown[1124]: fetched base config from "system"
Feb 13 15:16:45.694573 ignition[1124]: fetch: fetch complete
Feb 13 15:16:45.693247 unknown[1124]: fetched base config from "system"
Feb 13 15:16:45.694599 ignition[1124]: fetch: fetch passed
Feb 13 15:16:45.693264 unknown[1124]: fetched user config from "aws"
Feb 13 15:16:45.694727 ignition[1124]: Ignition finished successfully
Feb 13 15:16:45.706295 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:16:45.716331 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:16:45.760352 ignition[1130]: Ignition 2.20.0
Feb 13 15:16:45.760387 ignition[1130]: Stage: kargs
Feb 13 15:16:45.762573 ignition[1130]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:45.762639 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:45.763980 ignition[1130]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:45.771142 ignition[1130]: PUT result: OK
Feb 13 15:16:45.776811 ignition[1130]: kargs: kargs passed
Feb 13 15:16:45.777155 ignition[1130]: Ignition finished successfully
Feb 13 15:16:45.782738 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:16:45.797352 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:16:45.834262 ignition[1136]: Ignition 2.20.0
Feb 13 15:16:45.834323 ignition[1136]: Stage: disks
Feb 13 15:16:45.835797 ignition[1136]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:45.835836 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:45.836201 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:45.838609 ignition[1136]: PUT result: OK
Feb 13 15:16:45.853254 ignition[1136]: disks: disks passed
Feb 13 15:16:45.853735 ignition[1136]: Ignition finished successfully
Feb 13 15:16:45.860743 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:16:45.864610 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:16:45.867174 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:16:45.871735 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:16:45.874095 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:16:45.877180 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:16:45.894194 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:16:45.953053 systemd-fsck[1144]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:16:45.961706 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:16:45.976179 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:16:46.072988 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none.
Feb 13 15:16:46.074760 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:16:46.079335 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:16:46.105300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:16:46.121963 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:16:46.125300 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:16:46.125452 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:16:46.125533 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:16:46.155963 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1163)
Feb 13 15:16:46.159500 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:16:46.159586 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:46.160824 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:16:46.162603 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:16:46.169907 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:16:46.172499 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:16:46.187934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:16:46.297135 systemd-networkd[1115]: eth0: Gained IPv6LL
Feb 13 15:16:46.773348 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:16:46.783653 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:16:46.810782 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:16:46.822129 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:16:47.205512 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:16:47.219099 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:16:47.230297 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:16:47.261903 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:16:47.259706 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:16:47.290967 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:16:47.312933 ignition[1279]: INFO : Ignition 2.20.0
Feb 13 15:16:47.312933 ignition[1279]: INFO : Stage: mount
Feb 13 15:16:47.316538 ignition[1279]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:47.316538 ignition[1279]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:47.316538 ignition[1279]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:47.324014 ignition[1279]: INFO : PUT result: OK
Feb 13 15:16:47.328667 ignition[1279]: INFO : mount: mount passed
Feb 13 15:16:47.330316 ignition[1279]: INFO : Ignition finished successfully
Feb 13 15:16:47.333949 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:16:47.342070 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:16:47.364455 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:16:47.399930 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1289)
Feb 13 15:16:47.404430 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:16:47.404508 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:47.404535 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:16:47.410919 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:16:47.415615 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:16:47.462373 ignition[1306]: INFO : Ignition 2.20.0
Feb 13 15:16:47.465249 ignition[1306]: INFO : Stage: files
Feb 13 15:16:47.465249 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:47.465249 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:47.465249 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:47.473503 ignition[1306]: INFO : PUT result: OK
Feb 13 15:16:47.478825 ignition[1306]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:16:47.494735 ignition[1306]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:16:47.494735 ignition[1306]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:16:47.536192 ignition[1306]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:16:47.539401 ignition[1306]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:16:47.542746 unknown[1306]: wrote ssh authorized keys file for user: core
Feb 13 15:16:47.546265 ignition[1306]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:16:47.549302 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:16:47.549302 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:16:47.648008 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:16:47.820887 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:16:47.824951 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:16:47.824951 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:16:48.296760 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:16:48.442416 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:16:48.442416 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:16:48.449405 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:16:48.449405 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:16:48.449405 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:16:48.449405 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:16:48.449405 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:16:48.449405 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:16:48.449405 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:16:48.449405 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:16:48.449405 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:16:48.449405 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:16:48.449405 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:16:48.449405 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:16:48.449405 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Feb 13 15:16:48.694214 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:16:50.125568 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:16:50.125568 ignition[1306]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 15:16:50.139718 ignition[1306]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:16:50.143541 ignition[1306]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:16:50.143541 ignition[1306]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:16:50.143541 ignition[1306]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:16:50.143541 ignition[1306]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:16:50.143541 ignition[1306]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:16:50.143541 ignition[1306]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:16:50.143541 ignition[1306]: INFO : files: files passed
Feb 13 15:16:50.143541 ignition[1306]: INFO : Ignition finished successfully
Feb 13 15:16:50.153586 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:16:50.176577 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:16:50.188848 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:16:50.207308 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:16:50.209077 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:16:50.246649 initrd-setup-root-after-ignition[1334]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:16:50.246649 initrd-setup-root-after-ignition[1334]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:16:50.257156 initrd-setup-root-after-ignition[1338]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:16:50.255707 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:16:50.264554 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:16:50.289645 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:16:50.350810 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:16:50.351563 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:16:50.361052 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:16:50.363333 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:16:50.366391 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:16:50.386454 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:16:50.423925 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:16:50.444532 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:16:50.470601 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:16:50.475387 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:16:50.480221 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:16:50.484064 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:16:50.484798 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:16:50.489324 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:16:50.492304 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:16:50.496064 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:16:50.498700 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:16:50.503457 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:16:50.507364 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:16:50.509816 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:16:50.512758 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:16:50.517261 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:16:50.521581 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:16:50.525359 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:16:50.525828 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:16:50.534772 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:16:50.538519 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:16:50.541081 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:16:50.547169 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:16:50.557305 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:16:50.557632 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:16:50.562923 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:16:50.563262 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:16:50.568092 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:16:50.569017 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:16:50.589956 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:16:50.634229 ignition[1358]: INFO : Ignition 2.20.0
Feb 13 15:16:50.634229 ignition[1358]: INFO : Stage: umount
Feb 13 15:16:50.634229 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:50.634229 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:50.634229 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:50.616217 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:16:50.652416 ignition[1358]: INFO : PUT result: OK
Feb 13 15:16:50.618147 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:16:50.618507 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:16:50.664212 ignition[1358]: INFO : umount: umount passed
Feb 13 15:16:50.664212 ignition[1358]: INFO : Ignition finished successfully
Feb 13 15:16:50.633443 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:16:50.633889 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:16:50.663463 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:16:50.667219 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:16:50.684020 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:16:50.684497 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:16:50.694576 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:16:50.694766 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:16:50.705366 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:16:50.705540 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:16:50.713289 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:16:50.713437 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:16:50.716234 systemd[1]: Stopped target network.target - Network.
Feb 13 15:16:50.719697 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:16:50.719943 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:16:50.730366 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:16:50.733765 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:16:50.737380 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:16:50.740255 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:16:50.742837 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:16:50.744912 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:16:50.745040 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:16:50.747080 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:16:50.747176 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:16:50.749393 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:16:50.749514 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:16:50.752068 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:16:50.752182 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:16:50.759600 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:16:50.762441 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:16:50.771493 systemd-networkd[1115]: eth0: DHCPv6 lease lost
Feb 13 15:16:50.781327 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:16:50.784225 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:16:50.784447 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:16:50.799558 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:16:50.800830 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:16:50.821949 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:16:50.822552 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:16:50.834945 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:16:50.835131 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:16:50.839354 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:16:50.839473 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:16:50.854048 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:16:50.858021 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:16:50.858156 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:16:50.866289 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:16:50.866752 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:16:50.876217 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:16:50.876327 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:16:50.879337 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:16:50.879489 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:16:50.882210 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:16:50.921532 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:16:50.924177 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:16:50.931492 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:16:50.931747 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:16:50.935395 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:16:50.935691 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:16:50.938739 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:16:50.938816 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:16:50.942212 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:16:50.942323 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:16:50.951379 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:16:50.951478 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:16:50.960465 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:16:50.960931 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:16:50.978256 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:16:50.984932 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:16:50.985073 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:16:50.987756 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:16:50.987847 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:16:51.001273 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:16:51.001390 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:16:51.004441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:16:51.004528 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:51.008071 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:16:51.008300 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:16:51.018262 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:16:51.051131 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:16:51.070114 systemd[1]: Switching root.
Feb 13 15:16:51.117387 systemd-journald[251]: Journal stopped
Feb 13 15:16:54.443927 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:16:54.446086 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:16:54.446206 kernel: SELinux: policy capability open_perms=1
Feb 13 15:16:54.446244 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:16:54.446281 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:16:54.446315 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:16:54.446348 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:16:54.446392 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:16:54.446428 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:16:54.446475 kernel: audit: type=1403 audit(1739459812.412:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:16:54.446520 systemd[1]: Successfully loaded SELinux policy in 83.172ms.
Feb 13 15:16:54.446576 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.203ms.
Feb 13 15:16:54.446616 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:16:54.446651 systemd[1]: Detected virtualization amazon.
Feb 13 15:16:54.446682 systemd[1]: Detected architecture arm64.
Feb 13 15:16:54.446721 systemd[1]: Detected first boot.
Feb 13 15:16:54.446754 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:16:54.446787 zram_generator::config[1401]: No configuration found.
Feb 13 15:16:54.446824 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:16:54.446931 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:16:54.447032 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:16:54.447079 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:16:54.447117 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:16:54.447154 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:16:54.447204 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:16:54.447249 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:16:54.447294 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:16:54.447328 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:16:54.447358 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:16:54.447394 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:16:54.447431 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:16:54.447465 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:16:54.447504 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:16:54.447540 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:16:54.447578 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:16:54.447615 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:16:54.447650 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:16:54.447681 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:16:54.447712 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:16:54.447745 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:16:54.447776 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:16:54.447813 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:16:54.447846 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:16:54.447946 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:16:54.447981 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:16:54.448013 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:16:54.448043 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:16:54.448072 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:16:54.448106 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:16:54.448145 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:16:54.448177 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:16:54.448210 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:16:54.448242 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:16:54.448274 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:16:54.448304 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:16:54.448335 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:16:54.448370 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:16:54.448406 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:16:54.448444 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:16:54.448476 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:16:54.448508 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:16:54.448539 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:16:54.448571 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:16:54.448600 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:16:54.448629 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:16:54.448658 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:16:54.448697 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:16:54.448731 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:16:54.448764 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:16:54.448797 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:16:54.448826 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:16:54.448897 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:16:54.448935 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:16:54.448971 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:16:54.449000 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:16:54.449041 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:16:54.449072 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:16:54.449104 kernel: fuse: init (API version 7.39)
Feb 13 15:16:54.449137 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:16:54.449172 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:16:54.449208 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:16:54.449243 systemd[1]: Stopped verity-setup.service.
Feb 13 15:16:54.449275 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:16:54.449306 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:16:54.449642 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:16:54.449708 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:16:54.449742 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:16:54.449775 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:16:54.449806 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:16:54.449896 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:16:54.449980 kernel: loop: module loaded
Feb 13 15:16:54.450025 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:16:54.450056 kernel: ACPI: bus type drm_connector registered
Feb 13 15:16:54.450088 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:16:54.450125 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:16:54.450162 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:16:54.450206 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:16:54.450251 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:16:54.450285 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:16:54.450315 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:16:54.450348 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:16:54.450385 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:16:54.450417 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:16:54.450455 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:16:54.450490 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:16:54.450523 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:16:54.450624 systemd-journald[1483]: Collecting audit messages is disabled.
Feb 13 15:16:54.450691 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:16:54.450723 systemd-journald[1483]: Journal started
Feb 13 15:16:54.450781 systemd-journald[1483]: Runtime Journal (/run/log/journal/ec2b5656d78b9e9e1f7f45de4fd57932) is 8.0M, max 75.3M, 67.3M free.
Feb 13 15:16:53.760918 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:16:53.825318 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 15:16:53.826173 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:16:54.464914 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:16:54.488026 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:16:54.497306 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:16:54.497436 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:16:54.506028 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:16:54.519161 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:16:54.534944 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:16:54.540071 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:16:54.554659 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:16:54.562943 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:16:54.576070 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:16:54.579948 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:16:54.589829 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:16:54.601074 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:16:54.617267 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:16:54.627121 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:16:54.631628 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:16:54.634259 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:16:54.637133 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:16:54.641017 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:16:54.675351 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:16:54.716568 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:16:54.728200 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:16:54.743303 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:16:54.754941 kernel: loop0: detected capacity change from 0 to 113536
Feb 13 15:16:54.831017 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:16:54.834433 systemd-tmpfiles[1513]: ACLs are not supported, ignoring.
Feb 13 15:16:54.834466 systemd-tmpfiles[1513]: ACLs are not supported, ignoring.
Feb 13 15:16:54.846014 systemd-journald[1483]: Time spent on flushing to /var/log/journal/ec2b5656d78b9e9e1f7f45de4fd57932 is 90.582ms for 919 entries.
Feb 13 15:16:54.846014 systemd-journald[1483]: System Journal (/var/log/journal/ec2b5656d78b9e9e1f7f45de4fd57932) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:16:54.948142 systemd-journald[1483]: Received client request to flush runtime journal.
Feb 13 15:16:54.948307 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:16:54.948346 kernel: loop1: detected capacity change from 0 to 116808
Feb 13 15:16:54.849184 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:16:54.853707 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:16:54.862982 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:16:54.868510 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:16:54.892367 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:16:54.897307 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:16:54.957723 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:16:54.969411 udevadm[1541]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:16:54.992949 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:16:55.003721 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:16:55.071527 systemd-tmpfiles[1553]: ACLs are not supported, ignoring.
Feb 13 15:16:55.071560 systemd-tmpfiles[1553]: ACLs are not supported, ignoring.
Feb 13 15:16:55.083472 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:16:55.090104 kernel: loop2: detected capacity change from 0 to 53784
Feb 13 15:16:55.218356 kernel: loop3: detected capacity change from 0 to 194512
Feb 13 15:16:55.254975 kernel: loop4: detected capacity change from 0 to 113536
Feb 13 15:16:55.271917 kernel: loop5: detected capacity change from 0 to 116808
Feb 13 15:16:55.291268 kernel: loop6: detected capacity change from 0 to 53784
Feb 13 15:16:55.302911 kernel: loop7: detected capacity change from 0 to 194512
Feb 13 15:16:55.323739 (sd-merge)[1559]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 15:16:55.326253 (sd-merge)[1559]: Merged extensions into '/usr'.
Feb 13 15:16:55.334729 systemd[1]: Reloading requested from client PID 1512 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:16:55.334931 systemd[1]: Reloading...
Feb 13 15:16:55.526418 zram_generator::config[1582]: No configuration found.
Feb 13 15:16:55.832635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:16:55.953342 systemd[1]: Reloading finished in 617 ms.
Feb 13 15:16:56.012036 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:16:56.025424 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:16:56.041784 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:16:56.075143 systemd[1]: Reloading requested from client PID 1636 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:16:56.075173 systemd[1]: Reloading...
Feb 13 15:16:56.096723 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:16:56.097462 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:16:56.099380 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:16:56.099981 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Feb 13 15:16:56.100125 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Feb 13 15:16:56.105657 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:16:56.105685 systemd-tmpfiles[1637]: Skipping /boot
Feb 13 15:16:56.148657 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:16:56.148690 systemd-tmpfiles[1637]: Skipping /boot
Feb 13 15:16:56.285071 zram_generator::config[1661]: No configuration found.
Feb 13 15:16:56.350616 ldconfig[1508]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:16:56.554805 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:16:56.661530 systemd[1]: Reloading finished in 585 ms.
Feb 13 15:16:56.689600 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:16:56.692381 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:16:56.700926 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:16:56.734597 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:16:56.743194 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:16:56.752457 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:16:56.769237 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:16:56.777310 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:16:56.787486 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:16:56.803106 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:16:56.814506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:16:56.836599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:16:56.848279 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:16:56.853252 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:16:56.857238 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:16:56.874667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:16:56.875135 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:16:56.888460 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:16:56.901924 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:16:56.908576 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:16:56.908976 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:16:56.912375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:16:56.912734 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:16:56.931155 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:16:56.931995 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:16:56.936598 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:16:56.964784 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:16:56.978194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:16:56.988843 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:16:57.005271 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:16:57.027172 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:16:57.029536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:16:57.030188 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:16:57.056682 systemd-udevd[1725]: Using default interface naming scheme 'v255'.
Feb 13 15:16:57.056938 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:16:57.059543 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:16:57.080030 augenrules[1761]: No rules
Feb 13 15:16:57.083072 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:16:57.083897 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:16:57.102528 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:16:57.105381 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:16:57.113647 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:16:57.114369 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:16:57.121406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:16:57.124049 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:16:57.129161 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:16:57.130200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:16:57.137599 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:16:57.139347 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:16:57.158633 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:16:57.163330 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:16:57.192624 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:16:57.195409 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:16:57.195832 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:16:57.414090 systemd-networkd[1775]: lo: Link UP Feb 13 15:16:57.414120 systemd-networkd[1775]: lo: Gained carrier Feb 13 15:16:57.415720 systemd-networkd[1775]: Enumeration completed Feb 13 15:16:57.415963 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:16:57.424186 (udev-worker)[1778]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:16:57.462200 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:16:57.465456 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:16:57.470710 systemd-resolved[1724]: Positive Trust Anchors: Feb 13 15:16:57.470786 systemd-resolved[1724]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:16:57.470849 systemd-resolved[1724]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:16:57.483384 systemd-resolved[1724]: Defaulting to hostname 'linux'.
Feb 13 15:16:57.486639 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:16:57.488961 systemd[1]: Reached target network.target - Network.
Feb 13 15:16:57.491118 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:16:57.519045 systemd-networkd[1775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:16:57.519964 systemd-networkd[1775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:16:57.523894 systemd-networkd[1775]: eth0: Link UP
Feb 13 15:16:57.524284 systemd-networkd[1775]: eth0: Gained carrier
Feb 13 15:16:57.524330 systemd-networkd[1775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:16:57.537092 systemd-networkd[1775]: eth0: DHCPv4 address 172.31.21.146/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:16:57.665026 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1793)
Feb 13 15:16:57.834890 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:16:57.941264 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:16:57.956323 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:16:57.959251 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:16:57.975247 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:16:58.018438 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:58.023725 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:16:58.027065 lvm[1899]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:16:58.075520 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:16:58.078648 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:16:58.080844 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:16:58.083574 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:16:58.086262 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:16:58.090566 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:16:58.093046 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:16:58.095472 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:16:58.097843 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:16:58.097961 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:16:58.099796 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:16:58.102672 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:16:58.108608 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:16:58.121203 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:16:58.127388 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:16:58.131719 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:16:58.135004 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:16:58.137120 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:16:58.139316 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:16:58.139371 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:16:58.151619 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:16:58.158658 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:16:58.172612 lvm[1908]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:16:58.172449 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:16:58.183303 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:16:58.190307 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:16:58.194069 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:16:58.198836 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:16:58.209409 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 15:16:58.225252 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:16:58.246074 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 15:16:58.254197 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:16:58.263216 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:16:58.277235 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:16:58.280992 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:16:58.283188 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:16:58.291309 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:16:58.300278 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:16:58.342172 jq[1912]: false
Feb 13 15:16:58.359634 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:16:58.360146 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:16:58.365042 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:16:58.431536 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:16:58.432826 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:16:58.439982 tar[1931]: linux-arm64/helm
Feb 13 15:16:58.479560 jq[1925]: true
Feb 13 15:16:58.504932 extend-filesystems[1913]: Found loop4
Feb 13 15:16:58.504932 extend-filesystems[1913]: Found loop5
Feb 13 15:16:58.504932 extend-filesystems[1913]: Found loop6
Feb 13 15:16:58.504932 extend-filesystems[1913]: Found loop7
Feb 13 15:16:58.504932 extend-filesystems[1913]: Found nvme0n1
Feb 13 15:16:58.504932 extend-filesystems[1913]: Found nvme0n1p1
Feb 13 15:16:58.504932 extend-filesystems[1913]: Found nvme0n1p2
Feb 13 15:16:58.504932 extend-filesystems[1913]: Found nvme0n1p3
Feb 13 15:16:58.504932 extend-filesystems[1913]: Found usr
Feb 13 15:16:58.504932 extend-filesystems[1913]: Found nvme0n1p4
Feb 13 15:16:58.504932 extend-filesystems[1913]: Found nvme0n1p6
Feb 13 15:16:58.504932 extend-filesystems[1913]: Found nvme0n1p7
Feb 13 15:16:58.504932 extend-filesystems[1913]: Found nvme0n1p9
Feb 13 15:16:58.494797 (ntainerd)[1932]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:16:58.621226 extend-filesystems[1913]: Checking size of /dev/nvme0n1p9
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:47 UTC 2025 (1): Starting
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: ----------------------------------------------------
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: corporation.
Support and training for ntp-4 are
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: available at https://www.nwtime.org/support
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: ----------------------------------------------------
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: proto: precision = 0.096 usec (-23)
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: basedate set to 2025-02-01
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: Listen normally on 3 eth0 172.31.21.146:123
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: Listen normally on 4 lo [::1]:123
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: bind(21) AF_INET6 fe80::482:acff:fedf:4449%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: unable to create socket on eth0 (5) for fe80::482:acff:fedf:4449%2#123
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: failed to init interface for address fe80::482:acff:fedf:4449%2
Feb 13 15:16:58.623435 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:16:58.506112 dbus-daemon[1911]: [system] SELinux support is enabled
Feb 13 15:16:58.645074 update_engine[1922]: I20250213 15:16:58.605383 1922 main.cc:92] Flatcar Update Engine starting
Feb 13 15:16:58.645074 update_engine[1922]: I20250213 15:16:58.642656 1922 update_check_scheduler.cc:74] Next update check in 2m0s
Feb 13 15:16:58.515824 systemd[1]: Started dbus.service -
D-Bus System Message Bus.
Feb 13 15:16:58.537293 dbus-daemon[1911]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1775 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 15:16:58.537163 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:16:58.554233 ntpd[1915]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:47 UTC 2025 (1): Starting
Feb 13 15:16:58.537235 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:16:58.554298 ntpd[1915]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:16:58.541115 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:16:58.677735 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:16:58.677735 ntpd[1915]: 13 Feb 15:16:58 ntpd[1915]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:16:58.554318 ntpd[1915]: ----------------------------------------------------
Feb 13 15:16:58.542402 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:16:58.554337 ntpd[1915]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:16:58.609256 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 15:16:58.554355 ntpd[1915]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:16:58.612704 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:16:58.554375 ntpd[1915]: corporation.
Support and training for ntp-4 are
Feb 13 15:16:58.613135 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:16:58.554393 ntpd[1915]: available at https://www.nwtime.org/support
Feb 13 15:16:58.631124 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:16:58.554413 ntpd[1915]: ----------------------------------------------------
Feb 13 15:16:58.648264 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:16:58.574361 ntpd[1915]: proto: precision = 0.096 usec (-23)
Feb 13 15:16:58.696208 extend-filesystems[1913]: Resized partition /dev/nvme0n1p9
Feb 13 15:16:58.673968 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 15:16:58.575790 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 15:16:58.704729 jq[1950]: true
Feb 13 15:16:58.585546 ntpd[1915]: basedate set to 2025-02-01
Feb 13 15:16:58.585588 ntpd[1915]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:16:58.601411 ntpd[1915]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:16:58.601510 ntpd[1915]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:16:58.618376 ntpd[1915]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:16:58.618455 ntpd[1915]: Listen normally on 3 eth0 172.31.21.146:123
Feb 13 15:16:58.618523 ntpd[1915]: Listen normally on 4 lo [::1]:123
Feb 13 15:16:58.618601 ntpd[1915]: bind(21) AF_INET6 fe80::482:acff:fedf:4449%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:16:58.618644 ntpd[1915]: unable to create socket on eth0 (5) for fe80::482:acff:fedf:4449%2#123
Feb 13 15:16:58.618671 ntpd[1915]: failed to init interface for address fe80::482:acff:fedf:4449%2
Feb 13 15:16:58.618723 ntpd[1915]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:16:58.670632 ntpd[1915]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:16:58.716136 extend-filesystems[1964]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:16:58.670692 ntpd[1915]: kernel
reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:16:58.738474 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 15:16:58.749238 coreos-metadata[1910]: Feb 13 15:16:58.749 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:16:58.752525 coreos-metadata[1910]: Feb 13 15:16:58.751 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 15:16:58.753885 coreos-metadata[1910]: Feb 13 15:16:58.753 INFO Fetch successful
Feb 13 15:16:58.753885 coreos-metadata[1910]: Feb 13 15:16:58.753 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 15:16:58.760406 coreos-metadata[1910]: Feb 13 15:16:58.759 INFO Fetch successful
Feb 13 15:16:58.760406 coreos-metadata[1910]: Feb 13 15:16:58.759 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 15:16:58.766174 coreos-metadata[1910]: Feb 13 15:16:58.766 INFO Fetch successful
Feb 13 15:16:58.766174 coreos-metadata[1910]: Feb 13 15:16:58.766 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 15:16:58.770064 coreos-metadata[1910]: Feb 13 15:16:58.769 INFO Fetch successful
Feb 13 15:16:58.770064 coreos-metadata[1910]: Feb 13 15:16:58.769 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 15:16:58.786757 coreos-metadata[1910]: Feb 13 15:16:58.773 INFO Fetch failed with 404: resource not found
Feb 13 15:16:58.786757 coreos-metadata[1910]: Feb 13 15:16:58.773 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 15:16:58.786757 coreos-metadata[1910]: Feb 13 15:16:58.782 INFO Fetch successful
Feb 13 15:16:58.786757 coreos-metadata[1910]: Feb 13 15:16:58.782 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 15:16:58.792094 coreos-metadata[1910]: Feb 13 15:16:58.790 INFO Fetch successful
Feb 13
15:16:58.792094 coreos-metadata[1910]: Feb 13 15:16:58.791 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 15:16:58.803937 coreos-metadata[1910]: Feb 13 15:16:58.798 INFO Fetch successful
Feb 13 15:16:58.803937 coreos-metadata[1910]: Feb 13 15:16:58.798 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 15:16:58.805620 coreos-metadata[1910]: Feb 13 15:16:58.805 INFO Fetch successful
Feb 13 15:16:58.806478 coreos-metadata[1910]: Feb 13 15:16:58.806 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 15:16:58.815523 coreos-metadata[1910]: Feb 13 15:16:58.810 INFO Fetch successful
Feb 13 15:16:58.855471 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 15:16:58.906511 systemd-logind[1921]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 15:16:58.913920 systemd-logind[1921]: Watching system buttons on /dev/input/event1 (Sleep Button)
Feb 13 15:16:58.914243 extend-filesystems[1964]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 15:16:58.914243 extend-filesystems[1964]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:16:58.914243 extend-filesystems[1964]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 15:16:58.981079 extend-filesystems[1913]: Resized filesystem in /dev/nvme0n1p9
Feb 13 15:16:58.916552 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:16:58.916905 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:16:58.932298 systemd-logind[1921]: New seat seat0.
Feb 13 15:16:58.996640 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:16:59.028917 locksmithd[1959]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:16:59.113516 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1791)
Feb 13 15:16:59.113689 bash[1996]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:16:59.109902 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:16:59.188091 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 15:16:59.197932 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:16:59.240476 systemd[1]: Starting sshkeys.service...
Feb 13 15:16:59.302586 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 15:16:59.304778 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 15:16:59.308607 dbus-daemon[1911]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1955 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 15:16:59.339654 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 15:16:59.352301 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 15:16:59.366228 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 15:16:59.417113 systemd-networkd[1775]: eth0: Gained IPv6LL
Feb 13 15:16:59.420948 containerd[1932]: time="2025-02-13T15:16:59.420729253Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:16:59.434527 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:16:59.438750 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:16:59.453621 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 15:16:59.474654 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:16:59.489068 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:16:59.507543 polkitd[2063]: Started polkitd version 121
Feb 13 15:16:59.546834 polkitd[2063]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 15:16:59.549086 polkitd[2063]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 15:16:59.565392 polkitd[2063]: Finished loading, compiling and executing 2 rules
Feb 13 15:16:59.585515 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 15:16:59.585836 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 15:16:59.588385 polkitd[2063]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 15:16:59.649185 systemd-hostnamed[1955]: Hostname set to (transient)
Feb 13 15:16:59.658035 systemd-resolved[1724]: System hostname changed to 'ip-172-31-21-146'.
Feb 13 15:16:59.678913 amazon-ssm-agent[2073]: Initializing new seelog logger
Feb 13 15:16:59.678913 amazon-ssm-agent[2073]: New Seelog Logger Creation Complete
Feb 13 15:16:59.678913 amazon-ssm-agent[2073]: 2025/02/13 15:16:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:59.678913 amazon-ssm-agent[2073]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:59.684239 amazon-ssm-agent[2073]: 2025/02/13 15:16:59 processing appconfig overrides
Feb 13 15:16:59.684239 amazon-ssm-agent[2073]: 2025-02-13 15:16:59 INFO Proxy environment variables:
Feb 13 15:16:59.684458 amazon-ssm-agent[2073]: 2025/02/13 15:16:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:59.684559 amazon-ssm-agent[2073]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:59.688170 amazon-ssm-agent[2073]: 2025/02/13 15:16:59 processing appconfig overrides
Feb 13 15:16:59.688170 amazon-ssm-agent[2073]: 2025/02/13 15:16:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:59.688170 amazon-ssm-agent[2073]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:59.688170 amazon-ssm-agent[2073]: 2025/02/13 15:16:59 processing appconfig overrides
Feb 13 15:16:59.706061 amazon-ssm-agent[2073]: 2025/02/13 15:16:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:59.706061 amazon-ssm-agent[2073]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:59.706061 amazon-ssm-agent[2073]: 2025/02/13 15:16:59 processing appconfig overrides
Feb 13 15:16:59.773984 containerd[1932]: time="2025-02-13T15:16:59.769431315Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:16:59.776831 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:16:59.788537 containerd[1932]: time="2025-02-13T15:16:59.787196259Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:16:59.788537 containerd[1932]: time="2025-02-13T15:16:59.787266003Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:16:59.788537 containerd[1932]: time="2025-02-13T15:16:59.787303551Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..."
type=io.containerd.internal.v1
Feb 13 15:16:59.788537 containerd[1932]: time="2025-02-13T15:16:59.787606479Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:16:59.788537 containerd[1932]: time="2025-02-13T15:16:59.787640403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:16:59.788537 containerd[1932]: time="2025-02-13T15:16:59.787760199Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:16:59.788537 containerd[1932]: time="2025-02-13T15:16:59.787789143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:16:59.791636 amazon-ssm-agent[2073]: 2025-02-13 15:16:59 INFO https_proxy:
Feb 13 15:16:59.793810 containerd[1932]: time="2025-02-13T15:16:59.793731231Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:16:59.797003 containerd[1932]: time="2025-02-13T15:16:59.796939275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:16:59.797219 containerd[1932]: time="2025-02-13T15:16:59.797184135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:16:59.797613 containerd[1932]: time="2025-02-13T15:16:59.797577123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..."
type=io.containerd.snapshotter.v1
Feb 13 15:16:59.800911 containerd[1932]: time="2025-02-13T15:16:59.798021807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:16:59.800911 containerd[1932]: time="2025-02-13T15:16:59.798597663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:16:59.816466 containerd[1932]: time="2025-02-13T15:16:59.815425443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:16:59.816466 containerd[1932]: time="2025-02-13T15:16:59.815561127Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:16:59.816466 containerd[1932]: time="2025-02-13T15:16:59.815999127Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:16:59.816466 containerd[1932]: time="2025-02-13T15:16:59.816135363Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:16:59.834370 containerd[1932]: time="2025-02-13T15:16:59.834237052Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:16:59.834625 containerd[1932]: time="2025-02-13T15:16:59.834581452Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:16:59.840463 containerd[1932]: time="2025-02-13T15:16:59.838227412Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:16:59.840463 containerd[1932]: time="2025-02-13T15:16:59.838293052Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..."
type=io.containerd.streaming.v1
Feb 13 15:16:59.840463 containerd[1932]: time="2025-02-13T15:16:59.838328116Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:16:59.840463 containerd[1932]: time="2025-02-13T15:16:59.838637044Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:16:59.844887 containerd[1932]: time="2025-02-13T15:16:59.842343028Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:16:59.848698 containerd[1932]: time="2025-02-13T15:16:59.848643604Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:16:59.848935 containerd[1932]: time="2025-02-13T15:16:59.848904616Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:16:59.849092 containerd[1932]: time="2025-02-13T15:16:59.849062272Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.849670192Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.849749620Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.849787168Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.849822196Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..."
type=io.containerd.service.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.849890632Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.849929728Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.849965176Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.849995572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.850054312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.850093408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.850150720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.850192624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.850229044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:16:59.850349 containerd[1932]: time="2025-02-13T15:16:59.850274908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:16:59.862721 containerd[1932]: time="2025-02-13T15:16:59.855077188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..."
type=io.containerd.grpc.v1 Feb 13 15:16:59.862721 containerd[1932]: time="2025-02-13T15:16:59.855159064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:16:59.862721 containerd[1932]: time="2025-02-13T15:16:59.855193816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:16:59.862721 containerd[1932]: time="2025-02-13T15:16:59.855231436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:16:59.862721 containerd[1932]: time="2025-02-13T15:16:59.855260692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:16:59.862721 containerd[1932]: time="2025-02-13T15:16:59.855304300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:16:59.862721 containerd[1932]: time="2025-02-13T15:16:59.855339424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:16:59.862721 containerd[1932]: time="2025-02-13T15:16:59.855372724Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:16:59.862721 containerd[1932]: time="2025-02-13T15:16:59.855437632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:16:59.862721 containerd[1932]: time="2025-02-13T15:16:59.855472444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:16:59.862721 containerd[1932]: time="2025-02-13T15:16:59.855499792Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:16:59.867886 containerd[1932]: time="2025-02-13T15:16:59.863775016Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 15:16:59.867886 containerd[1932]: time="2025-02-13T15:16:59.863980360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:16:59.867886 containerd[1932]: time="2025-02-13T15:16:59.864011188Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:16:59.867886 containerd[1932]: time="2025-02-13T15:16:59.864050584Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:16:59.867886 containerd[1932]: time="2025-02-13T15:16:59.864077020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:16:59.867886 containerd[1932]: time="2025-02-13T15:16:59.864111892Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:16:59.867886 containerd[1932]: time="2025-02-13T15:16:59.864140200Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:16:59.867886 containerd[1932]: time="2025-02-13T15:16:59.864175696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:16:59.870936 containerd[1932]: time="2025-02-13T15:16:59.868524892Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:16:59.873923 containerd[1932]: time="2025-02-13T15:16:59.872036692Z" level=info msg="Connect containerd service" Feb 13 15:16:59.873923 containerd[1932]: time="2025-02-13T15:16:59.872168716Z" level=info msg="using legacy CRI server" Feb 13 15:16:59.873923 containerd[1932]: time="2025-02-13T15:16:59.872188720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:16:59.873923 containerd[1932]: time="2025-02-13T15:16:59.872441812Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:16:59.880902 containerd[1932]: time="2025-02-13T15:16:59.877479940Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:16:59.885916 containerd[1932]: time="2025-02-13T15:16:59.883936888Z" level=info msg="Start subscribing containerd event" Feb 13 15:16:59.885916 containerd[1932]: time="2025-02-13T15:16:59.884027812Z" level=info msg="Start recovering state" Feb 13 15:16:59.885916 containerd[1932]: time="2025-02-13T15:16:59.884155528Z" level=info msg="Start event monitor" Feb 13 15:16:59.885916 containerd[1932]: time="2025-02-13T15:16:59.884178652Z" level=info msg="Start 
snapshots syncer" Feb 13 15:16:59.885916 containerd[1932]: time="2025-02-13T15:16:59.884200612Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:16:59.885916 containerd[1932]: time="2025-02-13T15:16:59.884218300Z" level=info msg="Start streaming server" Feb 13 15:16:59.886706 containerd[1932]: time="2025-02-13T15:16:59.886657612Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:16:59.892088 amazon-ssm-agent[2073]: 2025-02-13 15:16:59 INFO http_proxy: Feb 13 15:16:59.892225 coreos-metadata[2061]: Feb 13 15:16:59.892 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:16:59.896034 containerd[1932]: time="2025-02-13T15:16:59.892879900Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:16:59.896034 containerd[1932]: time="2025-02-13T15:16:59.893016484Z" level=info msg="containerd successfully booted in 0.479700s" Feb 13 15:16:59.893130 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:16:59.905890 coreos-metadata[2061]: Feb 13 15:16:59.904 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:16:59.908256 coreos-metadata[2061]: Feb 13 15:16:59.908 INFO Fetch successful Feb 13 15:16:59.908256 coreos-metadata[2061]: Feb 13 15:16:59.908 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:16:59.910303 coreos-metadata[2061]: Feb 13 15:16:59.910 INFO Fetch successful Feb 13 15:16:59.919462 unknown[2061]: wrote ssh authorized keys file for user: core Feb 13 15:16:59.991201 update-ssh-keys[2130]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:16:59.991883 amazon-ssm-agent[2073]: 2025-02-13 15:16:59 INFO no_proxy: Feb 13 15:16:59.993038 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:17:00.003557 systemd[1]: Finished sshkeys.service. 
Feb 13 15:17:00.095877 amazon-ssm-agent[2073]: 2025-02-13 15:16:59 INFO Checking if agent identity type OnPrem can be assumed Feb 13 15:17:00.190049 amazon-ssm-agent[2073]: 2025-02-13 15:16:59 INFO Checking if agent identity type EC2 can be assumed Feb 13 15:17:00.290787 amazon-ssm-agent[2073]: 2025-02-13 15:17:00 INFO Agent will take identity from EC2 Feb 13 15:17:00.392040 amazon-ssm-agent[2073]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:17:00.492822 amazon-ssm-agent[2073]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:17:00.592758 amazon-ssm-agent[2073]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:17:00.695210 amazon-ssm-agent[2073]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 15:17:00.787941 tar[1931]: linux-arm64/LICENSE Feb 13 15:17:00.787941 tar[1931]: linux-arm64/README.md Feb 13 15:17:00.800838 amazon-ssm-agent[2073]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 15:17:00.829931 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:17:00.899476 amazon-ssm-agent[2073]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 15:17:00.999750 amazon-ssm-agent[2073]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 15:17:01.067119 amazon-ssm-agent[2073]: 2025-02-13 15:17:00 INFO [Registrar] Starting registrar module Feb 13 15:17:01.069354 amazon-ssm-agent[2073]: 2025-02-13 15:17:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 15:17:01.069354 amazon-ssm-agent[2073]: 2025-02-13 15:17:01 INFO [EC2Identity] EC2 registration was successful. 
Feb 13 15:17:01.069354 amazon-ssm-agent[2073]: 2025-02-13 15:17:01 INFO [CredentialRefresher] credentialRefresher has started Feb 13 15:17:01.069354 amazon-ssm-agent[2073]: 2025-02-13 15:17:01 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 15:17:01.069354 amazon-ssm-agent[2073]: 2025-02-13 15:17:01 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 15:17:01.077231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:01.090727 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:01.100227 amazon-ssm-agent[2073]: 2025-02-13 15:17:01 INFO [CredentialRefresher] Next credential rotation will be in 30.141624027266666 minutes Feb 13 15:17:01.210016 sshd_keygen[1951]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:17:01.255761 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:17:01.270074 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:17:01.291306 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:17:01.293494 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:17:01.309648 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:17:01.347009 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:17:01.361550 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:17:01.370519 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:17:01.373364 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:17:01.375453 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:17:01.379146 systemd[1]: Startup finished in 1.218s (kernel) + 11.568s (initrd) + 9.046s (userspace) = 21.833s. 
Feb 13 15:17:01.555222 ntpd[1915]: Listen normally on 6 eth0 [fe80::482:acff:fedf:4449%2]:123 Feb 13 15:17:01.557182 ntpd[1915]: 13 Feb 15:17:01 ntpd[1915]: Listen normally on 6 eth0 [fe80::482:acff:fedf:4449%2]:123 Feb 13 15:17:01.928727 kubelet[2143]: E0213 15:17:01.928380 2143 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:01.933984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:01.934324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:01.934888 systemd[1]: kubelet.service: Consumed 1.347s CPU time. Feb 13 15:17:02.096272 amazon-ssm-agent[2073]: 2025-02-13 15:17:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 15:17:02.197188 amazon-ssm-agent[2073]: 2025-02-13 15:17:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2172) started Feb 13 15:17:02.297903 amazon-ssm-agent[2073]: 2025-02-13 15:17:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 15:17:05.952637 systemd-resolved[1724]: Clock change detected. Flushing caches. Feb 13 15:17:08.380792 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:17:08.389831 systemd[1]: Started sshd@0-172.31.21.146:22-139.178.68.195:53612.service - OpenSSH per-connection server daemon (139.178.68.195:53612). 
Feb 13 15:17:08.602264 sshd[2183]: Accepted publickey for core from 139.178.68.195 port 53612 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:08.607137 sshd-session[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:08.626237 systemd-logind[1921]: New session 1 of user core. Feb 13 15:17:08.628122 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:17:08.637779 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:17:08.668347 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:17:08.679171 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:17:08.700146 (systemd)[2187]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:17:08.914611 systemd[2187]: Queued start job for default target default.target. Feb 13 15:17:08.926757 systemd[2187]: Created slice app.slice - User Application Slice. Feb 13 15:17:08.926825 systemd[2187]: Reached target paths.target - Paths. Feb 13 15:17:08.926858 systemd[2187]: Reached target timers.target - Timers. Feb 13 15:17:08.930116 systemd[2187]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:17:08.964881 systemd[2187]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:17:08.965243 systemd[2187]: Reached target sockets.target - Sockets. Feb 13 15:17:08.965339 systemd[2187]: Reached target basic.target - Basic System. Feb 13 15:17:08.965447 systemd[2187]: Reached target default.target - Main User Target. Feb 13 15:17:08.965516 systemd[2187]: Startup finished in 253ms. Feb 13 15:17:08.965734 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:17:08.974613 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 15:17:09.137903 systemd[1]: Started sshd@1-172.31.21.146:22-139.178.68.195:53624.service - OpenSSH per-connection server daemon (139.178.68.195:53624). Feb 13 15:17:09.333927 sshd[2198]: Accepted publickey for core from 139.178.68.195 port 53624 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:09.336754 sshd-session[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:09.344672 systemd-logind[1921]: New session 2 of user core. Feb 13 15:17:09.353601 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:17:09.481418 sshd[2200]: Connection closed by 139.178.68.195 port 53624 Feb 13 15:17:09.482174 sshd-session[2198]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:09.491518 systemd[1]: sshd@1-172.31.21.146:22-139.178.68.195:53624.service: Deactivated successfully. Feb 13 15:17:09.496066 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:17:09.498191 systemd-logind[1921]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:17:09.500098 systemd-logind[1921]: Removed session 2. Feb 13 15:17:09.526974 systemd[1]: Started sshd@2-172.31.21.146:22-139.178.68.195:53638.service - OpenSSH per-connection server daemon (139.178.68.195:53638). Feb 13 15:17:09.710973 sshd[2205]: Accepted publickey for core from 139.178.68.195 port 53638 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:09.713499 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:09.721294 systemd-logind[1921]: New session 3 of user core. Feb 13 15:17:09.732562 systemd[1]: Started session-3.scope - Session 3 of User core. 
Feb 13 15:17:09.853450 sshd[2207]: Connection closed by 139.178.68.195 port 53638 Feb 13 15:17:09.854709 sshd-session[2205]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:09.862154 systemd[1]: sshd@2-172.31.21.146:22-139.178.68.195:53638.service: Deactivated successfully. Feb 13 15:17:09.866434 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:17:09.868260 systemd-logind[1921]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:17:09.870626 systemd-logind[1921]: Removed session 3. Feb 13 15:17:09.897877 systemd[1]: Started sshd@3-172.31.21.146:22-139.178.68.195:53654.service - OpenSSH per-connection server daemon (139.178.68.195:53654). Feb 13 15:17:10.091312 sshd[2212]: Accepted publickey for core from 139.178.68.195 port 53654 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:10.094924 sshd-session[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:10.104613 systemd-logind[1921]: New session 4 of user core. Feb 13 15:17:10.112538 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:17:10.243578 sshd[2214]: Connection closed by 139.178.68.195 port 53654 Feb 13 15:17:10.243452 sshd-session[2212]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:10.250834 systemd[1]: sshd@3-172.31.21.146:22-139.178.68.195:53654.service: Deactivated successfully. Feb 13 15:17:10.256008 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:17:10.257742 systemd-logind[1921]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:17:10.260017 systemd-logind[1921]: Removed session 4. Feb 13 15:17:10.289806 systemd[1]: Started sshd@4-172.31.21.146:22-139.178.68.195:53658.service - OpenSSH per-connection server daemon (139.178.68.195:53658). 
Feb 13 15:17:10.483648 sshd[2219]: Accepted publickey for core from 139.178.68.195 port 53658 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:10.486565 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:10.496408 systemd-logind[1921]: New session 5 of user core. Feb 13 15:17:10.501554 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:17:10.674776 sudo[2222]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:17:10.676029 sudo[2222]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:10.693232 sudo[2222]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:10.717396 sshd[2221]: Connection closed by 139.178.68.195 port 53658 Feb 13 15:17:10.718940 sshd-session[2219]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:10.731703 systemd[1]: sshd@4-172.31.21.146:22-139.178.68.195:53658.service: Deactivated successfully. Feb 13 15:17:10.732322 systemd-logind[1921]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:17:10.740027 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:17:10.755781 systemd-logind[1921]: Removed session 5. Feb 13 15:17:10.763073 systemd[1]: Started sshd@5-172.31.21.146:22-139.178.68.195:53660.service - OpenSSH per-connection server daemon (139.178.68.195:53660). Feb 13 15:17:10.949317 sshd[2227]: Accepted publickey for core from 139.178.68.195 port 53660 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:10.952206 sshd-session[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:10.962538 systemd-logind[1921]: New session 6 of user core. Feb 13 15:17:10.968601 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 15:17:11.072958 sudo[2231]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:17:11.074052 sudo[2231]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:11.082607 sudo[2231]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:11.094175 sudo[2230]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:17:11.094887 sudo[2230]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:11.121007 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:17:11.176305 augenrules[2253]: No rules Feb 13 15:17:11.179057 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:17:11.179881 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:17:11.183327 sudo[2230]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:11.206348 sshd[2229]: Connection closed by 139.178.68.195 port 53660 Feb 13 15:17:11.207136 sshd-session[2227]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:11.213007 systemd-logind[1921]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:17:11.214572 systemd[1]: sshd@5-172.31.21.146:22-139.178.68.195:53660.service: Deactivated successfully. Feb 13 15:17:11.220071 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:17:11.223496 systemd-logind[1921]: Removed session 6. Feb 13 15:17:11.246802 systemd[1]: Started sshd@6-172.31.21.146:22-139.178.68.195:53670.service - OpenSSH per-connection server daemon (139.178.68.195:53670). 
Feb 13 15:17:11.423887 sshd[2261]: Accepted publickey for core from 139.178.68.195 port 53670 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:11.426497 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:11.436457 systemd-logind[1921]: New session 7 of user core. Feb 13 15:17:11.442564 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:17:11.546468 sudo[2264]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:17:11.547162 sudo[2264]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:12.465286 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:17:12.473209 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:17:12.482016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:12.483692 (dockerd)[2283]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:17:13.154779 dockerd[2283]: time="2025-02-13T15:17:13.154451103Z" level=info msg="Starting up" Feb 13 15:17:13.449425 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1841160445-merged.mount: Deactivated successfully. Feb 13 15:17:13.599563 systemd[1]: var-lib-docker-metacopy\x2dcheck3281153791-merged.mount: Deactivated successfully. Feb 13 15:17:13.701185 dockerd[2283]: time="2025-02-13T15:17:13.700694934Z" level=info msg="Loading containers: start." Feb 13 15:17:13.723797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:17:13.738338 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:13.853917 kubelet[2311]: E0213 15:17:13.853725 2311 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:13.865349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:13.865684 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:14.023307 kernel: Initializing XFRM netlink socket Feb 13 15:17:14.073295 (udev-worker)[2319]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:17:14.183953 systemd-networkd[1775]: docker0: Link UP Feb 13 15:17:14.228005 dockerd[2283]: time="2025-02-13T15:17:14.227945801Z" level=info msg="Loading containers: done." Feb 13 15:17:14.261157 dockerd[2283]: time="2025-02-13T15:17:14.261076997Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:17:14.261402 dockerd[2283]: time="2025-02-13T15:17:14.261230549Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:17:14.261463 dockerd[2283]: time="2025-02-13T15:17:14.261437105Z" level=info msg="Daemon has completed initialization" Feb 13 15:17:14.347634 dockerd[2283]: time="2025-02-13T15:17:14.347351309Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:17:14.347504 systemd[1]: Started docker.service - Docker Application Container Engine. 
Feb 13 15:17:14.439063 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2763750848-merged.mount: Deactivated successfully. Feb 13 15:17:15.608395 containerd[1932]: time="2025-02-13T15:17:15.608247679Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:17:16.429200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3344219540.mount: Deactivated successfully. Feb 13 15:17:18.268326 containerd[1932]: time="2025-02-13T15:17:18.267788649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:18.270165 containerd[1932]: time="2025-02-13T15:17:18.270089637Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205861" Feb 13 15:17:18.271541 containerd[1932]: time="2025-02-13T15:17:18.271420209Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:18.279068 containerd[1932]: time="2025-02-13T15:17:18.278953185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:18.281446 containerd[1932]: time="2025-02-13T15:17:18.281380413Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 2.672994074s" Feb 13 15:17:18.281986 containerd[1932]: time="2025-02-13T15:17:18.281686977Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference 
\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\"" Feb 13 15:17:18.328343 containerd[1932]: time="2025-02-13T15:17:18.328236717Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:17:20.161465 containerd[1932]: time="2025-02-13T15:17:20.161348566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:20.163792 containerd[1932]: time="2025-02-13T15:17:20.163661578Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383091" Feb 13 15:17:20.165548 containerd[1932]: time="2025-02-13T15:17:20.165506014Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:20.171521 containerd[1932]: time="2025-02-13T15:17:20.171435814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:20.180312 containerd[1932]: time="2025-02-13T15:17:20.179581258Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 1.851104685s" Feb 13 15:17:20.180312 containerd[1932]: time="2025-02-13T15:17:20.179656858Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\"" Feb 13 15:17:20.229718 containerd[1932]: 
time="2025-02-13T15:17:20.229669774Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:17:21.542108 containerd[1932]: time="2025-02-13T15:17:21.542035585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:21.544144 containerd[1932]: time="2025-02-13T15:17:21.544073797Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15766980" Feb 13 15:17:21.545441 containerd[1932]: time="2025-02-13T15:17:21.545304841Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:21.551034 containerd[1932]: time="2025-02-13T15:17:21.550981021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:21.554180 containerd[1932]: time="2025-02-13T15:17:21.553456609Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 1.323534439s" Feb 13 15:17:21.554180 containerd[1932]: time="2025-02-13T15:17:21.553531897Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\"" Feb 13 15:17:21.598120 containerd[1932]: time="2025-02-13T15:17:21.598020337Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:17:23.011257 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2309282853.mount: Deactivated successfully. Feb 13 15:17:23.533599 containerd[1932]: time="2025-02-13T15:17:23.533516463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:23.534967 containerd[1932]: time="2025-02-13T15:17:23.534872151Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273375" Feb 13 15:17:23.537065 containerd[1932]: time="2025-02-13T15:17:23.536955507Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:23.541232 containerd[1932]: time="2025-02-13T15:17:23.541088931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:23.542699 containerd[1932]: time="2025-02-13T15:17:23.542463387Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.944349318s" Feb 13 15:17:23.542699 containerd[1932]: time="2025-02-13T15:17:23.542518791Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\"" Feb 13 15:17:23.585440 containerd[1932]: time="2025-02-13T15:17:23.585309147Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:17:24.116468 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 15:17:24.124940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:24.322647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707089439.mount: Deactivated successfully. Feb 13 15:17:24.760240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:24.774174 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:24.952982 kubelet[2597]: E0213 15:17:24.952902 2597 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:24.959559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:24.959932 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:17:25.857908 containerd[1932]: time="2025-02-13T15:17:25.857656518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:25.860264 containerd[1932]: time="2025-02-13T15:17:25.860132250Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 15:17:25.861610 containerd[1932]: time="2025-02-13T15:17:25.861523674Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:25.867827 containerd[1932]: time="2025-02-13T15:17:25.867738750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:25.871417 containerd[1932]: time="2025-02-13T15:17:25.871111434Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.285713823s" Feb 13 15:17:25.871417 containerd[1932]: time="2025-02-13T15:17:25.871203450Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:17:25.914912 containerd[1932]: time="2025-02-13T15:17:25.914854171Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:17:26.414112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3880123649.mount: Deactivated successfully. 
Feb 13 15:17:26.425377 containerd[1932]: time="2025-02-13T15:17:26.425051717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:26.427115 containerd[1932]: time="2025-02-13T15:17:26.427031993Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 15:17:26.429000 containerd[1932]: time="2025-02-13T15:17:26.428929133Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:26.435405 containerd[1932]: time="2025-02-13T15:17:26.435307493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:26.437382 containerd[1932]: time="2025-02-13T15:17:26.437146721Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 522.231638ms" Feb 13 15:17:26.437382 containerd[1932]: time="2025-02-13T15:17:26.437202389Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:17:26.483764 containerd[1932]: time="2025-02-13T15:17:26.483677597Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:17:27.215326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1602369305.mount: Deactivated successfully. Feb 13 15:17:30.057166 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 15:17:30.569185 containerd[1932]: time="2025-02-13T15:17:30.569098822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:30.571489 containerd[1932]: time="2025-02-13T15:17:30.571405834Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Feb 13 15:17:30.572878 containerd[1932]: time="2025-02-13T15:17:30.572778430Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:30.578924 containerd[1932]: time="2025-02-13T15:17:30.578866282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:30.582415 containerd[1932]: time="2025-02-13T15:17:30.582093226Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 4.098338013s" Feb 13 15:17:30.582415 containerd[1932]: time="2025-02-13T15:17:30.582173446Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Feb 13 15:17:34.968446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:17:34.977841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:38.415737 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:17:38.415933 systemd[1]: kubelet.service: Failed with result 'signal'. 
Feb 13 15:17:38.416435 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:38.434543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:38.484671 systemd[1]: Reloading requested from client PID 2767 ('systemctl') (unit session-7.scope)... Feb 13 15:17:38.484708 systemd[1]: Reloading... Feb 13 15:17:38.638444 zram_generator::config[2807]: No configuration found. Feb 13 15:17:38.936632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:17:39.106133 systemd[1]: Reloading finished in 620 ms. Feb 13 15:17:39.184707 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:17:39.184915 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:17:39.185389 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:39.203777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:39.937542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:39.938248 (kubelet)[2867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:17:40.028217 kubelet[2867]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:17:40.028217 kubelet[2867]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 15:17:40.028217 kubelet[2867]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:17:40.028912 kubelet[2867]: I0213 15:17:40.028341 2867 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:17:41.388348 kubelet[2867]: I0213 15:17:41.388200 2867 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:17:41.389237 kubelet[2867]: I0213 15:17:41.388266 2867 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:17:41.389237 kubelet[2867]: I0213 15:17:41.388847 2867 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:17:41.427003 kubelet[2867]: I0213 15:17:41.426565 2867 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:17:41.427846 kubelet[2867]: E0213 15:17:41.427758 2867 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.146:6443: connect: connection refused Feb 13 15:17:41.443496 kubelet[2867]: I0213 15:17:41.443426 2867 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:17:41.446264 kubelet[2867]: I0213 15:17:41.446197 2867 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:17:41.446666 kubelet[2867]: I0213 15:17:41.446613 2867 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:17:41.446874 kubelet[2867]: I0213 15:17:41.446674 2867 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:17:41.446874 kubelet[2867]: I0213 15:17:41.446699 2867 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:17:41.449780 kubelet[2867]: I0213 
15:17:41.449724 2867 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:17:41.454626 kubelet[2867]: I0213 15:17:41.454554 2867 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:17:41.454626 kubelet[2867]: I0213 15:17:41.454621 2867 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:17:41.456352 kubelet[2867]: I0213 15:17:41.454673 2867 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:17:41.456352 kubelet[2867]: I0213 15:17:41.454700 2867 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:17:41.458372 kubelet[2867]: W0213 15:17:41.458266 2867 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.21.146:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused Feb 13 15:17:41.458518 kubelet[2867]: E0213 15:17:41.458393 2867 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.146:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused Feb 13 15:17:41.459020 kubelet[2867]: W0213 15:17:41.458942 2867 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.21.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-146&limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused Feb 13 15:17:41.459170 kubelet[2867]: E0213 15:17:41.459025 2867 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-146&limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused Feb 13 15:17:41.459233 kubelet[2867]: I0213 15:17:41.459178 2867 
kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:17:41.460005 kubelet[2867]: I0213 15:17:41.459878 2867 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:17:41.461221 kubelet[2867]: W0213 15:17:41.461169 2867 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:17:41.463175 kubelet[2867]: I0213 15:17:41.463122 2867 server.go:1256] "Started kubelet" Feb 13 15:17:41.472616 kubelet[2867]: E0213 15:17:41.472567 2867 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.146:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.146:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-146.1823cd87278eaff8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-146,UID:ip-172-31-21-146,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-146,},FirstTimestamp:2025-02-13 15:17:41.463085048 +0000 UTC m=+1.516728813,LastTimestamp:2025-02-13 15:17:41.463085048 +0000 UTC m=+1.516728813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-146,}" Feb 13 15:17:41.472960 kubelet[2867]: I0213 15:17:41.472917 2867 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:17:41.475201 kubelet[2867]: I0213 15:17:41.475128 2867 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:17:41.477577 kubelet[2867]: I0213 15:17:41.477350 2867 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:17:41.478253 kubelet[2867]: I0213 15:17:41.477836 2867 server.go:233] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:17:41.479105 kubelet[2867]: I0213 15:17:41.472809 2867 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:17:41.489397 kubelet[2867]: I0213 15:17:41.488262 2867 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:17:41.489397 kubelet[2867]: I0213 15:17:41.488622 2867 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:17:41.489397 kubelet[2867]: I0213 15:17:41.488782 2867 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:17:41.490678 kubelet[2867]: W0213 15:17:41.490569 2867 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.21.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused Feb 13 15:17:41.491027 kubelet[2867]: E0213 15:17:41.490691 2867 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused Feb 13 15:17:41.492249 kubelet[2867]: E0213 15:17:41.492145 2867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-146?timeout=10s\": dial tcp 172.31.21.146:6443: connect: connection refused" interval="200ms" Feb 13 15:17:41.492530 kubelet[2867]: E0213 15:17:41.492478 2867 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:17:41.494451 kubelet[2867]: I0213 15:17:41.494417 2867 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:17:41.494856 kubelet[2867]: I0213 15:17:41.494815 2867 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:17:41.499353 kubelet[2867]: I0213 15:17:41.499232 2867 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:17:41.549817 kubelet[2867]: I0213 15:17:41.549117 2867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:17:41.549817 kubelet[2867]: I0213 15:17:41.549186 2867 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:17:41.549817 kubelet[2867]: I0213 15:17:41.549209 2867 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:17:41.549817 kubelet[2867]: I0213 15:17:41.549245 2867 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:17:41.552724 kubelet[2867]: I0213 15:17:41.552659 2867 policy_none.go:49] "None policy: Start" Feb 13 15:17:41.555516 kubelet[2867]: I0213 15:17:41.555420 2867 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:17:41.555516 kubelet[2867]: I0213 15:17:41.555509 2867 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:17:41.555758 kubelet[2867]: I0213 15:17:41.555583 2867 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:17:41.555964 kubelet[2867]: E0213 15:17:41.555913 2867 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:17:41.557640 kubelet[2867]: I0213 15:17:41.556370 2867 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:17:41.557640 kubelet[2867]: I0213 15:17:41.556470 2867 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:17:41.560487 kubelet[2867]: W0213 15:17:41.560413 2867 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.21.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused Feb 13 15:17:41.560956 kubelet[2867]: E0213 15:17:41.560858 2867 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused Feb 13 15:17:41.579158 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Feb 13 15:17:41.591529 kubelet[2867]: I0213 15:17:41.591426 2867 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-146" Feb 13 15:17:41.592474 kubelet[2867]: E0213 15:17:41.592433 2867 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.146:6443/api/v1/nodes\": dial tcp 172.31.21.146:6443: connect: connection refused" node="ip-172-31-21-146" Feb 13 15:17:41.598352 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:17:41.605076 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:17:41.618579 kubelet[2867]: I0213 15:17:41.618462 2867 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:17:41.620011 kubelet[2867]: I0213 15:17:41.619128 2867 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:17:41.622622 kubelet[2867]: E0213 15:17:41.622333 2867 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-146\" not found" Feb 13 15:17:41.656585 kubelet[2867]: I0213 15:17:41.656386 2867 topology_manager.go:215] "Topology Admit Handler" podUID="26682ad366e46a79bbd4e1734f3cda36" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-146" Feb 13 15:17:41.659173 kubelet[2867]: I0213 15:17:41.658926 2867 topology_manager.go:215] "Topology Admit Handler" podUID="74f5e6e852265ed5760eac11945246cd" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-146" Feb 13 15:17:41.662226 kubelet[2867]: I0213 15:17:41.661737 2867 topology_manager.go:215] "Topology Admit Handler" podUID="b4ede9213be7159f83ee812907a98f79" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-146" Feb 13 15:17:41.674758 systemd[1]: Created slice kubepods-burstable-pod26682ad366e46a79bbd4e1734f3cda36.slice - libcontainer 
container kubepods-burstable-pod26682ad366e46a79bbd4e1734f3cda36.slice. Feb 13 15:17:41.690173 kubelet[2867]: I0213 15:17:41.689309 2867 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74f5e6e852265ed5760eac11945246cd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-146\" (UID: \"74f5e6e852265ed5760eac11945246cd\") " pod="kube-system/kube-controller-manager-ip-172-31-21-146" Feb 13 15:17:41.690173 kubelet[2867]: I0213 15:17:41.689385 2867 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74f5e6e852265ed5760eac11945246cd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-146\" (UID: \"74f5e6e852265ed5760eac11945246cd\") " pod="kube-system/kube-controller-manager-ip-172-31-21-146" Feb 13 15:17:41.690173 kubelet[2867]: I0213 15:17:41.689438 2867 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74f5e6e852265ed5760eac11945246cd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-146\" (UID: \"74f5e6e852265ed5760eac11945246cd\") " pod="kube-system/kube-controller-manager-ip-172-31-21-146" Feb 13 15:17:41.690173 kubelet[2867]: I0213 15:17:41.689483 2867 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26682ad366e46a79bbd4e1734f3cda36-ca-certs\") pod \"kube-apiserver-ip-172-31-21-146\" (UID: \"26682ad366e46a79bbd4e1734f3cda36\") " pod="kube-system/kube-apiserver-ip-172-31-21-146" Feb 13 15:17:41.690173 kubelet[2867]: I0213 15:17:41.689533 2867 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/26682ad366e46a79bbd4e1734f3cda36-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-146\" (UID: \"26682ad366e46a79bbd4e1734f3cda36\") " pod="kube-system/kube-apiserver-ip-172-31-21-146" Feb 13 15:17:41.691615 kubelet[2867]: I0213 15:17:41.689583 2867 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26682ad366e46a79bbd4e1734f3cda36-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-146\" (UID: \"26682ad366e46a79bbd4e1734f3cda36\") " pod="kube-system/kube-apiserver-ip-172-31-21-146" Feb 13 15:17:41.691615 kubelet[2867]: I0213 15:17:41.689629 2867 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74f5e6e852265ed5760eac11945246cd-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-146\" (UID: \"74f5e6e852265ed5760eac11945246cd\") " pod="kube-system/kube-controller-manager-ip-172-31-21-146" Feb 13 15:17:41.691615 kubelet[2867]: I0213 15:17:41.689671 2867 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74f5e6e852265ed5760eac11945246cd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-146\" (UID: \"74f5e6e852265ed5760eac11945246cd\") " pod="kube-system/kube-controller-manager-ip-172-31-21-146" Feb 13 15:17:41.691615 kubelet[2867]: I0213 15:17:41.689728 2867 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4ede9213be7159f83ee812907a98f79-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-146\" (UID: \"b4ede9213be7159f83ee812907a98f79\") " pod="kube-system/kube-scheduler-ip-172-31-21-146" Feb 13 15:17:41.694064 kubelet[2867]: E0213 15:17:41.694026 2867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.21.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-146?timeout=10s\": dial tcp 172.31.21.146:6443: connect: connection refused" interval="400ms"
Feb 13 15:17:41.699732 systemd[1]: Created slice kubepods-burstable-pod74f5e6e852265ed5760eac11945246cd.slice - libcontainer container kubepods-burstable-pod74f5e6e852265ed5760eac11945246cd.slice.
Feb 13 15:17:41.720607 systemd[1]: Created slice kubepods-burstable-podb4ede9213be7159f83ee812907a98f79.slice - libcontainer container kubepods-burstable-podb4ede9213be7159f83ee812907a98f79.slice.
Feb 13 15:17:41.796055 kubelet[2867]: I0213 15:17:41.795983 2867 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-146"
Feb 13 15:17:41.796527 kubelet[2867]: E0213 15:17:41.796501 2867 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.146:6443/api/v1/nodes\": dial tcp 172.31.21.146:6443: connect: connection refused" node="ip-172-31-21-146"
Feb 13 15:17:41.931384 kubelet[2867]: E0213 15:17:41.931138 2867 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.146:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.146:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-146.1823cd87278eaff8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-146,UID:ip-172-31-21-146,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-146,},FirstTimestamp:2025-02-13 15:17:41.463085048 +0000 UTC m=+1.516728813,LastTimestamp:2025-02-13 15:17:41.463085048 +0000 UTC m=+1.516728813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-146,}"
Feb 13 15:17:41.991441 containerd[1932]: time="2025-02-13T15:17:41.991358303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-146,Uid:26682ad366e46a79bbd4e1734f3cda36,Namespace:kube-system,Attempt:0,}"
Feb 13 15:17:42.016469 containerd[1932]: time="2025-02-13T15:17:42.016248199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-146,Uid:74f5e6e852265ed5760eac11945246cd,Namespace:kube-system,Attempt:0,}"
Feb 13 15:17:42.027787 containerd[1932]: time="2025-02-13T15:17:42.027708247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-146,Uid:b4ede9213be7159f83ee812907a98f79,Namespace:kube-system,Attempt:0,}"
Feb 13 15:17:42.095823 kubelet[2867]: E0213 15:17:42.095776 2867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-146?timeout=10s\": dial tcp 172.31.21.146:6443: connect: connection refused" interval="800ms"
Feb 13 15:17:42.199732 kubelet[2867]: I0213 15:17:42.199591 2867 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-146"
Feb 13 15:17:42.200311 kubelet[2867]: E0213 15:17:42.200213 2867 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.146:6443/api/v1/nodes\": dial tcp 172.31.21.146:6443: connect: connection refused" node="ip-172-31-21-146"
Feb 13 15:17:42.320888 kubelet[2867]: W0213 15:17:42.320765 2867 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.21.146:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused
Feb 13 15:17:42.320888 kubelet[2867]: E0213 15:17:42.320856 2867 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.146:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused
Feb 13 15:17:42.491918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount482972962.mount: Deactivated successfully.
Feb 13 15:17:42.510408 containerd[1932]: time="2025-02-13T15:17:42.510195321Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Feb 13 15:17:42.513133 containerd[1932]: time="2025-02-13T15:17:42.513044313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:17:42.515540 containerd[1932]: time="2025-02-13T15:17:42.514759821Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:17:42.515540 containerd[1932]: time="2025-02-13T15:17:42.515253381Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:17:42.517959 containerd[1932]: time="2025-02-13T15:17:42.517904877Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:17:42.521966 containerd[1932]: time="2025-02-13T15:17:42.521856573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:17:42.522137 containerd[1932]: time="2025-02-13T15:17:42.522062625Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:17:42.524132 containerd[1932]: time="2025-02-13T15:17:42.523988865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:17:42.527615 containerd[1932]: time="2025-02-13T15:17:42.527133837Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.64711ms"
Feb 13 15:17:42.530918 containerd[1932]: time="2025-02-13T15:17:42.530841573Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 514.444862ms"
Feb 13 15:17:42.591429 containerd[1932]: time="2025-02-13T15:17:42.591362097Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.529026ms"
Feb 13 15:17:42.789823 kubelet[2867]: W0213 15:17:42.789055 2867 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.21.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused
Feb 13 15:17:42.789823 kubelet[2867]: E0213 15:17:42.789125 2867 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused
Feb 13 15:17:42.860800 containerd[1932]: time="2025-02-13T15:17:42.860569967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:17:42.860800 containerd[1932]: time="2025-02-13T15:17:42.860699771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:17:42.860800 containerd[1932]: time="2025-02-13T15:17:42.860737967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:42.861704 containerd[1932]: time="2025-02-13T15:17:42.861404039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:42.866998 containerd[1932]: time="2025-02-13T15:17:42.864401075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:17:42.867712 containerd[1932]: time="2025-02-13T15:17:42.866934707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:17:42.867712 containerd[1932]: time="2025-02-13T15:17:42.867113855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:42.867712 containerd[1932]: time="2025-02-13T15:17:42.867530495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:42.873487 containerd[1932]: time="2025-02-13T15:17:42.872531687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:17:42.873487 containerd[1932]: time="2025-02-13T15:17:42.872637971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:17:42.873487 containerd[1932]: time="2025-02-13T15:17:42.872668571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:42.873487 containerd[1932]: time="2025-02-13T15:17:42.872856023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:42.898488 kubelet[2867]: E0213 15:17:42.898264 2867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-146?timeout=10s\": dial tcp 172.31.21.146:6443: connect: connection refused" interval="1.6s"
Feb 13 15:17:42.926709 systemd[1]: Started cri-containerd-57dcf4ac03bfe65bd3b230f685d2326132b801abfb1db029a38df7b753411f0c.scope - libcontainer container 57dcf4ac03bfe65bd3b230f685d2326132b801abfb1db029a38df7b753411f0c.
Feb 13 15:17:42.943105 systemd[1]: Started cri-containerd-e11facb1fbe698e9c32161ffd1a1e51544eaa253a7c98b9d37fe06d1fdd1e824.scope - libcontainer container e11facb1fbe698e9c32161ffd1a1e51544eaa253a7c98b9d37fe06d1fdd1e824.
Feb 13 15:17:42.955654 systemd[1]: Started cri-containerd-f63b8dcb7b81233e7598690b8f2e6e0df51a3a7d18df49b308b32a7a6aa0be9e.scope - libcontainer container f63b8dcb7b81233e7598690b8f2e6e0df51a3a7d18df49b308b32a7a6aa0be9e.
Feb 13 15:17:43.007497 kubelet[2867]: I0213 15:17:43.005791 2867 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-146"
Feb 13 15:17:43.007497 kubelet[2867]: E0213 15:17:43.006458 2867 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.146:6443/api/v1/nodes\": dial tcp 172.31.21.146:6443: connect: connection refused" node="ip-172-31-21-146"
Feb 13 15:17:43.018593 kubelet[2867]: W0213 15:17:43.018475 2867 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.21.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-146&limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused
Feb 13 15:17:43.018593 kubelet[2867]: E0213 15:17:43.018601 2867 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-146&limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused
Feb 13 15:17:43.082773 containerd[1932]: time="2025-02-13T15:17:43.079944536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-146,Uid:26682ad366e46a79bbd4e1734f3cda36,Namespace:kube-system,Attempt:0,} returns sandbox id \"e11facb1fbe698e9c32161ffd1a1e51544eaa253a7c98b9d37fe06d1fdd1e824\""
Feb 13 15:17:43.087697 kubelet[2867]: W0213 15:17:43.087562 2867 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.21.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused
Feb 13 15:17:43.088381 kubelet[2867]: E0213 15:17:43.088338 2867 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.146:6443: connect: connection refused
Feb 13 15:17:43.096773 containerd[1932]: time="2025-02-13T15:17:43.096703628Z" level=info msg="CreateContainer within sandbox \"e11facb1fbe698e9c32161ffd1a1e51544eaa253a7c98b9d37fe06d1fdd1e824\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:17:43.100457 containerd[1932]: time="2025-02-13T15:17:43.100328480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-146,Uid:74f5e6e852265ed5760eac11945246cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"57dcf4ac03bfe65bd3b230f685d2326132b801abfb1db029a38df7b753411f0c\""
Feb 13 15:17:43.107618 containerd[1932]: time="2025-02-13T15:17:43.107506220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-146,Uid:b4ede9213be7159f83ee812907a98f79,Namespace:kube-system,Attempt:0,} returns sandbox id \"f63b8dcb7b81233e7598690b8f2e6e0df51a3a7d18df49b308b32a7a6aa0be9e\""
Feb 13 15:17:43.126918 containerd[1932]: time="2025-02-13T15:17:43.126645932Z" level=info msg="CreateContainer within sandbox \"57dcf4ac03bfe65bd3b230f685d2326132b801abfb1db029a38df7b753411f0c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:17:43.135882 containerd[1932]: time="2025-02-13T15:17:43.135718688Z" level=info msg="CreateContainer within sandbox \"f63b8dcb7b81233e7598690b8f2e6e0df51a3a7d18df49b308b32a7a6aa0be9e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:17:43.156752 containerd[1932]: time="2025-02-13T15:17:43.156574100Z" level=info msg="CreateContainer within sandbox \"e11facb1fbe698e9c32161ffd1a1e51544eaa253a7c98b9d37fe06d1fdd1e824\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ddf4c9b4ca736834e07a248991604dce776656c1be513a97ce91a8e2fef555dc\""
Feb 13 15:17:43.158372 containerd[1932]: time="2025-02-13T15:17:43.158254592Z" level=info msg="StartContainer for \"ddf4c9b4ca736834e07a248991604dce776656c1be513a97ce91a8e2fef555dc\""
Feb 13 15:17:43.161168 containerd[1932]: time="2025-02-13T15:17:43.160687412Z" level=info msg="CreateContainer within sandbox \"57dcf4ac03bfe65bd3b230f685d2326132b801abfb1db029a38df7b753411f0c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0ecebc4d54c7dd1115ab2cf698ad12dfd4f933dee94c24692d9cae864866ea1d\""
Feb 13 15:17:43.161916 containerd[1932]: time="2025-02-13T15:17:43.161830772Z" level=info msg="StartContainer for \"0ecebc4d54c7dd1115ab2cf698ad12dfd4f933dee94c24692d9cae864866ea1d\""
Feb 13 15:17:43.190847 containerd[1932]: time="2025-02-13T15:17:43.190072760Z" level=info msg="CreateContainer within sandbox \"f63b8dcb7b81233e7598690b8f2e6e0df51a3a7d18df49b308b32a7a6aa0be9e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c18c6c784ac3f3f282b6738049e631ef7710856b76ce2144368fb73f0e4ef19d\""
Feb 13 15:17:43.192001 containerd[1932]: time="2025-02-13T15:17:43.191724524Z" level=info msg="StartContainer for \"c18c6c784ac3f3f282b6738049e631ef7710856b76ce2144368fb73f0e4ef19d\""
Feb 13 15:17:43.226019 systemd[1]: Started cri-containerd-ddf4c9b4ca736834e07a248991604dce776656c1be513a97ce91a8e2fef555dc.scope - libcontainer container ddf4c9b4ca736834e07a248991604dce776656c1be513a97ce91a8e2fef555dc.
Feb 13 15:17:43.251649 systemd[1]: Started cri-containerd-0ecebc4d54c7dd1115ab2cf698ad12dfd4f933dee94c24692d9cae864866ea1d.scope - libcontainer container 0ecebc4d54c7dd1115ab2cf698ad12dfd4f933dee94c24692d9cae864866ea1d.
Feb 13 15:17:43.310608 systemd[1]: Started cri-containerd-c18c6c784ac3f3f282b6738049e631ef7710856b76ce2144368fb73f0e4ef19d.scope - libcontainer container c18c6c784ac3f3f282b6738049e631ef7710856b76ce2144368fb73f0e4ef19d.
Feb 13 15:17:43.350740 containerd[1932]: time="2025-02-13T15:17:43.350351085Z" level=info msg="StartContainer for \"ddf4c9b4ca736834e07a248991604dce776656c1be513a97ce91a8e2fef555dc\" returns successfully"
Feb 13 15:17:43.407932 containerd[1932]: time="2025-02-13T15:17:43.405445882Z" level=info msg="StartContainer for \"0ecebc4d54c7dd1115ab2cf698ad12dfd4f933dee94c24692d9cae864866ea1d\" returns successfully"
Feb 13 15:17:43.433884 kubelet[2867]: E0213 15:17:43.433807 2867 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.146:6443: connect: connection refused
Feb 13 15:17:43.516315 containerd[1932]: time="2025-02-13T15:17:43.515479534Z" level=info msg="StartContainer for \"c18c6c784ac3f3f282b6738049e631ef7710856b76ce2144368fb73f0e4ef19d\" returns successfully"
Feb 13 15:17:43.945446 update_engine[1922]: I20250213 15:17:43.944324 1922 update_attempter.cc:509] Updating boot flags...
Feb 13 15:17:44.061346 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3153)
Feb 13 15:17:44.612059 kubelet[2867]: I0213 15:17:44.611166 2867 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-146"
Feb 13 15:17:47.976154 kubelet[2867]: E0213 15:17:47.976085 2867 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-146\" not found" node="ip-172-31-21-146"
Feb 13 15:17:48.165494 kubelet[2867]: I0213 15:17:48.163650 2867 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-146"
Feb 13 15:17:48.463177 kubelet[2867]: I0213 15:17:48.461744 2867 apiserver.go:52] "Watching apiserver"
Feb 13 15:17:48.489324 kubelet[2867]: I0213 15:17:48.489188 2867 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:17:51.214195 systemd[1]: Reloading requested from client PID 3240 ('systemctl') (unit session-7.scope)...
Feb 13 15:17:51.214762 systemd[1]: Reloading...
Feb 13 15:17:51.570726 zram_generator::config[3283]: No configuration found.
Feb 13 15:17:52.029888 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:17:52.301598 systemd[1]: Reloading finished in 1085 ms.
Feb 13 15:17:52.310309 kubelet[2867]: I0213 15:17:52.307913 2867 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-146" podStartSLOduration=2.307804746 podStartE2EDuration="2.307804746s" podCreationTimestamp="2025-02-13 15:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:51.721508323 +0000 UTC m=+11.775152076" watchObservedRunningTime="2025-02-13 15:17:52.307804746 +0000 UTC m=+12.361448475"
Feb 13 15:17:52.433300 kubelet[2867]: I0213 15:17:52.433175 2867 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:17:52.434116 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:17:52.455339 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:17:52.456802 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:17:52.457142 systemd[1]: kubelet.service: Consumed 2.426s CPU time, 112.9M memory peak, 0B memory swap peak.
Feb 13 15:17:52.472545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:17:53.036811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:17:53.052159 (kubelet)[3344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:17:53.165221 kubelet[3344]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:17:53.165221 kubelet[3344]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:17:53.165221 kubelet[3344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:17:53.165221 kubelet[3344]: I0213 15:17:53.164415 3344 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:17:53.176686 kubelet[3344]: I0213 15:17:53.176607 3344 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:17:53.176686 kubelet[3344]: I0213 15:17:53.176673 3344 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:17:53.177060 kubelet[3344]: I0213 15:17:53.177022 3344 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:17:53.181235 kubelet[3344]: I0213 15:17:53.181180 3344 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:17:53.186643 kubelet[3344]: I0213 15:17:53.186429 3344 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:17:53.207346 kubelet[3344]: I0213 15:17:53.205057 3344 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:17:53.207346 kubelet[3344]: I0213 15:17:53.206712 3344 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:17:53.209221 kubelet[3344]: I0213 15:17:53.209162 3344 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:17:53.209658 kubelet[3344]: I0213 15:17:53.209628 3344 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:17:53.210369 kubelet[3344]: I0213 15:17:53.209776 3344 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:17:53.210369 kubelet[3344]: I0213 15:17:53.209872 3344 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:17:53.210369 kubelet[3344]: I0213 15:17:53.210160 3344 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:17:53.210369 kubelet[3344]: I0213 15:17:53.210192 3344 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:17:53.210369 kubelet[3344]: I0213 15:17:53.210246 3344 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:17:53.213320 kubelet[3344]: I0213 15:17:53.210885 3344 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:17:53.225444 kubelet[3344]: I0213 15:17:53.224134 3344 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:17:53.225444 kubelet[3344]: I0213 15:17:53.224674 3344 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:17:53.227329 kubelet[3344]: I0213 15:17:53.226094 3344 server.go:1256] "Started kubelet"
Feb 13 15:17:53.234305 kubelet[3344]: I0213 15:17:53.230799 3344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:17:53.233262 sudo[3358]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 13 15:17:53.236020 sudo[3358]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Feb 13 15:17:53.246815 kubelet[3344]: I0213 15:17:53.246766 3344 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:17:53.255330 kubelet[3344]: I0213 15:17:53.254878 3344 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:17:53.262314 kubelet[3344]: I0213 15:17:53.247101 3344 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:17:53.262314 kubelet[3344]: I0213 15:17:53.260915 3344 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:17:53.263225 kubelet[3344]: I0213 15:17:53.263185 3344 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:17:53.263634 kubelet[3344]: I0213 15:17:53.263593 3344 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:17:53.289395 kubelet[3344]: I0213 15:17:53.287913 3344 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:17:53.328609 kubelet[3344]: I0213 15:17:53.327130 3344 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:17:53.328609 kubelet[3344]: I0213 15:17:53.328495 3344 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:17:53.332403 kubelet[3344]: I0213 15:17:53.331999 3344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:17:53.344352 kubelet[3344]: I0213 15:17:53.342906 3344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:17:53.344352 kubelet[3344]: I0213 15:17:53.342952 3344 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:17:53.344352 kubelet[3344]: I0213 15:17:53.342988 3344 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:17:53.344352 kubelet[3344]: E0213 15:17:53.343099 3344 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:17:53.377658 kubelet[3344]: I0213 15:17:53.377532 3344 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:17:53.383739 kubelet[3344]: E0213 15:17:53.383092 3344 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache"
Feb 13 15:17:53.415960 kubelet[3344]: I0213 15:17:53.415876 3344 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-146"
Feb 13 15:17:53.417029 kubelet[3344]: E0213 15:17:53.416980 3344 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:17:53.446216 kubelet[3344]: E0213 15:17:53.443332 3344 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:17:53.448185 kubelet[3344]: I0213 15:17:53.448101 3344 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-21-146"
Feb 13 15:17:53.448454 kubelet[3344]: I0213 15:17:53.448320 3344 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-146"
Feb 13 15:17:53.627396 kubelet[3344]: I0213 15:17:53.626129 3344 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:17:53.627396 kubelet[3344]: I0213 15:17:53.626190 3344 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:17:53.627396 kubelet[3344]: I0213 15:17:53.626231 3344 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:17:53.627396 kubelet[3344]: I0213 15:17:53.627019 3344 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:17:53.627396 kubelet[3344]: I0213 15:17:53.627068 3344 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:17:53.627396 kubelet[3344]: I0213 15:17:53.627086 3344 policy_none.go:49] "None policy: Start"
Feb 13 15:17:53.631335 kubelet[3344]: I0213 15:17:53.631245 3344 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:17:53.631335 kubelet[3344]: I0213 15:17:53.631338 3344 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:17:53.631660 kubelet[3344]: I0213 15:17:53.631610 3344 state_mem.go:75] "Updated machine memory state"
Feb 13 15:17:53.644650 kubelet[3344]: E0213 15:17:53.643977 3344 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:17:53.653235 kubelet[3344]: I0213 15:17:53.651736 3344 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:17:53.664408 kubelet[3344]: I0213 15:17:53.661147 3344 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:17:54.045157 kubelet[3344]: I0213 15:17:54.045055 3344 topology_manager.go:215] "Topology Admit Handler" podUID="b4ede9213be7159f83ee812907a98f79" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-146"
Feb 13 15:17:54.045425 kubelet[3344]: I0213 15:17:54.045331 3344 topology_manager.go:215] "Topology Admit Handler" podUID="26682ad366e46a79bbd4e1734f3cda36" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-146"
Feb 13 15:17:54.045502 kubelet[3344]: I0213 15:17:54.045468 3344 topology_manager.go:215] "Topology Admit Handler" podUID="74f5e6e852265ed5760eac11945246cd" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-146"
Feb 13 15:17:54.062097 kubelet[3344]: E0213 15:17:54.061352 3344 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-21-146\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-146"
Feb 13 15:17:54.071891 kubelet[3344]: E0213 15:17:54.071756 3344 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-21-146\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-146"
Feb 13 15:17:54.072948 kubelet[3344]: I0213 15:17:54.072876 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26682ad366e46a79bbd4e1734f3cda36-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-146\" (UID: \"26682ad366e46a79bbd4e1734f3cda36\") " pod="kube-system/kube-apiserver-ip-172-31-21-146"
Feb 13 15:17:54.073179 kubelet[3344]: I0213 15:17:54.072982 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74f5e6e852265ed5760eac11945246cd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-146\" (UID: \"74f5e6e852265ed5760eac11945246cd\") " pod="kube-system/kube-controller-manager-ip-172-31-21-146"
Feb 13 15:17:54.073179 kubelet[3344]: I0213 15:17:54.073044 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4ede9213be7159f83ee812907a98f79-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-146\" (UID: \"b4ede9213be7159f83ee812907a98f79\") " pod="kube-system/kube-scheduler-ip-172-31-21-146"
Feb 13 15:17:54.073179 kubelet[3344]: I0213 15:17:54.073099 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26682ad366e46a79bbd4e1734f3cda36-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-146\" (UID: \"26682ad366e46a79bbd4e1734f3cda36\") " pod="kube-system/kube-apiserver-ip-172-31-21-146"
Feb 13 15:17:54.073179 kubelet[3344]: I0213 15:17:54.073170 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74f5e6e852265ed5760eac11945246cd-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-146\" (UID: \"74f5e6e852265ed5760eac11945246cd\") " pod="kube-system/kube-controller-manager-ip-172-31-21-146"
Feb 13 15:17:54.074817 kubelet[3344]: I0213 15:17:54.073516 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74f5e6e852265ed5760eac11945246cd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-146\" (UID: \"74f5e6e852265ed5760eac11945246cd\") " pod="kube-system/kube-controller-manager-ip-172-31-21-146"
Feb 13 15:17:54.074817 kubelet[3344]: I0213 15:17:54.073790 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74f5e6e852265ed5760eac11945246cd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-146\" (UID: \"74f5e6e852265ed5760eac11945246cd\") " pod="kube-system/kube-controller-manager-ip-172-31-21-146"
Feb 13 15:17:54.074817 kubelet[3344]: I0213 15:17:54.074568 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74f5e6e852265ed5760eac11945246cd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-146\" (UID: \"74f5e6e852265ed5760eac11945246cd\") " pod="kube-system/kube-controller-manager-ip-172-31-21-146"
Feb 13 15:17:54.074817 kubelet[3344]: I0213 15:17:54.074687 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26682ad366e46a79bbd4e1734f3cda36-ca-certs\") pod \"kube-apiserver-ip-172-31-21-146\" (UID: \"26682ad366e46a79bbd4e1734f3cda36\") " pod="kube-system/kube-apiserver-ip-172-31-21-146"
Feb 13 15:17:54.236645 kubelet[3344]: I0213 15:17:54.234449 3344 apiserver.go:52] "Watching apiserver"
Feb 13 15:17:54.264406 kubelet[3344]: I0213 15:17:54.264334 3344 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:17:54.315535 sudo[3358]: pam_unix(sudo:session): session closed for user root
Feb 13 15:17:54.445860 kubelet[3344]: I0213 15:17:54.445772 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-146" podStartSLOduration=2.445689908 podStartE2EDuration="2.445689908s" podCreationTimestamp="2025-02-13 15:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:54.417034304 +0000 UTC m=+1.353485215" watchObservedRunningTime="2025-02-13 15:17:54.445689908 +0000 UTC m=+1.382140807"
Feb 13 15:17:54.472543 kubelet[3344]: I0213 15:17:54.472472 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-146" podStartSLOduration=0.472411821 podStartE2EDuration="472.411821ms" podCreationTimestamp="2025-02-13 15:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:54.448548632 +0000 UTC m=+1.384999543" watchObservedRunningTime="2025-02-13 15:17:54.472411821 +0000 UTC m=+1.408862708"
Feb 13 15:17:56.992698 sudo[2264]: pam_unix(sudo:session): session closed for user root
Feb 13 15:17:57.016004 sshd[2263]: Connection closed by 139.178.68.195 port 53670
Feb 13 15:17:57.018656 sshd-session[2261]: pam_unix(sshd:session): session closed for user core
Feb 13 15:17:57.025191 systemd[1]: sshd@6-172.31.21.146:22-139.178.68.195:53670.service: Deactivated successfully.
Feb 13 15:17:57.031583 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:17:57.032035 systemd[1]: session-7.scope: Consumed 11.134s CPU time, 185.0M memory peak, 0B memory swap peak.
Feb 13 15:17:57.034880 systemd-logind[1921]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:17:57.038499 systemd-logind[1921]: Removed session 7.
Feb 13 15:18:04.704865 kubelet[3344]: I0213 15:18:04.704773 3344 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 15:18:04.707843 containerd[1932]: time="2025-02-13T15:18:04.706537027Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:18:04.708929 kubelet[3344]: I0213 15:18:04.707997 3344 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 15:18:05.569214 kubelet[3344]: I0213 15:18:05.569063 3344 topology_manager.go:215] "Topology Admit Handler" podUID="afa20cab-0191-4dc9-911a-59c512b43493" podNamespace="kube-system" podName="kube-proxy-6gxdd"
Feb 13 15:18:05.591791 systemd[1]: Created slice kubepods-besteffort-podafa20cab_0191_4dc9_911a_59c512b43493.slice - libcontainer container kubepods-besteffort-podafa20cab_0191_4dc9_911a_59c512b43493.slice.
Feb 13 15:18:05.609349 kubelet[3344]: I0213 15:18:05.609211 3344 topology_manager.go:215] "Topology Admit Handler" podUID="2e846df7-750a-44ae-8992-21888b096c05" podNamespace="kube-system" podName="cilium-8rctv"
Feb 13 15:18:05.638410 systemd[1]: Created slice kubepods-burstable-pod2e846df7_750a_44ae_8992_21888b096c05.slice - libcontainer container kubepods-burstable-pod2e846df7_750a_44ae_8992_21888b096c05.slice.
Feb 13 15:18:05.661354 kubelet[3344]: I0213 15:18:05.660512 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n97mq\" (UniqueName: \"kubernetes.io/projected/afa20cab-0191-4dc9-911a-59c512b43493-kube-api-access-n97mq\") pod \"kube-proxy-6gxdd\" (UID: \"afa20cab-0191-4dc9-911a-59c512b43493\") " pod="kube-system/kube-proxy-6gxdd"
Feb 13 15:18:05.662799 kubelet[3344]: I0213 15:18:05.661668 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e846df7-750a-44ae-8992-21888b096c05-clustermesh-secrets\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv"
Feb 13 15:18:05.662799 kubelet[3344]: I0213 15:18:05.661740 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName:
\"kubernetes.io/configmap/2e846df7-750a-44ae-8992-21888b096c05-cilium-config-path\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv" Feb 13 15:18:05.662799 kubelet[3344]: I0213 15:18:05.661792 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-host-proc-sys-kernel\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv" Feb 13 15:18:05.662799 kubelet[3344]: I0213 15:18:05.661842 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-etc-cni-netd\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv" Feb 13 15:18:05.662799 kubelet[3344]: I0213 15:18:05.661887 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-lib-modules\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv" Feb 13 15:18:05.662799 kubelet[3344]: I0213 15:18:05.661938 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-cni-path\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv" Feb 13 15:18:05.663486 kubelet[3344]: I0213 15:18:05.661992 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afa20cab-0191-4dc9-911a-59c512b43493-xtables-lock\") pod \"kube-proxy-6gxdd\" (UID: 
\"afa20cab-0191-4dc9-911a-59c512b43493\") " pod="kube-system/kube-proxy-6gxdd" Feb 13 15:18:05.663486 kubelet[3344]: I0213 15:18:05.662042 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-xtables-lock\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv" Feb 13 15:18:05.663486 kubelet[3344]: I0213 15:18:05.662092 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-bpf-maps\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv" Feb 13 15:18:05.663486 kubelet[3344]: I0213 15:18:05.662138 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-cilium-cgroup\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv" Feb 13 15:18:05.663486 kubelet[3344]: I0213 15:18:05.662181 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e846df7-750a-44ae-8992-21888b096c05-hubble-tls\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv" Feb 13 15:18:05.663486 kubelet[3344]: I0213 15:18:05.662231 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-cilium-run\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv" Feb 13 15:18:05.664661 kubelet[3344]: I0213 15:18:05.662309 3344 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fst\" (UniqueName: \"kubernetes.io/projected/2e846df7-750a-44ae-8992-21888b096c05-kube-api-access-z5fst\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv" Feb 13 15:18:05.664661 kubelet[3344]: I0213 15:18:05.662373 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-hostproc\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv" Feb 13 15:18:05.664661 kubelet[3344]: I0213 15:18:05.662425 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-host-proc-sys-net\") pod \"cilium-8rctv\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") " pod="kube-system/cilium-8rctv" Feb 13 15:18:05.664661 kubelet[3344]: I0213 15:18:05.662469 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afa20cab-0191-4dc9-911a-59c512b43493-kube-proxy\") pod \"kube-proxy-6gxdd\" (UID: \"afa20cab-0191-4dc9-911a-59c512b43493\") " pod="kube-system/kube-proxy-6gxdd" Feb 13 15:18:05.664661 kubelet[3344]: I0213 15:18:05.662512 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afa20cab-0191-4dc9-911a-59c512b43493-lib-modules\") pod \"kube-proxy-6gxdd\" (UID: \"afa20cab-0191-4dc9-911a-59c512b43493\") " pod="kube-system/kube-proxy-6gxdd" Feb 13 15:18:05.957482 containerd[1932]: time="2025-02-13T15:18:05.957312742Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-8rctv,Uid:2e846df7-750a-44ae-8992-21888b096c05,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:05.992312 kubelet[3344]: I0213 15:18:05.991184 3344 topology_manager.go:215] "Topology Admit Handler" podUID="b9a7ebda-8d82-4a70-a546-c8d898adb14f" podNamespace="kube-system" podName="cilium-operator-5cc964979-ktr7w" Feb 13 15:18:06.036961 systemd[1]: Created slice kubepods-besteffort-podb9a7ebda_8d82_4a70_a546_c8d898adb14f.slice - libcontainer container kubepods-besteffort-podb9a7ebda_8d82_4a70_a546_c8d898adb14f.slice. Feb 13 15:18:06.072077 containerd[1932]: time="2025-02-13T15:18:06.070882974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:06.072077 containerd[1932]: time="2025-02-13T15:18:06.071108766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:06.072077 containerd[1932]: time="2025-02-13T15:18:06.071154918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:06.072077 containerd[1932]: time="2025-02-13T15:18:06.071475450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:06.073499 kubelet[3344]: I0213 15:18:06.071251 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9a7ebda-8d82-4a70-a546-c8d898adb14f-cilium-config-path\") pod \"cilium-operator-5cc964979-ktr7w\" (UID: \"b9a7ebda-8d82-4a70-a546-c8d898adb14f\") " pod="kube-system/cilium-operator-5cc964979-ktr7w" Feb 13 15:18:06.073810 kubelet[3344]: I0213 15:18:06.073718 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swnsl\" (UniqueName: \"kubernetes.io/projected/b9a7ebda-8d82-4a70-a546-c8d898adb14f-kube-api-access-swnsl\") pod \"cilium-operator-5cc964979-ktr7w\" (UID: \"b9a7ebda-8d82-4a70-a546-c8d898adb14f\") " pod="kube-system/cilium-operator-5cc964979-ktr7w" Feb 13 15:18:06.112641 systemd[1]: Started cri-containerd-a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d.scope - libcontainer container a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d. 
Feb 13 15:18:06.180855 containerd[1932]: time="2025-02-13T15:18:06.180774403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8rctv,Uid:2e846df7-750a-44ae-8992-21888b096c05,Namespace:kube-system,Attempt:0,} returns sandbox id \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\"" Feb 13 15:18:06.197826 containerd[1932]: time="2025-02-13T15:18:06.196490839Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:18:06.210723 containerd[1932]: time="2025-02-13T15:18:06.209697403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6gxdd,Uid:afa20cab-0191-4dc9-911a-59c512b43493,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:06.257394 containerd[1932]: time="2025-02-13T15:18:06.256816531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:06.257394 containerd[1932]: time="2025-02-13T15:18:06.256994107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:06.257394 containerd[1932]: time="2025-02-13T15:18:06.257035903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:06.257394 containerd[1932]: time="2025-02-13T15:18:06.257244655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:06.294619 systemd[1]: Started cri-containerd-2418b31607a7aefcb01e9b34b117305f5ac9e49c9fabbf678c57a3fbebb76ae9.scope - libcontainer container 2418b31607a7aefcb01e9b34b117305f5ac9e49c9fabbf678c57a3fbebb76ae9. 
Feb 13 15:18:06.344140 containerd[1932]: time="2025-02-13T15:18:06.344068267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6gxdd,Uid:afa20cab-0191-4dc9-911a-59c512b43493,Namespace:kube-system,Attempt:0,} returns sandbox id \"2418b31607a7aefcb01e9b34b117305f5ac9e49c9fabbf678c57a3fbebb76ae9\"" Feb 13 15:18:06.353962 containerd[1932]: time="2025-02-13T15:18:06.353877776Z" level=info msg="CreateContainer within sandbox \"2418b31607a7aefcb01e9b34b117305f5ac9e49c9fabbf678c57a3fbebb76ae9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:18:06.355195 containerd[1932]: time="2025-02-13T15:18:06.354701312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-ktr7w,Uid:b9a7ebda-8d82-4a70-a546-c8d898adb14f,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:06.406032 containerd[1932]: time="2025-02-13T15:18:06.404821700Z" level=info msg="CreateContainer within sandbox \"2418b31607a7aefcb01e9b34b117305f5ac9e49c9fabbf678c57a3fbebb76ae9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"86506bd5877705e5a15e46c5cedf292ddc9a4bc7747c09118eeeb08615898ddd\"" Feb 13 15:18:06.407511 containerd[1932]: time="2025-02-13T15:18:06.407207264Z" level=info msg="StartContainer for \"86506bd5877705e5a15e46c5cedf292ddc9a4bc7747c09118eeeb08615898ddd\"" Feb 13 15:18:06.427251 containerd[1932]: time="2025-02-13T15:18:06.426910688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:06.427251 containerd[1932]: time="2025-02-13T15:18:06.427055696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:06.427251 containerd[1932]: time="2025-02-13T15:18:06.427082072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:06.430312 containerd[1932]: time="2025-02-13T15:18:06.427222184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:06.475451 systemd[1]: Started cri-containerd-12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f.scope - libcontainer container 12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f. Feb 13 15:18:06.508716 systemd[1]: Started cri-containerd-86506bd5877705e5a15e46c5cedf292ddc9a4bc7747c09118eeeb08615898ddd.scope - libcontainer container 86506bd5877705e5a15e46c5cedf292ddc9a4bc7747c09118eeeb08615898ddd. Feb 13 15:18:06.626098 containerd[1932]: time="2025-02-13T15:18:06.625429017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-ktr7w,Uid:b9a7ebda-8d82-4a70-a546-c8d898adb14f,Namespace:kube-system,Attempt:0,} returns sandbox id \"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\"" Feb 13 15:18:06.660002 containerd[1932]: time="2025-02-13T15:18:06.658981305Z" level=info msg="StartContainer for \"86506bd5877705e5a15e46c5cedf292ddc9a4bc7747c09118eeeb08615898ddd\" returns successfully" Feb 13 15:18:13.230690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873404733.mount: Deactivated successfully. 
Feb 13 15:18:13.367812 kubelet[3344]: I0213 15:18:13.366754 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6gxdd" podStartSLOduration=8.36669629 podStartE2EDuration="8.36669629s" podCreationTimestamp="2025-02-13 15:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:07.626482306 +0000 UTC m=+14.562933193" watchObservedRunningTime="2025-02-13 15:18:13.36669629 +0000 UTC m=+20.303147165" Feb 13 15:18:15.998329 containerd[1932]: time="2025-02-13T15:18:15.997825783Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:15.999987 containerd[1932]: time="2025-02-13T15:18:15.999787591Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:18:16.002156 containerd[1932]: time="2025-02-13T15:18:16.002005023Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:16.007510 containerd[1932]: time="2025-02-13T15:18:16.007443159Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.8108856s" Feb 13 15:18:16.007953 containerd[1932]: time="2025-02-13T15:18:16.007509435Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:18:16.010427 containerd[1932]: time="2025-02-13T15:18:16.008910987Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:18:16.013857 containerd[1932]: time="2025-02-13T15:18:16.013380411Z" level=info msg="CreateContainer within sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:18:16.043959 containerd[1932]: time="2025-02-13T15:18:16.043865512Z" level=info msg="CreateContainer within sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2\"" Feb 13 15:18:16.045784 containerd[1932]: time="2025-02-13T15:18:16.045247876Z" level=info msg="StartContainer for \"e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2\"" Feb 13 15:18:16.111599 systemd[1]: Started cri-containerd-e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2.scope - libcontainer container e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2. Feb 13 15:18:16.176924 containerd[1932]: time="2025-02-13T15:18:16.176837884Z" level=info msg="StartContainer for \"e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2\" returns successfully" Feb 13 15:18:16.192517 systemd[1]: cri-containerd-e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2.scope: Deactivated successfully. Feb 13 15:18:17.031909 systemd[1]: run-containerd-runc-k8s.io-e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2-runc.s10VNj.mount: Deactivated successfully. 
Feb 13 15:18:17.032441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2-rootfs.mount: Deactivated successfully. Feb 13 15:18:18.179745 containerd[1932]: time="2025-02-13T15:18:18.179624670Z" level=info msg="shim disconnected" id=e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2 namespace=k8s.io Feb 13 15:18:18.179745 containerd[1932]: time="2025-02-13T15:18:18.179738442Z" level=warning msg="cleaning up after shim disconnected" id=e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2 namespace=k8s.io Feb 13 15:18:18.179745 containerd[1932]: time="2025-02-13T15:18:18.179763150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:18.551035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071340944.mount: Deactivated successfully. Feb 13 15:18:18.680550 containerd[1932]: time="2025-02-13T15:18:18.680424405Z" level=info msg="CreateContainer within sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:18:18.825591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994034621.mount: Deactivated successfully. 
Feb 13 15:18:18.842878 containerd[1932]: time="2025-02-13T15:18:18.842646394Z" level=info msg="CreateContainer within sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7\"" Feb 13 15:18:18.845739 containerd[1932]: time="2025-02-13T15:18:18.845659042Z" level=info msg="StartContainer for \"079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7\"" Feb 13 15:18:18.916768 systemd[1]: Started cri-containerd-079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7.scope - libcontainer container 079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7. Feb 13 15:18:19.007762 containerd[1932]: time="2025-02-13T15:18:19.007694742Z" level=info msg="StartContainer for \"079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7\" returns successfully" Feb 13 15:18:19.048986 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:18:19.049834 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:18:19.049958 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:18:19.063747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:18:19.072160 systemd[1]: cri-containerd-079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7.scope: Deactivated successfully. Feb 13 15:18:19.139096 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 15:18:19.320407 containerd[1932]: time="2025-02-13T15:18:19.319877540Z" level=info msg="shim disconnected" id=079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7 namespace=k8s.io Feb 13 15:18:19.320407 containerd[1932]: time="2025-02-13T15:18:19.320013512Z" level=warning msg="cleaning up after shim disconnected" id=079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7 namespace=k8s.io Feb 13 15:18:19.320407 containerd[1932]: time="2025-02-13T15:18:19.320043464Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:19.680599 containerd[1932]: time="2025-02-13T15:18:19.680417734Z" level=info msg="CreateContainer within sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:18:19.725749 containerd[1932]: time="2025-02-13T15:18:19.725595262Z" level=info msg="CreateContainer within sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f\"" Feb 13 15:18:19.729060 containerd[1932]: time="2025-02-13T15:18:19.728980078Z" level=info msg="StartContainer for \"ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f\"" Feb 13 15:18:19.864628 systemd[1]: Started cri-containerd-ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f.scope - libcontainer container ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f. 
Feb 13 15:18:19.915632 containerd[1932]: time="2025-02-13T15:18:19.915498203Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:19.919596 containerd[1932]: time="2025-02-13T15:18:19.919478135Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:18:19.921618 containerd[1932]: time="2025-02-13T15:18:19.921528791Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:19.930134 containerd[1932]: time="2025-02-13T15:18:19.929851007Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.92086268s" Feb 13 15:18:19.930134 containerd[1932]: time="2025-02-13T15:18:19.929938511Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:18:19.937957 containerd[1932]: time="2025-02-13T15:18:19.937591139Z" level=info msg="CreateContainer within sandbox \"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:18:19.978674 containerd[1932]: time="2025-02-13T15:18:19.978586679Z" level=info msg="CreateContainer within sandbox 
\"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\"" Feb 13 15:18:19.987260 containerd[1932]: time="2025-02-13T15:18:19.986390315Z" level=info msg="StartContainer for \"ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f\" returns successfully" Feb 13 15:18:19.988170 containerd[1932]: time="2025-02-13T15:18:19.988069463Z" level=info msg="StartContainer for \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\"" Feb 13 15:18:20.001572 systemd[1]: cri-containerd-ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f.scope: Deactivated successfully. Feb 13 15:18:20.081796 systemd[1]: Started cri-containerd-a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4.scope - libcontainer container a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4. Feb 13 15:18:20.188862 containerd[1932]: time="2025-02-13T15:18:20.188559620Z" level=info msg="StartContainer for \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\" returns successfully" Feb 13 15:18:20.458017 containerd[1932]: time="2025-02-13T15:18:20.457541374Z" level=info msg="shim disconnected" id=ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f namespace=k8s.io Feb 13 15:18:20.458017 containerd[1932]: time="2025-02-13T15:18:20.457630270Z" level=warning msg="cleaning up after shim disconnected" id=ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f namespace=k8s.io Feb 13 15:18:20.458017 containerd[1932]: time="2025-02-13T15:18:20.457650826Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:20.534445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f-rootfs.mount: Deactivated successfully. 
Feb 13 15:18:20.705639 containerd[1932]: time="2025-02-13T15:18:20.704787323Z" level=info msg="CreateContainer within sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:18:20.746513 containerd[1932]: time="2025-02-13T15:18:20.746364815Z" level=info msg="CreateContainer within sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6\"" Feb 13 15:18:20.749212 containerd[1932]: time="2025-02-13T15:18:20.748495139Z" level=info msg="StartContainer for \"c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6\"" Feb 13 15:18:20.881346 kubelet[3344]: I0213 15:18:20.880585 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-ktr7w" podStartSLOduration=2.57721039 podStartE2EDuration="15.877942932s" podCreationTimestamp="2025-02-13 15:18:05 +0000 UTC" firstStartedPulling="2025-02-13 15:18:06.629604177 +0000 UTC m=+13.566055064" lastFinishedPulling="2025-02-13 15:18:19.930336707 +0000 UTC m=+26.866787606" observedRunningTime="2025-02-13 15:18:20.874837056 +0000 UTC m=+27.811287943" watchObservedRunningTime="2025-02-13 15:18:20.877942932 +0000 UTC m=+27.814393927" Feb 13 15:18:20.923785 systemd[1]: Started cri-containerd-c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6.scope - libcontainer container c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6. Feb 13 15:18:21.062686 containerd[1932]: time="2025-02-13T15:18:21.061699377Z" level=info msg="StartContainer for \"c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6\" returns successfully" Feb 13 15:18:21.064820 systemd[1]: cri-containerd-c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6.scope: Deactivated successfully. 
Feb 13 15:18:21.161791 containerd[1932]: time="2025-02-13T15:18:21.161655693Z" level=info msg="shim disconnected" id=c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6 namespace=k8s.io Feb 13 15:18:21.161791 containerd[1932]: time="2025-02-13T15:18:21.161769849Z" level=warning msg="cleaning up after shim disconnected" id=c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6 namespace=k8s.io Feb 13 15:18:21.161791 containerd[1932]: time="2025-02-13T15:18:21.161794941Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:21.531785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6-rootfs.mount: Deactivated successfully. Feb 13 15:18:21.727542 containerd[1932]: time="2025-02-13T15:18:21.725926260Z" level=info msg="CreateContainer within sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:18:21.765606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount898621766.mount: Deactivated successfully. Feb 13 15:18:21.769011 containerd[1932]: time="2025-02-13T15:18:21.766980852Z" level=info msg="CreateContainer within sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\"" Feb 13 15:18:21.771812 containerd[1932]: time="2025-02-13T15:18:21.771477024Z" level=info msg="StartContainer for \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\"" Feb 13 15:18:21.899605 systemd[1]: Started cri-containerd-0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f.scope - libcontainer container 0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f. 
Feb 13 15:18:22.004972 containerd[1932]: time="2025-02-13T15:18:22.004892445Z" level=info msg="StartContainer for \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\" returns successfully" Feb 13 15:18:22.317850 kubelet[3344]: I0213 15:18:22.317782 3344 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:18:22.422169 kubelet[3344]: I0213 15:18:22.422063 3344 topology_manager.go:215] "Topology Admit Handler" podUID="077078f1-f3b3-4e78-9d7c-3668f5e12456" podNamespace="kube-system" podName="coredns-76f75df574-rqqx2" Feb 13 15:18:22.428458 kubelet[3344]: I0213 15:18:22.428371 3344 topology_manager.go:215] "Topology Admit Handler" podUID="c3451082-69cc-4c2d-aba2-753190de3802" podNamespace="kube-system" podName="coredns-76f75df574-vkhh5" Feb 13 15:18:22.442696 kubelet[3344]: W0213 15:18:22.442633 3344 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-21-146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-146' and this object Feb 13 15:18:22.442696 kubelet[3344]: E0213 15:18:22.442701 3344 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-21-146" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-146' and this object Feb 13 15:18:22.444900 systemd[1]: Created slice kubepods-burstable-pod077078f1_f3b3_4e78_9d7c_3668f5e12456.slice - libcontainer container kubepods-burstable-pod077078f1_f3b3_4e78_9d7c_3668f5e12456.slice. Feb 13 15:18:22.468306 systemd[1]: Created slice kubepods-burstable-podc3451082_69cc_4c2d_aba2_753190de3802.slice - libcontainer container kubepods-burstable-podc3451082_69cc_4c2d_aba2_753190de3802.slice. 
Feb 13 15:18:22.505588 kubelet[3344]: I0213 15:18:22.505517 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/077078f1-f3b3-4e78-9d7c-3668f5e12456-config-volume\") pod \"coredns-76f75df574-rqqx2\" (UID: \"077078f1-f3b3-4e78-9d7c-3668f5e12456\") " pod="kube-system/coredns-76f75df574-rqqx2" Feb 13 15:18:22.505755 kubelet[3344]: I0213 15:18:22.505597 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3451082-69cc-4c2d-aba2-753190de3802-config-volume\") pod \"coredns-76f75df574-vkhh5\" (UID: \"c3451082-69cc-4c2d-aba2-753190de3802\") " pod="kube-system/coredns-76f75df574-vkhh5" Feb 13 15:18:22.505755 kubelet[3344]: I0213 15:18:22.505654 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgxt5\" (UniqueName: \"kubernetes.io/projected/077078f1-f3b3-4e78-9d7c-3668f5e12456-kube-api-access-bgxt5\") pod \"coredns-76f75df574-rqqx2\" (UID: \"077078f1-f3b3-4e78-9d7c-3668f5e12456\") " pod="kube-system/coredns-76f75df574-rqqx2" Feb 13 15:18:22.505755 kubelet[3344]: I0213 15:18:22.505708 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2qh\" (UniqueName: \"kubernetes.io/projected/c3451082-69cc-4c2d-aba2-753190de3802-kube-api-access-5m2qh\") pod \"coredns-76f75df574-vkhh5\" (UID: \"c3451082-69cc-4c2d-aba2-753190de3802\") " pod="kube-system/coredns-76f75df574-vkhh5" Feb 13 15:18:23.609313 kubelet[3344]: E0213 15:18:23.608357 3344 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:18:23.609313 kubelet[3344]: E0213 15:18:23.608490 3344 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/077078f1-f3b3-4e78-9d7c-3668f5e12456-config-volume podName:077078f1-f3b3-4e78-9d7c-3668f5e12456 nodeName:}" failed. No retries permitted until 2025-02-13 15:18:24.108458165 +0000 UTC m=+31.044909052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/077078f1-f3b3-4e78-9d7c-3668f5e12456-config-volume") pod "coredns-76f75df574-rqqx2" (UID: "077078f1-f3b3-4e78-9d7c-3668f5e12456") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:18:23.611234 kubelet[3344]: E0213 15:18:23.610944 3344 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:18:23.611234 kubelet[3344]: E0213 15:18:23.611069 3344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3451082-69cc-4c2d-aba2-753190de3802-config-volume podName:c3451082-69cc-4c2d-aba2-753190de3802 nodeName:}" failed. No retries permitted until 2025-02-13 15:18:24.111039461 +0000 UTC m=+31.047490360 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c3451082-69cc-4c2d-aba2-753190de3802-config-volume") pod "coredns-76f75df574-vkhh5" (UID: "c3451082-69cc-4c2d-aba2-753190de3802") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:18:24.260128 containerd[1932]: time="2025-02-13T15:18:24.259986996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rqqx2,Uid:077078f1-f3b3-4e78-9d7c-3668f5e12456,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:24.286319 containerd[1932]: time="2025-02-13T15:18:24.284725177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vkhh5,Uid:c3451082-69cc-4c2d-aba2-753190de3802,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:25.215894 systemd-networkd[1775]: cilium_host: Link UP Feb 13 15:18:25.216246 systemd-networkd[1775]: cilium_net: Link UP Feb 13 15:18:25.216253 systemd-networkd[1775]: cilium_net: Gained carrier Feb 13 15:18:25.216855 (udev-worker)[4109]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:18:25.218983 systemd-networkd[1775]: cilium_host: Gained carrier Feb 13 15:18:25.219659 systemd-networkd[1775]: cilium_host: Gained IPv6LL Feb 13 15:18:25.223865 (udev-worker)[4173]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:18:25.528971 (udev-worker)[4184]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:18:25.546334 systemd-networkd[1775]: cilium_vxlan: Link UP Feb 13 15:18:25.546356 systemd-networkd[1775]: cilium_vxlan: Gained carrier Feb 13 15:18:26.022801 systemd-networkd[1775]: cilium_net: Gained IPv6LL Feb 13 15:18:26.312742 kernel: NET: Registered PF_ALG protocol family Feb 13 15:18:27.432475 systemd-networkd[1775]: cilium_vxlan: Gained IPv6LL Feb 13 15:18:28.195570 (udev-worker)[4179]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:18:28.200499 systemd-networkd[1775]: lxc_health: Link UP Feb 13 15:18:28.212892 systemd-networkd[1775]: lxc_health: Gained carrier Feb 13 15:18:28.500942 systemd[1]: Started sshd@7-172.31.21.146:22-139.178.68.195:34984.service - OpenSSH per-connection server daemon (139.178.68.195:34984). Feb 13 15:18:28.717586 sshd[4512]: Accepted publickey for core from 139.178.68.195 port 34984 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:28.719993 sshd-session[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:28.734473 systemd-logind[1921]: New session 8 of user core. Feb 13 15:18:28.741234 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:18:28.843024 systemd-networkd[1775]: lxc364a6d9a329d: Link UP Feb 13 15:18:28.867340 kernel: eth0: renamed from tmpd7f56 Feb 13 15:18:28.876147 systemd-networkd[1775]: lxc364a6d9a329d: Gained carrier Feb 13 15:18:28.989889 systemd-networkd[1775]: lxc0e8c825f31b7: Link UP Feb 13 15:18:29.006346 kernel: eth0: renamed from tmpa85fd Feb 13 15:18:29.023890 systemd-networkd[1775]: lxc0e8c825f31b7: Gained carrier Feb 13 15:18:29.291370 sshd[4516]: Connection closed by 139.178.68.195 port 34984 Feb 13 15:18:29.293819 sshd-session[4512]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:29.306988 systemd[1]: sshd@7-172.31.21.146:22-139.178.68.195:34984.service: Deactivated successfully. Feb 13 15:18:29.317955 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:18:29.329820 systemd-logind[1921]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:18:29.340027 systemd-logind[1921]: Removed session 8. 
Feb 13 15:18:29.606724 systemd-networkd[1775]: lxc_health: Gained IPv6LL Feb 13 15:18:29.927017 systemd-networkd[1775]: lxc364a6d9a329d: Gained IPv6LL Feb 13 15:18:30.004562 kubelet[3344]: I0213 15:18:30.004015 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8rctv" podStartSLOduration=15.185801757 podStartE2EDuration="25.003910517s" podCreationTimestamp="2025-02-13 15:18:05 +0000 UTC" firstStartedPulling="2025-02-13 15:18:06.190010095 +0000 UTC m=+13.126460970" lastFinishedPulling="2025-02-13 15:18:16.008118651 +0000 UTC m=+22.944569730" observedRunningTime="2025-02-13 15:18:22.800366161 +0000 UTC m=+29.736817072" watchObservedRunningTime="2025-02-13 15:18:30.003910517 +0000 UTC m=+36.940361416" Feb 13 15:18:30.567938 systemd-networkd[1775]: lxc0e8c825f31b7: Gained IPv6LL Feb 13 15:18:32.952714 ntpd[1915]: Listen normally on 7 cilium_host 192.168.0.13:123 Feb 13 15:18:32.953942 ntpd[1915]: 13 Feb 15:18:32 ntpd[1915]: Listen normally on 7 cilium_host 192.168.0.13:123 Feb 13 15:18:32.953942 ntpd[1915]: 13 Feb 15:18:32 ntpd[1915]: Listen normally on 8 cilium_net [fe80::585d:6dff:fe72:2b6d%4]:123 Feb 13 15:18:32.953942 ntpd[1915]: 13 Feb 15:18:32 ntpd[1915]: Listen normally on 9 cilium_host [fe80::b86d:bdff:fec2:938%5]:123 Feb 13 15:18:32.953942 ntpd[1915]: 13 Feb 15:18:32 ntpd[1915]: Listen normally on 10 cilium_vxlan [fe80::2400:7ff:fed2:2821%6]:123 Feb 13 15:18:32.953942 ntpd[1915]: 13 Feb 15:18:32 ntpd[1915]: Listen normally on 11 lxc_health [fe80::fcb8:fbff:fe5f:8d39%8]:123 Feb 13 15:18:32.952928 ntpd[1915]: Listen normally on 8 cilium_net [fe80::585d:6dff:fe72:2b6d%4]:123 Feb 13 15:18:32.953059 ntpd[1915]: Listen normally on 9 cilium_host [fe80::b86d:bdff:fec2:938%5]:123 Feb 13 15:18:32.953140 ntpd[1915]: Listen normally on 10 cilium_vxlan [fe80::2400:7ff:fed2:2821%6]:123 Feb 13 15:18:32.953215 ntpd[1915]: Listen normally on 11 lxc_health [fe80::fcb8:fbff:fe5f:8d39%8]:123 Feb 13 15:18:32.954527 ntpd[1915]: Listen 
normally on 12 lxc364a6d9a329d [fe80::6454:40ff:fe54:36b8%10]:123 Feb 13 15:18:32.955008 ntpd[1915]: 13 Feb 15:18:32 ntpd[1915]: Listen normally on 12 lxc364a6d9a329d [fe80::6454:40ff:fe54:36b8%10]:123 Feb 13 15:18:32.955008 ntpd[1915]: 13 Feb 15:18:32 ntpd[1915]: Listen normally on 13 lxc0e8c825f31b7 [fe80::4cbd:1fff:fe88:4541%12]:123 Feb 13 15:18:32.954700 ntpd[1915]: Listen normally on 13 lxc0e8c825f31b7 [fe80::4cbd:1fff:fe88:4541%12]:123 Feb 13 15:18:34.332897 systemd[1]: Started sshd@8-172.31.21.146:22-139.178.68.195:34986.service - OpenSSH per-connection server daemon (139.178.68.195:34986). Feb 13 15:18:34.548364 sshd[4553]: Accepted publickey for core from 139.178.68.195 port 34986 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:34.551245 sshd-session[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:34.563787 systemd-logind[1921]: New session 9 of user core. Feb 13 15:18:34.574881 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:18:34.883229 sshd[4555]: Connection closed by 139.178.68.195 port 34986 Feb 13 15:18:34.884586 sshd-session[4553]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:34.895213 systemd-logind[1921]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:18:34.895937 systemd[1]: sshd@8-172.31.21.146:22-139.178.68.195:34986.service: Deactivated successfully. Feb 13 15:18:34.907388 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:18:34.919083 systemd-logind[1921]: Removed session 9. Feb 13 15:18:39.156115 containerd[1932]: time="2025-02-13T15:18:39.155550470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:39.156115 containerd[1932]: time="2025-02-13T15:18:39.155663666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:39.156115 containerd[1932]: time="2025-02-13T15:18:39.155717618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:39.156115 containerd[1932]: time="2025-02-13T15:18:39.155879066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:39.197771 containerd[1932]: time="2025-02-13T15:18:39.197575059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:39.198581 containerd[1932]: time="2025-02-13T15:18:39.198308019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:39.198581 containerd[1932]: time="2025-02-13T15:18:39.198432543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:39.199844 containerd[1932]: time="2025-02-13T15:18:39.199480911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:39.259646 systemd[1]: Started cri-containerd-d7f565e6963089c43da560b35593f1828775204621c00e50814f1f6b4632dcff.scope - libcontainer container d7f565e6963089c43da560b35593f1828775204621c00e50814f1f6b4632dcff. Feb 13 15:18:39.304764 systemd[1]: Started cri-containerd-a85fd210da506b7e45d8734fe4fb57ed7b93dc6a251e44552e7bf69558087147.scope - libcontainer container a85fd210da506b7e45d8734fe4fb57ed7b93dc6a251e44552e7bf69558087147. 
Feb 13 15:18:39.445903 containerd[1932]: time="2025-02-13T15:18:39.444645064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rqqx2,Uid:077078f1-f3b3-4e78-9d7c-3668f5e12456,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7f565e6963089c43da560b35593f1828775204621c00e50814f1f6b4632dcff\"" Feb 13 15:18:39.459442 containerd[1932]: time="2025-02-13T15:18:39.458791276Z" level=info msg="CreateContainer within sandbox \"d7f565e6963089c43da560b35593f1828775204621c00e50814f1f6b4632dcff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:18:39.481026 containerd[1932]: time="2025-02-13T15:18:39.480883132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vkhh5,Uid:c3451082-69cc-4c2d-aba2-753190de3802,Namespace:kube-system,Attempt:0,} returns sandbox id \"a85fd210da506b7e45d8734fe4fb57ed7b93dc6a251e44552e7bf69558087147\"" Feb 13 15:18:39.500946 containerd[1932]: time="2025-02-13T15:18:39.498898708Z" level=info msg="CreateContainer within sandbox \"a85fd210da506b7e45d8734fe4fb57ed7b93dc6a251e44552e7bf69558087147\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:18:39.523593 containerd[1932]: time="2025-02-13T15:18:39.522435964Z" level=info msg="CreateContainer within sandbox \"d7f565e6963089c43da560b35593f1828775204621c00e50814f1f6b4632dcff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"109703fded4b72f1c0c8971368fe6b43e73ed8a99fd6954e4faa271f8c307dab\"" Feb 13 15:18:39.525265 containerd[1932]: time="2025-02-13T15:18:39.525099220Z" level=info msg="StartContainer for \"109703fded4b72f1c0c8971368fe6b43e73ed8a99fd6954e4faa271f8c307dab\"" Feb 13 15:18:39.573479 containerd[1932]: time="2025-02-13T15:18:39.571152125Z" level=info msg="CreateContainer within sandbox \"a85fd210da506b7e45d8734fe4fb57ed7b93dc6a251e44552e7bf69558087147\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"190f30d26eaa8dbc8c9f13d529fce177186a7ce3d7d8374b104141403307c4c9\"" Feb 13 15:18:39.576526 containerd[1932]: time="2025-02-13T15:18:39.576434033Z" level=info msg="StartContainer for \"190f30d26eaa8dbc8c9f13d529fce177186a7ce3d7d8374b104141403307c4c9\"" Feb 13 15:18:39.672328 systemd[1]: Started cri-containerd-190f30d26eaa8dbc8c9f13d529fce177186a7ce3d7d8374b104141403307c4c9.scope - libcontainer container 190f30d26eaa8dbc8c9f13d529fce177186a7ce3d7d8374b104141403307c4c9. Feb 13 15:18:39.685898 systemd[1]: Started cri-containerd-109703fded4b72f1c0c8971368fe6b43e73ed8a99fd6954e4faa271f8c307dab.scope - libcontainer container 109703fded4b72f1c0c8971368fe6b43e73ed8a99fd6954e4faa271f8c307dab. Feb 13 15:18:39.780679 containerd[1932]: time="2025-02-13T15:18:39.780593178Z" level=info msg="StartContainer for \"190f30d26eaa8dbc8c9f13d529fce177186a7ce3d7d8374b104141403307c4c9\" returns successfully" Feb 13 15:18:39.794198 containerd[1932]: time="2025-02-13T15:18:39.793917510Z" level=info msg="StartContainer for \"109703fded4b72f1c0c8971368fe6b43e73ed8a99fd6954e4faa271f8c307dab\" returns successfully" Feb 13 15:18:39.861555 kubelet[3344]: I0213 15:18:39.861263 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vkhh5" podStartSLOduration=34.86119785 podStartE2EDuration="34.86119785s" podCreationTimestamp="2025-02-13 15:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:39.853225062 +0000 UTC m=+46.789675949" watchObservedRunningTime="2025-02-13 15:18:39.86119785 +0000 UTC m=+46.797648773" Feb 13 15:18:39.902147 kubelet[3344]: I0213 15:18:39.901763 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rqqx2" podStartSLOduration=34.901641906 podStartE2EDuration="34.901641906s" podCreationTimestamp="2025-02-13 15:18:05 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:39.89616315 +0000 UTC m=+46.832614049" watchObservedRunningTime="2025-02-13 15:18:39.901641906 +0000 UTC m=+46.838092805" Feb 13 15:18:39.935483 systemd[1]: Started sshd@9-172.31.21.146:22-139.178.68.195:33628.service - OpenSSH per-connection server daemon (139.178.68.195:33628). Feb 13 15:18:40.139978 sshd[4735]: Accepted publickey for core from 139.178.68.195 port 33628 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:40.142249 sshd-session[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:40.150426 systemd-logind[1921]: New session 10 of user core. Feb 13 15:18:40.160597 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:18:40.429182 sshd[4741]: Connection closed by 139.178.68.195 port 33628 Feb 13 15:18:40.430423 sshd-session[4735]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:40.436790 systemd[1]: sshd@9-172.31.21.146:22-139.178.68.195:33628.service: Deactivated successfully. Feb 13 15:18:40.441547 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:18:40.443365 systemd-logind[1921]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:18:40.445003 systemd-logind[1921]: Removed session 10. Feb 13 15:18:45.476035 systemd[1]: Started sshd@10-172.31.21.146:22-139.178.68.195:33636.service - OpenSSH per-connection server daemon (139.178.68.195:33636). Feb 13 15:18:45.680994 sshd[4764]: Accepted publickey for core from 139.178.68.195 port 33636 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:45.684897 sshd-session[4764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:45.697861 systemd-logind[1921]: New session 11 of user core. Feb 13 15:18:45.705836 systemd[1]: Started session-11.scope - Session 11 of User core. 
Feb 13 15:18:45.986362 sshd[4766]: Connection closed by 139.178.68.195 port 33636 Feb 13 15:18:45.986922 sshd-session[4764]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:45.995809 systemd[1]: sshd@10-172.31.21.146:22-139.178.68.195:33636.service: Deactivated successfully. Feb 13 15:18:46.001910 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:18:46.006526 systemd-logind[1921]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:18:46.010370 systemd-logind[1921]: Removed session 11. Feb 13 15:18:51.025911 systemd[1]: Started sshd@11-172.31.21.146:22-139.178.68.195:45932.service - OpenSSH per-connection server daemon (139.178.68.195:45932). Feb 13 15:18:51.224502 sshd[4778]: Accepted publickey for core from 139.178.68.195 port 45932 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:51.227681 sshd-session[4778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:51.237426 systemd-logind[1921]: New session 12 of user core. Feb 13 15:18:51.245577 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:18:51.521295 sshd[4780]: Connection closed by 139.178.68.195 port 45932 Feb 13 15:18:51.522671 sshd-session[4778]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:51.531198 systemd[1]: sshd@11-172.31.21.146:22-139.178.68.195:45932.service: Deactivated successfully. Feb 13 15:18:51.536061 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:18:51.537881 systemd-logind[1921]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:18:51.541029 systemd-logind[1921]: Removed session 12. Feb 13 15:18:51.567946 systemd[1]: Started sshd@12-172.31.21.146:22-139.178.68.195:45942.service - OpenSSH per-connection server daemon (139.178.68.195:45942). 
Feb 13 15:18:51.752981 sshd[4792]: Accepted publickey for core from 139.178.68.195 port 45942 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:51.756470 sshd-session[4792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:51.766713 systemd-logind[1921]: New session 13 of user core. Feb 13 15:18:51.773643 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:18:52.128701 sshd[4794]: Connection closed by 139.178.68.195 port 45942 Feb 13 15:18:52.131835 sshd-session[4792]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:52.141887 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:18:52.143909 systemd[1]: sshd@12-172.31.21.146:22-139.178.68.195:45942.service: Deactivated successfully. Feb 13 15:18:52.158655 systemd-logind[1921]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:18:52.193456 systemd[1]: Started sshd@13-172.31.21.146:22-139.178.68.195:45946.service - OpenSSH per-connection server daemon (139.178.68.195:45946). Feb 13 15:18:52.198657 systemd-logind[1921]: Removed session 13. Feb 13 15:18:52.377258 sshd[4803]: Accepted publickey for core from 139.178.68.195 port 45946 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:52.380389 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:52.392904 systemd-logind[1921]: New session 14 of user core. Feb 13 15:18:52.397572 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:18:52.667072 sshd[4805]: Connection closed by 139.178.68.195 port 45946 Feb 13 15:18:52.668451 sshd-session[4803]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:52.677141 systemd[1]: sshd@13-172.31.21.146:22-139.178.68.195:45946.service: Deactivated successfully. Feb 13 15:18:52.682133 systemd[1]: session-14.scope: Deactivated successfully. 
Feb 13 15:18:52.686053 systemd-logind[1921]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:18:52.689833 systemd-logind[1921]: Removed session 14. Feb 13 15:18:57.711209 systemd[1]: Started sshd@14-172.31.21.146:22-139.178.68.195:38056.service - OpenSSH per-connection server daemon (139.178.68.195:38056). Feb 13 15:18:57.924439 sshd[4818]: Accepted publickey for core from 139.178.68.195 port 38056 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:57.929972 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:57.944120 systemd-logind[1921]: New session 15 of user core. Feb 13 15:18:57.954809 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:18:58.231931 sshd[4820]: Connection closed by 139.178.68.195 port 38056 Feb 13 15:18:58.233593 sshd-session[4818]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:58.242565 systemd[1]: sshd@14-172.31.21.146:22-139.178.68.195:38056.service: Deactivated successfully. Feb 13 15:18:58.248796 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:18:58.255004 systemd-logind[1921]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:18:58.259362 systemd-logind[1921]: Removed session 15. 
Feb 13 15:18:58.945022 update_engine[1922]: I20250213 15:18:58.944924 1922 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 15:18:58.945022 update_engine[1922]: I20250213 15:18:58.945014 1922 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 15:18:58.946032 update_engine[1922]: I20250213 15:18:58.945433 1922 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 15:18:58.946811 update_engine[1922]: I20250213 15:18:58.946704 1922 omaha_request_params.cc:62] Current group set to stable Feb 13 15:18:58.947012 update_engine[1922]: I20250213 15:18:58.946944 1922 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 15:18:58.947012 update_engine[1922]: I20250213 15:18:58.946986 1922 update_attempter.cc:643] Scheduling an action processor start. Feb 13 15:18:58.947119 update_engine[1922]: I20250213 15:18:58.947028 1922 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 15:18:58.947168 update_engine[1922]: I20250213 15:18:58.947108 1922 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 15:18:58.948478 update_engine[1922]: I20250213 15:18:58.947248 1922 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 15:18:58.948478 update_engine[1922]: I20250213 15:18:58.947358 1922 omaha_request_action.cc:272] Request: Feb 13 15:18:58.948478 update_engine[1922]: Feb 13 15:18:58.948478 update_engine[1922]: Feb 13 15:18:58.948478 update_engine[1922]: Feb 13 15:18:58.948478 update_engine[1922]: Feb 13 15:18:58.948478 update_engine[1922]: Feb 13 15:18:58.948478 update_engine[1922]: Feb 13 15:18:58.948478 update_engine[1922]: Feb 13 15:18:58.948478 update_engine[1922]: Feb 13 15:18:58.948478 update_engine[1922]: I20250213 15:18:58.947412 1922 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 15:18:58.950112 locksmithd[1959]: LastCheckedTime=0 Progress=0 
CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 15:18:58.951462 update_engine[1922]: I20250213 15:18:58.950351 1922 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 15:18:58.952063 update_engine[1922]: I20250213 15:18:58.951676 1922 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 15:18:59.003579 update_engine[1922]: E20250213 15:18:59.003396 1922 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 15:18:59.003821 update_engine[1922]: I20250213 15:18:59.003707 1922 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 15:19:03.276980 systemd[1]: Started sshd@15-172.31.21.146:22-139.178.68.195:38072.service - OpenSSH per-connection server daemon (139.178.68.195:38072). Feb 13 15:19:03.482012 sshd[4831]: Accepted publickey for core from 139.178.68.195 port 38072 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:19:03.485153 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:03.494667 systemd-logind[1921]: New session 16 of user core. Feb 13 15:19:03.500560 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:19:03.763307 sshd[4833]: Connection closed by 139.178.68.195 port 38072 Feb 13 15:19:03.762534 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:03.769088 systemd[1]: sshd@15-172.31.21.146:22-139.178.68.195:38072.service: Deactivated successfully. Feb 13 15:19:03.774335 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:19:03.779984 systemd-logind[1921]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:19:03.785736 systemd-logind[1921]: Removed session 16. Feb 13 15:19:08.801770 systemd[1]: Started sshd@16-172.31.21.146:22-139.178.68.195:46316.service - OpenSSH per-connection server daemon (139.178.68.195:46316). 
Feb 13 15:19:08.947302 update_engine[1922]: I20250213 15:19:08.946528 1922 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:19:08.947302 update_engine[1922]: I20250213 15:19:08.946868 1922 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:19:08.947302 update_engine[1922]: I20250213 15:19:08.947208 1922 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:19:08.948451 update_engine[1922]: E20250213 15:19:08.948400 1922 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:19:08.948740 update_engine[1922]: I20250213 15:19:08.948611 1922 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 15:19:08.996183 sshd[4848]: Accepted publickey for core from 139.178.68.195 port 46316 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:08.999674 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:09.010723 systemd-logind[1921]: New session 17 of user core.
Feb 13 15:19:09.020698 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:19:09.291484 sshd[4850]: Connection closed by 139.178.68.195 port 46316
Feb 13 15:19:09.290311 sshd-session[4848]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:09.298661 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:19:09.301260 systemd[1]: sshd@16-172.31.21.146:22-139.178.68.195:46316.service: Deactivated successfully.
Feb 13 15:19:09.308437 systemd-logind[1921]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:19:09.310616 systemd-logind[1921]: Removed session 17.
Feb 13 15:19:09.340783 systemd[1]: Started sshd@17-172.31.21.146:22-139.178.68.195:46328.service - OpenSSH per-connection server daemon (139.178.68.195:46328).
Feb 13 15:19:09.533460 sshd[4860]: Accepted publickey for core from 139.178.68.195 port 46328 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:09.536463 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:09.544893 systemd-logind[1921]: New session 18 of user core.
Feb 13 15:19:09.553548 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:19:09.882518 sshd[4862]: Connection closed by 139.178.68.195 port 46328
Feb 13 15:19:09.883698 sshd-session[4860]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:09.892229 systemd[1]: sshd@17-172.31.21.146:22-139.178.68.195:46328.service: Deactivated successfully.
Feb 13 15:19:09.897486 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:19:09.899679 systemd-logind[1921]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:19:09.912740 systemd-logind[1921]: Removed session 18.
Feb 13 15:19:09.918819 systemd[1]: Started sshd@18-172.31.21.146:22-139.178.68.195:46334.service - OpenSSH per-connection server daemon (139.178.68.195:46334).
Feb 13 15:19:10.120396 sshd[4870]: Accepted publickey for core from 139.178.68.195 port 46334 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:10.123252 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:10.132823 systemd-logind[1921]: New session 19 of user core.
Feb 13 15:19:10.138663 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:19:12.806966 sshd[4872]: Connection closed by 139.178.68.195 port 46334
Feb 13 15:19:12.807738 sshd-session[4870]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:12.822634 systemd-logind[1921]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:19:12.824806 systemd[1]: sshd@18-172.31.21.146:22-139.178.68.195:46334.service: Deactivated successfully.
Feb 13 15:19:12.834756 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:19:12.854846 systemd-logind[1921]: Removed session 19.
Feb 13 15:19:12.866355 systemd[1]: Started sshd@19-172.31.21.146:22-139.178.68.195:46342.service - OpenSSH per-connection server daemon (139.178.68.195:46342).
Feb 13 15:19:13.076855 sshd[4889]: Accepted publickey for core from 139.178.68.195 port 46342 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:13.080372 sshd-session[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:13.089086 systemd-logind[1921]: New session 20 of user core.
Feb 13 15:19:13.097614 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:19:13.648807 sshd[4891]: Connection closed by 139.178.68.195 port 46342
Feb 13 15:19:13.648674 sshd-session[4889]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:13.661244 systemd[1]: sshd@19-172.31.21.146:22-139.178.68.195:46342.service: Deactivated successfully.
Feb 13 15:19:13.666956 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:19:13.669008 systemd-logind[1921]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:19:13.690920 systemd[1]: Started sshd@20-172.31.21.146:22-139.178.68.195:46346.service - OpenSSH per-connection server daemon (139.178.68.195:46346).
Feb 13 15:19:13.696662 systemd-logind[1921]: Removed session 20.
Feb 13 15:19:13.890569 sshd[4900]: Accepted publickey for core from 139.178.68.195 port 46346 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:13.893848 sshd-session[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:13.904541 systemd-logind[1921]: New session 21 of user core.
Feb 13 15:19:13.911614 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:19:14.213930 sshd[4902]: Connection closed by 139.178.68.195 port 46346
Feb 13 15:19:14.217053 sshd-session[4900]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:14.224375 systemd[1]: sshd@20-172.31.21.146:22-139.178.68.195:46346.service: Deactivated successfully.
Feb 13 15:19:14.227917 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 15:19:14.230444 systemd-logind[1921]: Session 21 logged out. Waiting for processes to exit.
Feb 13 15:19:14.232501 systemd-logind[1921]: Removed session 21.
Feb 13 15:19:18.948316 update_engine[1922]: I20250213 15:19:18.947919 1922 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:19:18.948853 update_engine[1922]: I20250213 15:19:18.948352 1922 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:19:18.948853 update_engine[1922]: I20250213 15:19:18.948725 1922 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:19:18.949571 update_engine[1922]: E20250213 15:19:18.949230 1922 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:19:18.949571 update_engine[1922]: I20250213 15:19:18.949452 1922 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 15:19:19.257860 systemd[1]: Started sshd@21-172.31.21.146:22-139.178.68.195:39330.service - OpenSSH per-connection server daemon (139.178.68.195:39330).
Feb 13 15:19:19.472538 sshd[4913]: Accepted publickey for core from 139.178.68.195 port 39330 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:19.475530 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:19.484711 systemd-logind[1921]: New session 22 of user core.
Feb 13 15:19:19.494539 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 15:19:19.758897 sshd[4918]: Connection closed by 139.178.68.195 port 39330
Feb 13 15:19:19.759553 sshd-session[4913]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:19.770344 systemd[1]: sshd@21-172.31.21.146:22-139.178.68.195:39330.service: Deactivated successfully.
Feb 13 15:19:19.775977 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 15:19:19.777760 systemd-logind[1921]: Session 22 logged out. Waiting for processes to exit.
Feb 13 15:19:19.781346 systemd-logind[1921]: Removed session 22.
Feb 13 15:19:24.803810 systemd[1]: Started sshd@22-172.31.21.146:22-139.178.68.195:39346.service - OpenSSH per-connection server daemon (139.178.68.195:39346).
Feb 13 15:19:25.004847 sshd[4929]: Accepted publickey for core from 139.178.68.195 port 39346 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:25.008232 sshd-session[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:25.017470 systemd-logind[1921]: New session 23 of user core.
Feb 13 15:19:25.024554 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 15:19:25.290313 sshd[4931]: Connection closed by 139.178.68.195 port 39346
Feb 13 15:19:25.291345 sshd-session[4929]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:25.297670 systemd[1]: sshd@22-172.31.21.146:22-139.178.68.195:39346.service: Deactivated successfully.
Feb 13 15:19:25.307253 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 15:19:25.312015 systemd-logind[1921]: Session 23 logged out. Waiting for processes to exit.
Feb 13 15:19:25.313980 systemd-logind[1921]: Removed session 23.
Feb 13 15:19:28.946306 update_engine[1922]: I20250213 15:19:28.946197 1922 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:19:28.946946 update_engine[1922]: I20250213 15:19:28.946601 1922 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:19:28.946946 update_engine[1922]: I20250213 15:19:28.946902 1922 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:19:28.947458 update_engine[1922]: E20250213 15:19:28.947402 1922 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:19:28.947567 update_engine[1922]: I20250213 15:19:28.947490 1922 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 15:19:28.947567 update_engine[1922]: I20250213 15:19:28.947513 1922 omaha_request_action.cc:617] Omaha request response:
Feb 13 15:19:28.947674 update_engine[1922]: E20250213 15:19:28.947646 1922 omaha_request_action.cc:636] Omaha request network transfer failed.
Feb 13 15:19:28.947731 update_engine[1922]: I20250213 15:19:28.947691 1922 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 15:19:28.947731 update_engine[1922]: I20250213 15:19:28.947710 1922 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 15:19:28.947856 update_engine[1922]: I20250213 15:19:28.947724 1922 update_attempter.cc:306] Processing Done.
Feb 13 15:19:28.947856 update_engine[1922]: E20250213 15:19:28.947772 1922 update_attempter.cc:619] Update failed.
Feb 13 15:19:28.947856 update_engine[1922]: I20250213 15:19:28.947789 1922 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 15:19:28.947856 update_engine[1922]: I20250213 15:19:28.947805 1922 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 15:19:28.947856 update_engine[1922]: I20250213 15:19:28.947824 1922 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 15:19:28.948118 update_engine[1922]: I20250213 15:19:28.947997 1922 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 15:19:28.948118 update_engine[1922]: I20250213 15:19:28.948072 1922 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 15:19:28.948118 update_engine[1922]: I20250213 15:19:28.948093 1922 omaha_request_action.cc:272] Request:
Feb 13 15:19:28.948118 update_engine[1922]:
Feb 13 15:19:28.948118 update_engine[1922]:
Feb 13 15:19:28.948118 update_engine[1922]:
Feb 13 15:19:28.948118 update_engine[1922]:
Feb 13 15:19:28.948118 update_engine[1922]:
Feb 13 15:19:28.948118 update_engine[1922]:
Feb 13 15:19:28.948565 update_engine[1922]: I20250213 15:19:28.948112 1922 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 15:19:28.948565 update_engine[1922]: I20250213 15:19:28.948486 1922 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 15:19:28.949015 update_engine[1922]: I20250213 15:19:28.948763 1922 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 15:19:28.949201 locksmithd[1959]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 15:19:28.950139 update_engine[1922]: E20250213 15:19:28.949472 1922 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 15:19:28.950139 update_engine[1922]: I20250213 15:19:28.949570 1922 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 15:19:28.950139 update_engine[1922]: I20250213 15:19:28.949589 1922 omaha_request_action.cc:617] Omaha request response:
Feb 13 15:19:28.950139 update_engine[1922]: I20250213 15:19:28.949610 1922 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 15:19:28.950139 update_engine[1922]: I20250213 15:19:28.949626 1922 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 15:19:28.950139 update_engine[1922]: I20250213 15:19:28.949641 1922 update_attempter.cc:306] Processing Done.
Feb 13 15:19:28.950139 update_engine[1922]: I20250213 15:19:28.949661 1922 update_attempter.cc:310] Error event sent.
Feb 13 15:19:28.950139 update_engine[1922]: I20250213 15:19:28.949683 1922 update_check_scheduler.cc:74] Next update check in 45m20s
Feb 13 15:19:28.950685 locksmithd[1959]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 15:19:30.335916 systemd[1]: Started sshd@23-172.31.21.146:22-139.178.68.195:56674.service - OpenSSH per-connection server daemon (139.178.68.195:56674).
Feb 13 15:19:30.543027 sshd[4941]: Accepted publickey for core from 139.178.68.195 port 56674 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:30.546117 sshd-session[4941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:30.554458 systemd-logind[1921]: New session 24 of user core.
Feb 13 15:19:30.565573 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 15:19:30.820193 sshd[4943]: Connection closed by 139.178.68.195 port 56674
Feb 13 15:19:30.821228 sshd-session[4941]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:30.828807 systemd[1]: sshd@23-172.31.21.146:22-139.178.68.195:56674.service: Deactivated successfully.
Feb 13 15:19:30.834156 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 15:19:30.836834 systemd-logind[1921]: Session 24 logged out. Waiting for processes to exit.
Feb 13 15:19:30.839100 systemd-logind[1921]: Removed session 24.
Feb 13 15:19:35.859804 systemd[1]: Started sshd@24-172.31.21.146:22-139.178.68.195:56676.service - OpenSSH per-connection server daemon (139.178.68.195:56676).
Feb 13 15:19:36.056296 sshd[4953]: Accepted publickey for core from 139.178.68.195 port 56676 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:36.059178 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:36.067077 systemd-logind[1921]: New session 25 of user core.
Feb 13 15:19:36.076642 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 15:19:36.326674 sshd[4955]: Connection closed by 139.178.68.195 port 56676
Feb 13 15:19:36.326224 sshd-session[4953]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:36.334214 systemd[1]: sshd@24-172.31.21.146:22-139.178.68.195:56676.service: Deactivated successfully.
Feb 13 15:19:36.338692 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:19:36.340867 systemd-logind[1921]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:19:36.342732 systemd-logind[1921]: Removed session 25.
Feb 13 15:19:36.369154 systemd[1]: Started sshd@25-172.31.21.146:22-139.178.68.195:56692.service - OpenSSH per-connection server daemon (139.178.68.195:56692).
Feb 13 15:19:36.569851 sshd[4966]: Accepted publickey for core from 139.178.68.195 port 56692 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:36.572686 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:36.581922 systemd-logind[1921]: New session 26 of user core.
Feb 13 15:19:36.594690 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:19:39.827527 containerd[1932]: time="2025-02-13T15:19:39.827419552Z" level=info msg="StopContainer for \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\" with timeout 30 (s)"
Feb 13 15:19:39.829612 containerd[1932]: time="2025-02-13T15:19:39.829234648Z" level=info msg="Stop container \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\" with signal terminated"
Feb 13 15:19:39.850360 containerd[1932]: time="2025-02-13T15:19:39.850186564Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:19:39.864993 systemd[1]: cri-containerd-a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4.scope: Deactivated successfully.
Feb 13 15:19:39.869702 containerd[1932]: time="2025-02-13T15:19:39.869220388Z" level=info msg="StopContainer for \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\" with timeout 2 (s)"
Feb 13 15:19:39.870355 containerd[1932]: time="2025-02-13T15:19:39.870143224Z" level=info msg="Stop container \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\" with signal terminated"
Feb 13 15:19:39.890009 systemd-networkd[1775]: lxc_health: Link DOWN
Feb 13 15:19:39.890030 systemd-networkd[1775]: lxc_health: Lost carrier
Feb 13 15:19:39.924466 systemd[1]: cri-containerd-0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f.scope: Deactivated successfully.
Feb 13 15:19:39.925531 systemd[1]: cri-containerd-0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f.scope: Consumed 16.990s CPU time.
Feb 13 15:19:39.956801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4-rootfs.mount: Deactivated successfully.
Feb 13 15:19:39.988694 containerd[1932]: time="2025-02-13T15:19:39.987482525Z" level=info msg="shim disconnected" id=a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4 namespace=k8s.io
Feb 13 15:19:39.989672 containerd[1932]: time="2025-02-13T15:19:39.988792325Z" level=warning msg="cleaning up after shim disconnected" id=a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4 namespace=k8s.io
Feb 13 15:19:39.988730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f-rootfs.mount: Deactivated successfully.
Feb 13 15:19:39.992675 containerd[1932]: time="2025-02-13T15:19:39.988829429Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:39.995927 containerd[1932]: time="2025-02-13T15:19:39.995771633Z" level=info msg="shim disconnected" id=0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f namespace=k8s.io
Feb 13 15:19:39.995927 containerd[1932]: time="2025-02-13T15:19:39.995873405Z" level=warning msg="cleaning up after shim disconnected" id=0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f namespace=k8s.io
Feb 13 15:19:39.996332 containerd[1932]: time="2025-02-13T15:19:39.995894201Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:40.038077 containerd[1932]: time="2025-02-13T15:19:40.037914781Z" level=info msg="StopContainer for \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\" returns successfully"
Feb 13 15:19:40.039842 containerd[1932]: time="2025-02-13T15:19:40.039569149Z" level=info msg="StopPodSandbox for \"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\""
Feb 13 15:19:40.040328 containerd[1932]: time="2025-02-13T15:19:40.039600685Z" level=info msg="StopContainer for \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\" returns successfully"
Feb 13 15:19:40.040328 containerd[1932]: time="2025-02-13T15:19:40.040089661Z" level=info msg="Container to stop \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:19:40.040878 containerd[1932]: time="2025-02-13T15:19:40.040740961Z" level=info msg="StopPodSandbox for \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\""
Feb 13 15:19:40.040878 containerd[1932]: time="2025-02-13T15:19:40.040831441Z" level=info msg="Container to stop \"e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:19:40.043316 containerd[1932]: time="2025-02-13T15:19:40.041060677Z" level=info msg="Container to stop \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:19:40.043316 containerd[1932]: time="2025-02-13T15:19:40.041151049Z" level=info msg="Container to stop \"079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:19:40.043316 containerd[1932]: time="2025-02-13T15:19:40.041177497Z" level=info msg="Container to stop \"ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:19:40.043897 containerd[1932]: time="2025-02-13T15:19:40.041198977Z" level=info msg="Container to stop \"c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:19:40.046116 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f-shm.mount: Deactivated successfully.
Feb 13 15:19:40.052974 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d-shm.mount: Deactivated successfully.
Feb 13 15:19:40.066601 systemd[1]: cri-containerd-a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d.scope: Deactivated successfully.
Feb 13 15:19:40.073574 systemd[1]: cri-containerd-12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f.scope: Deactivated successfully.
Feb 13 15:19:40.152611 containerd[1932]: time="2025-02-13T15:19:40.149455345Z" level=info msg="shim disconnected" id=a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d namespace=k8s.io
Feb 13 15:19:40.152611 containerd[1932]: time="2025-02-13T15:19:40.149552521Z" level=warning msg="cleaning up after shim disconnected" id=a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d namespace=k8s.io
Feb 13 15:19:40.152611 containerd[1932]: time="2025-02-13T15:19:40.149574193Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:40.152611 containerd[1932]: time="2025-02-13T15:19:40.149457841Z" level=info msg="shim disconnected" id=12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f namespace=k8s.io
Feb 13 15:19:40.154084 containerd[1932]: time="2025-02-13T15:19:40.153981529Z" level=warning msg="cleaning up after shim disconnected" id=12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f namespace=k8s.io
Feb 13 15:19:40.154357 containerd[1932]: time="2025-02-13T15:19:40.154322197Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:40.188128 containerd[1932]: time="2025-02-13T15:19:40.188053714Z" level=info msg="TearDown network for sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" successfully"
Feb 13 15:19:40.188349 containerd[1932]: time="2025-02-13T15:19:40.188129834Z" level=info msg="StopPodSandbox for \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" returns successfully"
Feb 13 15:19:40.201391 containerd[1932]: time="2025-02-13T15:19:40.201150338Z" level=info msg="TearDown network for sandbox \"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\" successfully"
Feb 13 15:19:40.201391 containerd[1932]: time="2025-02-13T15:19:40.201212114Z" level=info msg="StopPodSandbox for \"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\" returns successfully"
Feb 13 15:19:40.262421 kubelet[3344]: I0213 15:19:40.262356 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-host-proc-sys-net\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.263082 kubelet[3344]: I0213 15:19:40.262437 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-host-proc-sys-kernel\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.263082 kubelet[3344]: I0213 15:19:40.262496 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-bpf-maps\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.263082 kubelet[3344]: I0213 15:19:40.262551 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9a7ebda-8d82-4a70-a546-c8d898adb14f-cilium-config-path\") pod \"b9a7ebda-8d82-4a70-a546-c8d898adb14f\" (UID: \"b9a7ebda-8d82-4a70-a546-c8d898adb14f\") "
Feb 13 15:19:40.263082 kubelet[3344]: I0213 15:19:40.262597 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e846df7-750a-44ae-8992-21888b096c05-cilium-config-path\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.263082 kubelet[3344]: I0213 15:19:40.262640 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e846df7-750a-44ae-8992-21888b096c05-hubble-tls\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.263082 kubelet[3344]: I0213 15:19:40.262683 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5fst\" (UniqueName: \"kubernetes.io/projected/2e846df7-750a-44ae-8992-21888b096c05-kube-api-access-z5fst\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.264727 kubelet[3344]: I0213 15:19:40.262721 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-cilium-cgroup\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.264727 kubelet[3344]: I0213 15:19:40.262794 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-etc-cni-netd\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.264727 kubelet[3344]: I0213 15:19:40.262833 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-cilium-run\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.264727 kubelet[3344]: I0213 15:19:40.262878 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e846df7-750a-44ae-8992-21888b096c05-clustermesh-secrets\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.264727 kubelet[3344]: I0213 15:19:40.262922 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-cni-path\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.264727 kubelet[3344]: I0213 15:19:40.262964 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-lib-modules\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.265045 kubelet[3344]: I0213 15:19:40.263002 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-hostproc\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.265045 kubelet[3344]: I0213 15:19:40.263039 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-xtables-lock\") pod \"2e846df7-750a-44ae-8992-21888b096c05\" (UID: \"2e846df7-750a-44ae-8992-21888b096c05\") "
Feb 13 15:19:40.265045 kubelet[3344]: I0213 15:19:40.263085 3344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swnsl\" (UniqueName: \"kubernetes.io/projected/b9a7ebda-8d82-4a70-a546-c8d898adb14f-kube-api-access-swnsl\") pod \"b9a7ebda-8d82-4a70-a546-c8d898adb14f\" (UID: \"b9a7ebda-8d82-4a70-a546-c8d898adb14f\") "
Feb 13 15:19:40.265045 kubelet[3344]: I0213 15:19:40.263435 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:40.265045 kubelet[3344]: I0213 15:19:40.263526 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:40.266315 kubelet[3344]: I0213 15:19:40.263569 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:40.266315 kubelet[3344]: I0213 15:19:40.263617 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:40.270878 kubelet[3344]: I0213 15:19:40.270549 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:40.270878 kubelet[3344]: I0213 15:19:40.270647 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:40.276256 kubelet[3344]: I0213 15:19:40.275957 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-cni-path" (OuterVolumeSpecName: "cni-path") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:40.276256 kubelet[3344]: I0213 15:19:40.276112 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:40.276256 kubelet[3344]: I0213 15:19:40.276181 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-hostproc" (OuterVolumeSpecName: "hostproc") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:40.276783 kubelet[3344]: I0213 15:19:40.276225 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:19:40.280857 kubelet[3344]: I0213 15:19:40.279423 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9a7ebda-8d82-4a70-a546-c8d898adb14f-kube-api-access-swnsl" (OuterVolumeSpecName: "kube-api-access-swnsl") pod "b9a7ebda-8d82-4a70-a546-c8d898adb14f" (UID: "b9a7ebda-8d82-4a70-a546-c8d898adb14f"). InnerVolumeSpecName "kube-api-access-swnsl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:19:40.292934 kubelet[3344]: I0213 15:19:40.292516 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e846df7-750a-44ae-8992-21888b096c05-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:19:40.302795 kubelet[3344]: I0213 15:19:40.302607 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e846df7-750a-44ae-8992-21888b096c05-kube-api-access-z5fst" (OuterVolumeSpecName: "kube-api-access-z5fst") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "kube-api-access-z5fst". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:19:40.306809 kubelet[3344]: I0213 15:19:40.305208 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e846df7-750a-44ae-8992-21888b096c05-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:19:40.309300 kubelet[3344]: I0213 15:19:40.309214 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e846df7-750a-44ae-8992-21888b096c05-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2e846df7-750a-44ae-8992-21888b096c05" (UID: "2e846df7-750a-44ae-8992-21888b096c05"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:19:40.312836 kubelet[3344]: I0213 15:19:40.312735 3344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9a7ebda-8d82-4a70-a546-c8d898adb14f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b9a7ebda-8d82-4a70-a546-c8d898adb14f" (UID: "b9a7ebda-8d82-4a70-a546-c8d898adb14f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:19:40.364367 kubelet[3344]: I0213 15:19:40.363863 3344 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-lib-modules\") on node \"ip-172-31-21-146\" DevicePath \"\""
Feb 13 15:19:40.364367 kubelet[3344]: I0213 15:19:40.363928 3344 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-hostproc\") on node \"ip-172-31-21-146\" DevicePath \"\""
Feb 13 15:19:40.364367 kubelet[3344]: I0213 15:19:40.363956 3344 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-xtables-lock\") on node \"ip-172-31-21-146\" DevicePath \"\""
Feb 13 15:19:40.364367 kubelet[3344]: I0213 15:19:40.363983 3344 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-swnsl\" (UniqueName: \"kubernetes.io/projected/b9a7ebda-8d82-4a70-a546-c8d898adb14f-kube-api-access-swnsl\") on node \"ip-172-31-21-146\" DevicePath \"\""
Feb 13 15:19:40.364367 kubelet[3344]: I0213 15:19:40.364010 3344 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-host-proc-sys-kernel\") on node \"ip-172-31-21-146\" DevicePath \"\""
Feb 13 15:19:40.364367 kubelet[3344]: I0213 15:19:40.364036 3344 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-bpf-maps\") on node \"ip-172-31-21-146\" DevicePath \"\""
Feb 13 15:19:40.364367 kubelet[3344]: I0213 15:19:40.364060 3344 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-host-proc-sys-net\") on node \"ip-172-31-21-146\" DevicePath \"\""
Feb 13 15:19:40.364367 kubelet[3344]: I0213 15:19:40.364084 3344 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e846df7-750a-44ae-8992-21888b096c05-cilium-config-path\") on node \"ip-172-31-21-146\" DevicePath \"\"" Feb 13 15:19:40.365055 kubelet[3344]: I0213 15:19:40.364113 3344 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9a7ebda-8d82-4a70-a546-c8d898adb14f-cilium-config-path\") on node \"ip-172-31-21-146\" DevicePath \"\"" Feb 13 15:19:40.365055 kubelet[3344]: I0213 15:19:40.364137 3344 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e846df7-750a-44ae-8992-21888b096c05-hubble-tls\") on node \"ip-172-31-21-146\" DevicePath \"\"" Feb 13 15:19:40.365055 kubelet[3344]: I0213 15:19:40.364163 3344 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-z5fst\" (UniqueName: \"kubernetes.io/projected/2e846df7-750a-44ae-8992-21888b096c05-kube-api-access-z5fst\") on node \"ip-172-31-21-146\" DevicePath \"\"" Feb 13 15:19:40.365055 kubelet[3344]: I0213 15:19:40.364189 3344 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-etc-cni-netd\") on node \"ip-172-31-21-146\" DevicePath \"\"" Feb 13 15:19:40.365055 kubelet[3344]: I0213 15:19:40.364214 3344 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-cilium-cgroup\") on node \"ip-172-31-21-146\" DevicePath \"\"" Feb 13 15:19:40.365055 kubelet[3344]: I0213 15:19:40.364237 3344 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-cilium-run\") on node \"ip-172-31-21-146\" DevicePath \"\"" Feb 13 15:19:40.365055 kubelet[3344]: I0213 
15:19:40.364262 3344 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e846df7-750a-44ae-8992-21888b096c05-clustermesh-secrets\") on node \"ip-172-31-21-146\" DevicePath \"\"" Feb 13 15:19:40.365055 kubelet[3344]: I0213 15:19:40.364328 3344 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e846df7-750a-44ae-8992-21888b096c05-cni-path\") on node \"ip-172-31-21-146\" DevicePath \"\"" Feb 13 15:19:40.811297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f-rootfs.mount: Deactivated successfully. Feb 13 15:19:40.811513 systemd[1]: var-lib-kubelet-pods-b9a7ebda\x2d8d82\x2d4a70\x2da546\x2dc8d898adb14f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dswnsl.mount: Deactivated successfully. Feb 13 15:19:40.811666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d-rootfs.mount: Deactivated successfully. Feb 13 15:19:40.811805 systemd[1]: var-lib-kubelet-pods-2e846df7\x2d750a\x2d44ae\x2d8992\x2d21888b096c05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz5fst.mount: Deactivated successfully. Feb 13 15:19:40.811951 systemd[1]: var-lib-kubelet-pods-2e846df7\x2d750a\x2d44ae\x2d8992\x2d21888b096c05-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:19:40.812122 systemd[1]: var-lib-kubelet-pods-2e846df7\x2d750a\x2d44ae\x2d8992\x2d21888b096c05-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 13 15:19:41.030603 kubelet[3344]: I0213 15:19:41.030427 3344 scope.go:117] "RemoveContainer" containerID="0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f"
Feb 13 15:19:41.038622 containerd[1932]: time="2025-02-13T15:19:41.036008318Z" level=info msg="RemoveContainer for \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\""
Feb 13 15:19:41.050410 systemd[1]: Removed slice kubepods-burstable-pod2e846df7_750a_44ae_8992_21888b096c05.slice - libcontainer container kubepods-burstable-pod2e846df7_750a_44ae_8992_21888b096c05.slice.
Feb 13 15:19:41.050792 systemd[1]: kubepods-burstable-pod2e846df7_750a_44ae_8992_21888b096c05.slice: Consumed 17.198s CPU time.
Feb 13 15:19:41.053668 containerd[1932]: time="2025-02-13T15:19:41.053391866Z" level=info msg="RemoveContainer for \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\" returns successfully"
Feb 13 15:19:41.059495 kubelet[3344]: I0213 15:19:41.058882 3344 scope.go:117] "RemoveContainer" containerID="c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6"
Feb 13 15:19:41.062939 systemd[1]: Removed slice kubepods-besteffort-podb9a7ebda_8d82_4a70_a546_c8d898adb14f.slice - libcontainer container kubepods-besteffort-podb9a7ebda_8d82_4a70_a546_c8d898adb14f.slice.
Feb 13 15:19:41.069382 containerd[1932]: time="2025-02-13T15:19:41.068102198Z" level=info msg="RemoveContainer for \"c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6\""
Feb 13 15:19:41.076003 containerd[1932]: time="2025-02-13T15:19:41.075875282Z" level=info msg="RemoveContainer for \"c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6\" returns successfully"
Feb 13 15:19:41.077429 kubelet[3344]: I0213 15:19:41.077301 3344 scope.go:117] "RemoveContainer" containerID="ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f"
Feb 13 15:19:41.083787 containerd[1932]: time="2025-02-13T15:19:41.083642486Z" level=info msg="RemoveContainer for \"ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f\""
Feb 13 15:19:41.094509 containerd[1932]: time="2025-02-13T15:19:41.093900734Z" level=info msg="RemoveContainer for \"ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f\" returns successfully"
Feb 13 15:19:41.095254 kubelet[3344]: I0213 15:19:41.095077 3344 scope.go:117] "RemoveContainer" containerID="079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7"
Feb 13 15:19:41.099373 containerd[1932]: time="2025-02-13T15:19:41.099095366Z" level=info msg="RemoveContainer for \"079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7\""
Feb 13 15:19:41.106284 containerd[1932]: time="2025-02-13T15:19:41.106101998Z" level=info msg="RemoveContainer for \"079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7\" returns successfully"
Feb 13 15:19:41.107308 kubelet[3344]: I0213 15:19:41.107029 3344 scope.go:117] "RemoveContainer" containerID="e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2"
Feb 13 15:19:41.110980 containerd[1932]: time="2025-02-13T15:19:41.110082386Z" level=info msg="RemoveContainer for \"e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2\""
Feb 13 15:19:41.123200 containerd[1932]: time="2025-02-13T15:19:41.123078626Z" level=info msg="RemoveContainer for \"e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2\" returns successfully"
Feb 13 15:19:41.123744 kubelet[3344]: I0213 15:19:41.123564 3344 scope.go:117] "RemoveContainer" containerID="0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f"
Feb 13 15:19:41.124265 containerd[1932]: time="2025-02-13T15:19:41.124067714Z" level=error msg="ContainerStatus for \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\": not found"
Feb 13 15:19:41.124980 kubelet[3344]: E0213 15:19:41.124835 3344 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\": not found" containerID="0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f"
Feb 13 15:19:41.125135 kubelet[3344]: I0213 15:19:41.125105 3344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f"} err="failed to get container status \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e230476c6487d7d95edf8bf3ac3a6cc7b6cfd5b83673af4f8adb02e1d3a822f\": not found"
Feb 13 15:19:41.125200 kubelet[3344]: I0213 15:19:41.125147 3344 scope.go:117] "RemoveContainer" containerID="c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6"
Feb 13 15:19:41.125846 containerd[1932]: time="2025-02-13T15:19:41.125641190Z" level=error msg="ContainerStatus for \"c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6\": not found"
Feb 13 15:19:41.126074 kubelet[3344]: E0213 15:19:41.126030 3344 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6\": not found" containerID="c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6"
Feb 13 15:19:41.126198 kubelet[3344]: I0213 15:19:41.126122 3344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6"} err="failed to get container status \"c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"c764a24f11f1af5475b2e64b1f887aab8447ee59e8f32c09adb7d8dbfcff89d6\": not found"
Feb 13 15:19:41.126198 kubelet[3344]: I0213 15:19:41.126161 3344 scope.go:117] "RemoveContainer" containerID="ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f"
Feb 13 15:19:41.126741 containerd[1932]: time="2025-02-13T15:19:41.126642014Z" level=error msg="ContainerStatus for \"ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f\": not found"
Feb 13 15:19:41.127447 kubelet[3344]: E0213 15:19:41.127127 3344 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f\": not found" containerID="ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f"
Feb 13 15:19:41.127447 kubelet[3344]: I0213 15:19:41.127244 3344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f"} err="failed to get container status \"ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca9ec3838b2885a542728cfebcf7a63f1c22745a25e959a6e0f14c4582a6876f\": not found"
Feb 13 15:19:41.127447 kubelet[3344]: I0213 15:19:41.127321 3344 scope.go:117] "RemoveContainer" containerID="079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7"
Feb 13 15:19:41.127992 containerd[1932]: time="2025-02-13T15:19:41.127856114Z" level=error msg="ContainerStatus for \"079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7\": not found"
Feb 13 15:19:41.128100 kubelet[3344]: E0213 15:19:41.128080 3344 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7\": not found" containerID="079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7"
Feb 13 15:19:41.128543 kubelet[3344]: I0213 15:19:41.128135 3344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7"} err="failed to get container status \"079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7\": rpc error: code = NotFound desc = an error occurred when try to find container \"079bc327aeaed23c0394e09535ce7ec73af56c9cac72bb08e7cd59876d9b5ba7\": not found"
Feb 13 15:19:41.128543 kubelet[3344]: I0213 15:19:41.128158 3344 scope.go:117] "RemoveContainer" containerID="e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2"
Feb 13 15:19:41.129396 containerd[1932]: time="2025-02-13T15:19:41.129090410Z" level=error msg="ContainerStatus for \"e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2\": not found"
Feb 13 15:19:41.129612 kubelet[3344]: E0213 15:19:41.129574 3344 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2\": not found" containerID="e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2"
Feb 13 15:19:41.129704 kubelet[3344]: I0213 15:19:41.129647 3344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2"} err="failed to get container status \"e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2\": rpc error: code = NotFound desc = an error occurred when try to find container \"e667a6e7636792432f1da83fe7260c1a24e822eccfd4954f412bac7327b378a2\": not found"
Feb 13 15:19:41.129704 kubelet[3344]: I0213 15:19:41.129674 3344 scope.go:117] "RemoveContainer" containerID="a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4"
Feb 13 15:19:41.132338 containerd[1932]: time="2025-02-13T15:19:41.131830178Z" level=info msg="RemoveContainer for \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\""
Feb 13 15:19:41.137159 containerd[1932]: time="2025-02-13T15:19:41.137047346Z" level=info msg="RemoveContainer for \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\" returns successfully"
Feb 13 15:19:41.137970 kubelet[3344]: I0213 15:19:41.137817 3344 scope.go:117] "RemoveContainer" containerID="a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4"
Feb 13 15:19:41.138621 containerd[1932]: time="2025-02-13T15:19:41.138559802Z" level=error msg="ContainerStatus for \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\": not found"
Feb 13 15:19:41.139144 kubelet[3344]: E0213 15:19:41.138936 3344 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\": not found" containerID="a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4"
Feb 13 15:19:41.139144 kubelet[3344]: I0213 15:19:41.139040 3344 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4"} err="failed to get container status \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\": rpc error: code = NotFound desc = an error occurred when try to find container \"a52d9d060d5e6b47e3ddfdc69361317b4fe4f76ee7d0f85389c48a9a7cb0fff4\": not found"
Feb 13 15:19:41.348673 kubelet[3344]: I0213 15:19:41.348530 3344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2e846df7-750a-44ae-8992-21888b096c05" path="/var/lib/kubelet/pods/2e846df7-750a-44ae-8992-21888b096c05/volumes"
Feb 13 15:19:41.351923 kubelet[3344]: I0213 15:19:41.351870 3344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b9a7ebda-8d82-4a70-a546-c8d898adb14f" path="/var/lib/kubelet/pods/b9a7ebda-8d82-4a70-a546-c8d898adb14f/volumes"
Feb 13 15:19:41.730650 sshd[4968]: Connection closed by 139.178.68.195 port 56692
Feb 13 15:19:41.732159 sshd-session[4966]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:41.741157 systemd[1]: sshd@25-172.31.21.146:22-139.178.68.195:56692.service: Deactivated successfully.
Feb 13 15:19:41.746787 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:19:41.747462 systemd[1]: session-26.scope: Consumed 2.461s CPU time.
Feb 13 15:19:41.749429 systemd-logind[1921]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:19:41.752447 systemd-logind[1921]: Removed session 26.
Feb 13 15:19:41.772957 systemd[1]: Started sshd@26-172.31.21.146:22-139.178.68.195:50048.service - OpenSSH per-connection server daemon (139.178.68.195:50048).
Feb 13 15:19:41.952642 ntpd[1915]: Deleting interface #11 lxc_health, fe80::fcb8:fbff:fe5f:8d39%8#123, interface stats: received=0, sent=0, dropped=0, active_time=69 secs
Feb 13 15:19:41.953252 ntpd[1915]: 13 Feb 15:19:41 ntpd[1915]: Deleting interface #11 lxc_health, fe80::fcb8:fbff:fe5f:8d39%8#123, interface stats: received=0, sent=0, dropped=0, active_time=69 secs
Feb 13 15:19:41.967605 sshd[5129]: Accepted publickey for core from 139.178.68.195 port 50048 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:41.970622 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:41.981256 systemd-logind[1921]: New session 27 of user core.
Feb 13 15:19:41.988593 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 15:19:43.700621 kubelet[3344]: E0213 15:19:43.700455 3344 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:19:44.209127 sshd[5131]: Connection closed by 139.178.68.195 port 50048
Feb 13 15:19:44.213705 sshd-session[5129]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:44.227026 systemd[1]: sshd@26-172.31.21.146:22-139.178.68.195:50048.service: Deactivated successfully.
Feb 13 15:19:44.233899 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 15:19:44.235898 systemd[1]: session-27.scope: Consumed 2.011s CPU time.
Feb 13 15:19:44.241767 systemd-logind[1921]: Session 27 logged out. Waiting for processes to exit.
Feb 13 15:19:44.284723 systemd[1]: Started sshd@27-172.31.21.146:22-139.178.68.195:50052.service - OpenSSH per-connection server daemon (139.178.68.195:50052).
Feb 13 15:19:44.290601 kubelet[3344]: I0213 15:19:44.286638 3344 topology_manager.go:215] "Topology Admit Handler" podUID="4655e70c-17ba-4a2d-8877-ad49cfe2f718" podNamespace="kube-system" podName="cilium-mmqc8"
Feb 13 15:19:44.290601 kubelet[3344]: E0213 15:19:44.286787 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2e846df7-750a-44ae-8992-21888b096c05" containerName="mount-cgroup"
Feb 13 15:19:44.290601 kubelet[3344]: E0213 15:19:44.286815 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2e846df7-750a-44ae-8992-21888b096c05" containerName="clean-cilium-state"
Feb 13 15:19:44.290601 kubelet[3344]: E0213 15:19:44.286837 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2e846df7-750a-44ae-8992-21888b096c05" containerName="cilium-agent"
Feb 13 15:19:44.290601 kubelet[3344]: E0213 15:19:44.286857 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2e846df7-750a-44ae-8992-21888b096c05" containerName="apply-sysctl-overwrites"
Feb 13 15:19:44.290601 kubelet[3344]: E0213 15:19:44.286878 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2e846df7-750a-44ae-8992-21888b096c05" containerName="mount-bpf-fs"
Feb 13 15:19:44.290601 kubelet[3344]: E0213 15:19:44.286900 3344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b9a7ebda-8d82-4a70-a546-c8d898adb14f" containerName="cilium-operator"
Feb 13 15:19:44.290601 kubelet[3344]: I0213 15:19:44.286945 3344 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e846df7-750a-44ae-8992-21888b096c05" containerName="cilium-agent"
Feb 13 15:19:44.290601 kubelet[3344]: I0213 15:19:44.286964 3344 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9a7ebda-8d82-4a70-a546-c8d898adb14f" containerName="cilium-operator"
Feb 13 15:19:44.292435 systemd-logind[1921]: Removed session 27.
Feb 13 15:19:44.340746 systemd[1]: Created slice kubepods-burstable-pod4655e70c_17ba_4a2d_8877_ad49cfe2f718.slice - libcontainer container kubepods-burstable-pod4655e70c_17ba_4a2d_8877_ad49cfe2f718.slice.
Feb 13 15:19:44.397939 kubelet[3344]: I0213 15:19:44.397885 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4655e70c-17ba-4a2d-8877-ad49cfe2f718-cni-path\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.400328 kubelet[3344]: I0213 15:19:44.399847 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4655e70c-17ba-4a2d-8877-ad49cfe2f718-clustermesh-secrets\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.400328 kubelet[3344]: I0213 15:19:44.399936 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4655e70c-17ba-4a2d-8877-ad49cfe2f718-lib-modules\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.400328 kubelet[3344]: I0213 15:19:44.399982 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4655e70c-17ba-4a2d-8877-ad49cfe2f718-host-proc-sys-net\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.400328 kubelet[3344]: I0213 15:19:44.400027 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4655e70c-17ba-4a2d-8877-ad49cfe2f718-cilium-run\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.400328 kubelet[3344]: I0213 15:19:44.400069 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4655e70c-17ba-4a2d-8877-ad49cfe2f718-etc-cni-netd\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.400328 kubelet[3344]: I0213 15:19:44.400117 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4655e70c-17ba-4a2d-8877-ad49cfe2f718-cilium-ipsec-secrets\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.400745 kubelet[3344]: I0213 15:19:44.400164 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4655e70c-17ba-4a2d-8877-ad49cfe2f718-hubble-tls\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.400745 kubelet[3344]: I0213 15:19:44.400209 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rwrb\" (UniqueName: \"kubernetes.io/projected/4655e70c-17ba-4a2d-8877-ad49cfe2f718-kube-api-access-4rwrb\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.400901 kubelet[3344]: I0213 15:19:44.400870 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4655e70c-17ba-4a2d-8877-ad49cfe2f718-bpf-maps\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.401050 kubelet[3344]: I0213 15:19:44.401029 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4655e70c-17ba-4a2d-8877-ad49cfe2f718-host-proc-sys-kernel\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.402378 kubelet[3344]: I0213 15:19:44.401191 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4655e70c-17ba-4a2d-8877-ad49cfe2f718-hostproc\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.402378 kubelet[3344]: I0213 15:19:44.401247 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4655e70c-17ba-4a2d-8877-ad49cfe2f718-cilium-config-path\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.402378 kubelet[3344]: I0213 15:19:44.401321 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4655e70c-17ba-4a2d-8877-ad49cfe2f718-xtables-lock\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.402378 kubelet[3344]: I0213 15:19:44.401374 3344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4655e70c-17ba-4a2d-8877-ad49cfe2f718-cilium-cgroup\") pod \"cilium-mmqc8\" (UID: \"4655e70c-17ba-4a2d-8877-ad49cfe2f718\") " pod="kube-system/cilium-mmqc8"
Feb 13 15:19:44.550318 sshd[5141]: Accepted publickey for core from 139.178.68.195 port 50052 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:44.571112 sshd-session[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:44.611436 systemd-logind[1921]: New session 28 of user core.
Feb 13 15:19:44.619605 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 15:19:44.651845 containerd[1932]: time="2025-02-13T15:19:44.651772760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mmqc8,Uid:4655e70c-17ba-4a2d-8877-ad49cfe2f718,Namespace:kube-system,Attempt:0,}"
Feb 13 15:19:44.695409 containerd[1932]: time="2025-02-13T15:19:44.693808592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:19:44.695541 containerd[1932]: time="2025-02-13T15:19:44.695460212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:19:44.695625 containerd[1932]: time="2025-02-13T15:19:44.695545076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:19:44.695956 containerd[1932]: time="2025-02-13T15:19:44.695791856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:19:44.730594 systemd[1]: Started cri-containerd-ef69796697a2d4fcf80782df67d79d4ddfcb0d38c12e8c6c497628b0239b8118.scope - libcontainer container ef69796697a2d4fcf80782df67d79d4ddfcb0d38c12e8c6c497628b0239b8118.
Feb 13 15:19:44.749154 sshd[5147]: Connection closed by 139.178.68.195 port 50052
Feb 13 15:19:44.750135 sshd-session[5141]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:44.761709 systemd[1]: sshd@27-172.31.21.146:22-139.178.68.195:50052.service: Deactivated successfully.
Feb 13 15:19:44.766173 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 15:19:44.768091 systemd-logind[1921]: Session 28 logged out. Waiting for processes to exit.
Feb 13 15:19:44.775680 systemd-logind[1921]: Removed session 28.
Feb 13 15:19:44.799136 systemd[1]: Started sshd@28-172.31.21.146:22-139.178.68.195:50062.service - OpenSSH per-connection server daemon (139.178.68.195:50062).
Feb 13 15:19:44.820412 containerd[1932]: time="2025-02-13T15:19:44.820366473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mmqc8,Uid:4655e70c-17ba-4a2d-8877-ad49cfe2f718,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef69796697a2d4fcf80782df67d79d4ddfcb0d38c12e8c6c497628b0239b8118\""
Feb 13 15:19:44.827421 containerd[1932]: time="2025-02-13T15:19:44.827367069Z" level=info msg="CreateContainer within sandbox \"ef69796697a2d4fcf80782df67d79d4ddfcb0d38c12e8c6c497628b0239b8118\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:19:44.854357 containerd[1932]: time="2025-02-13T15:19:44.854249493Z" level=info msg="CreateContainer within sandbox \"ef69796697a2d4fcf80782df67d79d4ddfcb0d38c12e8c6c497628b0239b8118\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"305169418ecceeb86361a766fab9cdce9a996722dc68f651e010bee9e6e95fad\""
Feb 13 15:19:44.855535 containerd[1932]: time="2025-02-13T15:19:44.855471153Z" level=info msg="StartContainer for \"305169418ecceeb86361a766fab9cdce9a996722dc68f651e010bee9e6e95fad\""
Feb 13 15:19:44.907679 systemd[1]: Started cri-containerd-305169418ecceeb86361a766fab9cdce9a996722dc68f651e010bee9e6e95fad.scope - libcontainer container 305169418ecceeb86361a766fab9cdce9a996722dc68f651e010bee9e6e95fad.
Feb 13 15:19:44.976812 containerd[1932]: time="2025-02-13T15:19:44.976734081Z" level=info msg="StartContainer for \"305169418ecceeb86361a766fab9cdce9a996722dc68f651e010bee9e6e95fad\" returns successfully"
Feb 13 15:19:44.990264 systemd[1]: cri-containerd-305169418ecceeb86361a766fab9cdce9a996722dc68f651e010bee9e6e95fad.scope: Deactivated successfully.
Feb 13 15:19:45.026163 sshd[5187]: Accepted publickey for core from 139.178.68.195 port 50062 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:45.031180 sshd-session[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:45.049054 systemd-logind[1921]: New session 29 of user core.
Feb 13 15:19:45.053911 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 15:19:45.086541 containerd[1932]: time="2025-02-13T15:19:45.086381454Z" level=info msg="shim disconnected" id=305169418ecceeb86361a766fab9cdce9a996722dc68f651e010bee9e6e95fad namespace=k8s.io
Feb 13 15:19:45.087190 containerd[1932]: time="2025-02-13T15:19:45.086855682Z" level=warning msg="cleaning up after shim disconnected" id=305169418ecceeb86361a766fab9cdce9a996722dc68f651e010bee9e6e95fad namespace=k8s.io
Feb 13 15:19:45.087190 containerd[1932]: time="2025-02-13T15:19:45.086888514Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:45.355004 kubelet[3344]: I0213 15:19:45.354612 3344 setters.go:568] "Node became not ready" node="ip-172-31-21-146" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:19:45Z","lastTransitionTime":"2025-02-13T15:19:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:19:46.089708 containerd[1932]: time="2025-02-13T15:19:46.089490643Z" level=info msg="CreateContainer within sandbox \"ef69796697a2d4fcf80782df67d79d4ddfcb0d38c12e8c6c497628b0239b8118\" for
container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:19:46.113607 containerd[1932]: time="2025-02-13T15:19:46.113523391Z" level=info msg="CreateContainer within sandbox \"ef69796697a2d4fcf80782df67d79d4ddfcb0d38c12e8c6c497628b0239b8118\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e6eff4a5a61e6f31731ec3f3b2ada2084d2f373154cbee74ac99d855abf53bc6\"" Feb 13 15:19:46.115301 containerd[1932]: time="2025-02-13T15:19:46.114576379Z" level=info msg="StartContainer for \"e6eff4a5a61e6f31731ec3f3b2ada2084d2f373154cbee74ac99d855abf53bc6\"" Feb 13 15:19:46.199777 systemd[1]: Started cri-containerd-e6eff4a5a61e6f31731ec3f3b2ada2084d2f373154cbee74ac99d855abf53bc6.scope - libcontainer container e6eff4a5a61e6f31731ec3f3b2ada2084d2f373154cbee74ac99d855abf53bc6. Feb 13 15:19:46.259338 containerd[1932]: time="2025-02-13T15:19:46.257588876Z" level=info msg="StartContainer for \"e6eff4a5a61e6f31731ec3f3b2ada2084d2f373154cbee74ac99d855abf53bc6\" returns successfully" Feb 13 15:19:46.275312 systemd[1]: cri-containerd-e6eff4a5a61e6f31731ec3f3b2ada2084d2f373154cbee74ac99d855abf53bc6.scope: Deactivated successfully. Feb 13 15:19:46.321586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6eff4a5a61e6f31731ec3f3b2ada2084d2f373154cbee74ac99d855abf53bc6-rootfs.mount: Deactivated successfully. 
Feb 13 15:19:46.357784 containerd[1932]: time="2025-02-13T15:19:46.356990264Z" level=info msg="shim disconnected" id=e6eff4a5a61e6f31731ec3f3b2ada2084d2f373154cbee74ac99d855abf53bc6 namespace=k8s.io Feb 13 15:19:46.357784 containerd[1932]: time="2025-02-13T15:19:46.357109208Z" level=warning msg="cleaning up after shim disconnected" id=e6eff4a5a61e6f31731ec3f3b2ada2084d2f373154cbee74ac99d855abf53bc6 namespace=k8s.io Feb 13 15:19:46.357784 containerd[1932]: time="2025-02-13T15:19:46.357156788Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:19:47.097331 containerd[1932]: time="2025-02-13T15:19:47.097066964Z" level=info msg="CreateContainer within sandbox \"ef69796697a2d4fcf80782df67d79d4ddfcb0d38c12e8c6c497628b0239b8118\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:19:47.140956 containerd[1932]: time="2025-02-13T15:19:47.140866436Z" level=info msg="CreateContainer within sandbox \"ef69796697a2d4fcf80782df67d79d4ddfcb0d38c12e8c6c497628b0239b8118\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"71d4058819909ee4886409714ce58af5b98000a86b23fbb493b52510c5fd3e43\"" Feb 13 15:19:47.146312 containerd[1932]: time="2025-02-13T15:19:47.144865232Z" level=info msg="StartContainer for \"71d4058819909ee4886409714ce58af5b98000a86b23fbb493b52510c5fd3e43\"" Feb 13 15:19:47.270660 systemd[1]: Started cri-containerd-71d4058819909ee4886409714ce58af5b98000a86b23fbb493b52510c5fd3e43.scope - libcontainer container 71d4058819909ee4886409714ce58af5b98000a86b23fbb493b52510c5fd3e43. 
Feb 13 15:19:47.345593 kubelet[3344]: E0213 15:19:47.345535 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-vkhh5" podUID="c3451082-69cc-4c2d-aba2-753190de3802" Feb 13 15:19:47.423699 containerd[1932]: time="2025-02-13T15:19:47.421644670Z" level=info msg="StartContainer for \"71d4058819909ee4886409714ce58af5b98000a86b23fbb493b52510c5fd3e43\" returns successfully" Feb 13 15:19:47.432654 systemd[1]: cri-containerd-71d4058819909ee4886409714ce58af5b98000a86b23fbb493b52510c5fd3e43.scope: Deactivated successfully. Feb 13 15:19:47.496713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71d4058819909ee4886409714ce58af5b98000a86b23fbb493b52510c5fd3e43-rootfs.mount: Deactivated successfully. Feb 13 15:19:47.510172 containerd[1932]: time="2025-02-13T15:19:47.510068662Z" level=info msg="shim disconnected" id=71d4058819909ee4886409714ce58af5b98000a86b23fbb493b52510c5fd3e43 namespace=k8s.io Feb 13 15:19:47.510172 containerd[1932]: time="2025-02-13T15:19:47.510167098Z" level=warning msg="cleaning up after shim disconnected" id=71d4058819909ee4886409714ce58af5b98000a86b23fbb493b52510c5fd3e43 namespace=k8s.io Feb 13 15:19:47.510172 containerd[1932]: time="2025-02-13T15:19:47.510191578Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:19:48.111172 containerd[1932]: time="2025-02-13T15:19:48.111038025Z" level=info msg="CreateContainer within sandbox \"ef69796697a2d4fcf80782df67d79d4ddfcb0d38c12e8c6c497628b0239b8118\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:19:48.154524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452508626.mount: Deactivated successfully. 
Feb 13 15:19:48.198392 containerd[1932]: time="2025-02-13T15:19:48.198327969Z" level=info msg="CreateContainer within sandbox \"ef69796697a2d4fcf80782df67d79d4ddfcb0d38c12e8c6c497628b0239b8118\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"af3b7bd77ce337b2871d69447a680656d522b00e79360485f080655fdd1bb12e\"" Feb 13 15:19:48.199460 containerd[1932]: time="2025-02-13T15:19:48.199098777Z" level=info msg="StartContainer for \"af3b7bd77ce337b2871d69447a680656d522b00e79360485f080655fdd1bb12e\"" Feb 13 15:19:48.263721 systemd[1]: Started cri-containerd-af3b7bd77ce337b2871d69447a680656d522b00e79360485f080655fdd1bb12e.scope - libcontainer container af3b7bd77ce337b2871d69447a680656d522b00e79360485f080655fdd1bb12e. Feb 13 15:19:48.311859 systemd[1]: cri-containerd-af3b7bd77ce337b2871d69447a680656d522b00e79360485f080655fdd1bb12e.scope: Deactivated successfully. Feb 13 15:19:48.317692 containerd[1932]: time="2025-02-13T15:19:48.317523106Z" level=info msg="StartContainer for \"af3b7bd77ce337b2871d69447a680656d522b00e79360485f080655fdd1bb12e\" returns successfully" Feb 13 15:19:48.358839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af3b7bd77ce337b2871d69447a680656d522b00e79360485f080655fdd1bb12e-rootfs.mount: Deactivated successfully. 
Feb 13 15:19:48.374266 containerd[1932]: time="2025-02-13T15:19:48.373125514Z" level=info msg="shim disconnected" id=af3b7bd77ce337b2871d69447a680656d522b00e79360485f080655fdd1bb12e namespace=k8s.io Feb 13 15:19:48.374266 containerd[1932]: time="2025-02-13T15:19:48.373368610Z" level=warning msg="cleaning up after shim disconnected" id=af3b7bd77ce337b2871d69447a680656d522b00e79360485f080655fdd1bb12e namespace=k8s.io Feb 13 15:19:48.374266 containerd[1932]: time="2025-02-13T15:19:48.373390858Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:19:48.702494 kubelet[3344]: E0213 15:19:48.701964 3344 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:19:49.131938 containerd[1932]: time="2025-02-13T15:19:49.131779450Z" level=info msg="CreateContainer within sandbox \"ef69796697a2d4fcf80782df67d79d4ddfcb0d38c12e8c6c497628b0239b8118\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:19:49.166366 containerd[1932]: time="2025-02-13T15:19:49.163683478Z" level=info msg="CreateContainer within sandbox \"ef69796697a2d4fcf80782df67d79d4ddfcb0d38c12e8c6c497628b0239b8118\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0974eb5a9e0151cb9583fa27467bbb653193fcce845f73ba3b921e9e53c25093\"" Feb 13 15:19:49.166366 containerd[1932]: time="2025-02-13T15:19:49.165064510Z" level=info msg="StartContainer for \"0974eb5a9e0151cb9583fa27467bbb653193fcce845f73ba3b921e9e53c25093\"" Feb 13 15:19:49.233686 systemd[1]: Started cri-containerd-0974eb5a9e0151cb9583fa27467bbb653193fcce845f73ba3b921e9e53c25093.scope - libcontainer container 0974eb5a9e0151cb9583fa27467bbb653193fcce845f73ba3b921e9e53c25093. 
Feb 13 15:19:49.299812 containerd[1932]: time="2025-02-13T15:19:49.299731631Z" level=info msg="StartContainer for \"0974eb5a9e0151cb9583fa27467bbb653193fcce845f73ba3b921e9e53c25093\" returns successfully" Feb 13 15:19:49.349860 kubelet[3344]: E0213 15:19:49.348870 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-vkhh5" podUID="c3451082-69cc-4c2d-aba2-753190de3802" Feb 13 15:19:50.210461 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 15:19:51.344457 kubelet[3344]: E0213 15:19:51.343994 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-vkhh5" podUID="c3451082-69cc-4c2d-aba2-753190de3802" Feb 13 15:19:53.345327 kubelet[3344]: E0213 15:19:53.344581 3344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-vkhh5" podUID="c3451082-69cc-4c2d-aba2-753190de3802" Feb 13 15:19:53.411386 containerd[1932]: time="2025-02-13T15:19:53.410020395Z" level=info msg="StopPodSandbox for \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\"" Feb 13 15:19:53.411386 containerd[1932]: time="2025-02-13T15:19:53.411126267Z" level=info msg="TearDown network for sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" successfully" Feb 13 15:19:53.411386 containerd[1932]: time="2025-02-13T15:19:53.411313347Z" level=info msg="StopPodSandbox for 
\"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" returns successfully" Feb 13 15:19:53.416467 containerd[1932]: time="2025-02-13T15:19:53.415930395Z" level=info msg="RemovePodSandbox for \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\"" Feb 13 15:19:53.416467 containerd[1932]: time="2025-02-13T15:19:53.416021271Z" level=info msg="Forcibly stopping sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\"" Feb 13 15:19:53.416467 containerd[1932]: time="2025-02-13T15:19:53.416189511Z" level=info msg="TearDown network for sandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" successfully" Feb 13 15:19:53.424135 containerd[1932]: time="2025-02-13T15:19:53.423730959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:19:53.424135 containerd[1932]: time="2025-02-13T15:19:53.423840615Z" level=info msg="RemovePodSandbox \"a09223afbe83af02dadb6821b2712cc7ab26420854a5823f187125d42f41a50d\" returns successfully" Feb 13 15:19:53.425546 containerd[1932]: time="2025-02-13T15:19:53.425092827Z" level=info msg="StopPodSandbox for \"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\"" Feb 13 15:19:53.425546 containerd[1932]: time="2025-02-13T15:19:53.425248875Z" level=info msg="TearDown network for sandbox \"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\" successfully" Feb 13 15:19:53.425546 containerd[1932]: time="2025-02-13T15:19:53.425301963Z" level=info msg="StopPodSandbox for \"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\" returns successfully" Feb 13 15:19:53.426439 containerd[1932]: time="2025-02-13T15:19:53.426035091Z" level=info msg="RemovePodSandbox for \"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\"" Feb 13 15:19:53.426439 containerd[1932]: time="2025-02-13T15:19:53.426082827Z" level=info msg="Forcibly stopping sandbox \"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\"" Feb 13 15:19:53.426439 containerd[1932]: time="2025-02-13T15:19:53.426263127Z" level=info msg="TearDown network for sandbox \"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\" successfully" Feb 13 15:19:53.433944 containerd[1932]: time="2025-02-13T15:19:53.433153491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:19:53.433944 containerd[1932]: time="2025-02-13T15:19:53.433452867Z" level=info msg="RemovePodSandbox \"12a0f9100c24ec2a01f995d3c84635f63f35811af79a25dd1a63cad91b454c2f\" returns successfully" Feb 13 15:19:54.767982 (udev-worker)[5986]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:19:54.770938 (udev-worker)[5987]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:19:54.783916 systemd-networkd[1775]: lxc_health: Link UP Feb 13 15:19:54.791770 systemd-networkd[1775]: lxc_health: Gained carrier Feb 13 15:19:56.385454 systemd[1]: run-containerd-runc-k8s.io-0974eb5a9e0151cb9583fa27467bbb653193fcce845f73ba3b921e9e53c25093-runc.7afSto.mount: Deactivated successfully. Feb 13 15:19:56.454657 systemd-networkd[1775]: lxc_health: Gained IPv6LL Feb 13 15:19:56.693775 kubelet[3344]: I0213 15:19:56.693553 3344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mmqc8" podStartSLOduration=12.693475844 podStartE2EDuration="12.693475844s" podCreationTimestamp="2025-02-13 15:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:19:50.189065999 +0000 UTC m=+117.125516922" watchObservedRunningTime="2025-02-13 15:19:56.693475844 +0000 UTC m=+123.629926731" Feb 13 15:19:58.745252 systemd[1]: run-containerd-runc-k8s.io-0974eb5a9e0151cb9583fa27467bbb653193fcce845f73ba3b921e9e53c25093-runc.X8DUPs.mount: Deactivated successfully. 
Feb 13 15:19:58.952653 ntpd[1915]: Listen normally on 14 lxc_health [fe80::10ff:e1ff:fe1d:eb61%14]:123 Feb 13 15:19:58.953331 ntpd[1915]: 13 Feb 15:19:58 ntpd[1915]: Listen normally on 14 lxc_health [fe80::10ff:e1ff:fe1d:eb61%14]:123 Feb 13 15:20:01.167956 sshd[5243]: Connection closed by 139.178.68.195 port 50062 Feb 13 15:20:01.169438 sshd-session[5187]: pam_unix(sshd:session): session closed for user core Feb 13 15:20:01.178693 systemd[1]: sshd@28-172.31.21.146:22-139.178.68.195:50062.service: Deactivated successfully. Feb 13 15:20:01.188491 systemd[1]: session-29.scope: Deactivated successfully. Feb 13 15:20:01.196392 systemd-logind[1921]: Session 29 logged out. Waiting for processes to exit. Feb 13 15:20:01.200973 systemd-logind[1921]: Removed session 29. Feb 13 15:20:14.349844 systemd[1]: cri-containerd-0ecebc4d54c7dd1115ab2cf698ad12dfd4f933dee94c24692d9cae864866ea1d.scope: Deactivated successfully. Feb 13 15:20:14.351188 systemd[1]: cri-containerd-0ecebc4d54c7dd1115ab2cf698ad12dfd4f933dee94c24692d9cae864866ea1d.scope: Consumed 6.400s CPU time, 22.2M memory peak, 0B memory swap peak. Feb 13 15:20:14.398393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ecebc4d54c7dd1115ab2cf698ad12dfd4f933dee94c24692d9cae864866ea1d-rootfs.mount: Deactivated successfully. 
Feb 13 15:20:14.420967 containerd[1932]: time="2025-02-13T15:20:14.420872196Z" level=info msg="shim disconnected" id=0ecebc4d54c7dd1115ab2cf698ad12dfd4f933dee94c24692d9cae864866ea1d namespace=k8s.io Feb 13 15:20:14.420967 containerd[1932]: time="2025-02-13T15:20:14.420956472Z" level=warning msg="cleaning up after shim disconnected" id=0ecebc4d54c7dd1115ab2cf698ad12dfd4f933dee94c24692d9cae864866ea1d namespace=k8s.io Feb 13 15:20:14.421854 containerd[1932]: time="2025-02-13T15:20:14.420978132Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:20:15.237916 kubelet[3344]: I0213 15:20:15.237626 3344 scope.go:117] "RemoveContainer" containerID="0ecebc4d54c7dd1115ab2cf698ad12dfd4f933dee94c24692d9cae864866ea1d" Feb 13 15:20:15.243886 containerd[1932]: time="2025-02-13T15:20:15.243803004Z" level=info msg="CreateContainer within sandbox \"57dcf4ac03bfe65bd3b230f685d2326132b801abfb1db029a38df7b753411f0c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 15:20:15.264695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777675496.mount: Deactivated successfully. Feb 13 15:20:15.274352 containerd[1932]: time="2025-02-13T15:20:15.274098768Z" level=info msg="CreateContainer within sandbox \"57dcf4ac03bfe65bd3b230f685d2326132b801abfb1db029a38df7b753411f0c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1e23033a255bb0f1aec79ea51553517e2c4197f0babe2ef3f16cbb09c8b5b28c\"" Feb 13 15:20:15.275627 containerd[1932]: time="2025-02-13T15:20:15.275541456Z" level=info msg="StartContainer for \"1e23033a255bb0f1aec79ea51553517e2c4197f0babe2ef3f16cbb09c8b5b28c\"" Feb 13 15:20:15.337596 systemd[1]: Started cri-containerd-1e23033a255bb0f1aec79ea51553517e2c4197f0babe2ef3f16cbb09c8b5b28c.scope - libcontainer container 1e23033a255bb0f1aec79ea51553517e2c4197f0babe2ef3f16cbb09c8b5b28c. 
Feb 13 15:20:15.463965 containerd[1932]: time="2025-02-13T15:20:15.463879993Z" level=info msg="StartContainer for \"1e23033a255bb0f1aec79ea51553517e2c4197f0babe2ef3f16cbb09c8b5b28c\" returns successfully" Feb 13 15:20:17.340732 kubelet[3344]: E0213 15:20:17.340648 3344 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-146?timeout=10s\": context deadline exceeded" Feb 13 15:20:20.605640 systemd[1]: cri-containerd-c18c6c784ac3f3f282b6738049e631ef7710856b76ce2144368fb73f0e4ef19d.scope: Deactivated successfully. Feb 13 15:20:20.607095 systemd[1]: cri-containerd-c18c6c784ac3f3f282b6738049e631ef7710856b76ce2144368fb73f0e4ef19d.scope: Consumed 5.573s CPU time, 15.1M memory peak, 0B memory swap peak. Feb 13 15:20:20.652169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c18c6c784ac3f3f282b6738049e631ef7710856b76ce2144368fb73f0e4ef19d-rootfs.mount: Deactivated successfully. Feb 13 15:20:20.671229 containerd[1932]: time="2025-02-13T15:20:20.671098399Z" level=info msg="shim disconnected" id=c18c6c784ac3f3f282b6738049e631ef7710856b76ce2144368fb73f0e4ef19d namespace=k8s.io Feb 13 15:20:20.671936 containerd[1932]: time="2025-02-13T15:20:20.671243803Z" level=warning msg="cleaning up after shim disconnected" id=c18c6c784ac3f3f282b6738049e631ef7710856b76ce2144368fb73f0e4ef19d namespace=k8s.io Feb 13 15:20:20.671936 containerd[1932]: time="2025-02-13T15:20:20.671265859Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:20:21.264960 kubelet[3344]: I0213 15:20:21.264879 3344 scope.go:117] "RemoveContainer" containerID="c18c6c784ac3f3f282b6738049e631ef7710856b76ce2144368fb73f0e4ef19d" Feb 13 15:20:21.269974 containerd[1932]: time="2025-02-13T15:20:21.269901702Z" level=info msg="CreateContainer within sandbox \"f63b8dcb7b81233e7598690b8f2e6e0df51a3a7d18df49b308b32a7a6aa0be9e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 
15:20:21.299853 containerd[1932]: time="2025-02-13T15:20:21.299767038Z" level=info msg="CreateContainer within sandbox \"f63b8dcb7b81233e7598690b8f2e6e0df51a3a7d18df49b308b32a7a6aa0be9e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"75a179d7102c3e4ede73cbaf65ecd72dee8528dba72eba075e2724f2b2430e30\"" Feb 13 15:20:21.301566 containerd[1932]: time="2025-02-13T15:20:21.301348614Z" level=info msg="StartContainer for \"75a179d7102c3e4ede73cbaf65ecd72dee8528dba72eba075e2724f2b2430e30\"" Feb 13 15:20:21.366749 systemd[1]: Started cri-containerd-75a179d7102c3e4ede73cbaf65ecd72dee8528dba72eba075e2724f2b2430e30.scope - libcontainer container 75a179d7102c3e4ede73cbaf65ecd72dee8528dba72eba075e2724f2b2430e30. Feb 13 15:20:21.436195 containerd[1932]: time="2025-02-13T15:20:21.436003662Z" level=info msg="StartContainer for \"75a179d7102c3e4ede73cbaf65ecd72dee8528dba72eba075e2724f2b2430e30\" returns successfully" Feb 13 15:20:27.340998 kubelet[3344]: E0213 15:20:27.340943 3344 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-146?timeout=10s\": context deadline exceeded"