Jan 13 20:07:04.204431 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 13 20:07:04.204475 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:07:04.204498 kernel: KASLR disabled due to lack of seed
Jan 13 20:07:04.204515 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:07:04.204530 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98
Jan 13 20:07:04.204546 kernel: secureboot: Secure boot disabled
Jan 13 20:07:04.204563 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:07:04.204579 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 13 20:07:04.204595 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 13 20:07:04.204611 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 20:07:04.204630 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 13 20:07:04.208711 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 20:07:04.208753 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 13 20:07:04.208771 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 13 20:07:04.208791 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 13 20:07:04.208817 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 20:07:04.208835 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 13 20:07:04.208852 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 13 20:07:04.208869 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 13 20:07:04.208886 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 13 20:07:04.208902 kernel: printk: bootconsole [uart0] enabled
Jan 13 20:07:04.208918 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:07:04.208935 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 20:07:04.208952 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 13 20:07:04.208968 kernel: Zone ranges:
Jan 13 20:07:04.208985 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 20:07:04.209005 kernel: DMA32 empty
Jan 13 20:07:04.209021 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 13 20:07:04.209038 kernel: Movable zone start for each node
Jan 13 20:07:04.209054 kernel: Early memory node ranges
Jan 13 20:07:04.209070 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 13 20:07:04.209086 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 13 20:07:04.209103 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 13 20:07:04.209119 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 13 20:07:04.209135 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 13 20:07:04.209151 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 13 20:07:04.209167 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 13 20:07:04.209183 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 13 20:07:04.209203 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 20:07:04.209220 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 13 20:07:04.209243 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:07:04.209260 kernel: psci: PSCIv1.0 detected in firmware.
Jan 13 20:07:04.209278 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:07:04.209300 kernel: psci: Trusted OS migration not required
Jan 13 20:07:04.209319 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:07:04.209339 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:07:04.209358 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:07:04.209378 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 20:07:04.209396 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:07:04.209415 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:07:04.209453 kernel: CPU features: detected: Spectre-v2
Jan 13 20:07:04.209475 kernel: CPU features: detected: Spectre-v3a
Jan 13 20:07:04.209493 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:07:04.209511 kernel: CPU features: detected: ARM erratum 1742098
Jan 13 20:07:04.209529 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 13 20:07:04.209553 kernel: alternatives: applying boot alternatives
Jan 13 20:07:04.209573 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:07:04.209593 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:07:04.209611 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:07:04.209630 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:07:04.209697 kernel: Fallback order for Node 0: 0
Jan 13 20:07:04.209719 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 13 20:07:04.209737 kernel: Policy zone: Normal
Jan 13 20:07:04.209754 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:07:04.209772 kernel: software IO TLB: area num 2.
Jan 13 20:07:04.209798 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 13 20:07:04.209817 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved)
Jan 13 20:07:04.209835 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:07:04.209854 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:07:04.209872 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:07:04.209890 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:07:04.209908 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:07:04.209926 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:07:04.209943 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:07:04.209962 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:07:04.209980 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:07:04.210002 kernel: GICv3: 96 SPIs implemented
Jan 13 20:07:04.210020 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:07:04.210038 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:07:04.210056 kernel: GICv3: GICv3 features: 16 PPIs
Jan 13 20:07:04.210073 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 13 20:07:04.210091 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 13 20:07:04.210108 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:07:04.210126 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:07:04.210143 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 13 20:07:04.210160 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 13 20:07:04.210177 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 13 20:07:04.210194 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:07:04.210216 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 13 20:07:04.210233 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 13 20:07:04.210250 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 13 20:07:04.210268 kernel: Console: colour dummy device 80x25
Jan 13 20:07:04.210286 kernel: printk: console [tty1] enabled
Jan 13 20:07:04.210304 kernel: ACPI: Core revision 20230628
Jan 13 20:07:04.210322 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 13 20:07:04.210339 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:07:04.210357 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:07:04.210378 kernel: landlock: Up and running.
Jan 13 20:07:04.210396 kernel: SELinux: Initializing.
Jan 13 20:07:04.210414 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:07:04.210431 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:07:04.210449 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:07:04.210467 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:07:04.210484 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:07:04.210502 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:07:04.210520 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 13 20:07:04.210541 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 13 20:07:04.210559 kernel: Remapping and enabling EFI services.
Jan 13 20:07:04.210577 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:07:04.210594 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:07:04.210612 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 13 20:07:04.210630 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 13 20:07:04.212724 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 13 20:07:04.212767 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:07:04.212786 kernel: SMP: Total of 2 processors activated.
Jan 13 20:07:04.212815 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:07:04.212833 kernel: CPU features: detected: 32-bit EL1 Support
Jan 13 20:07:04.212851 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:07:04.212881 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:07:04.212903 kernel: alternatives: applying system-wide alternatives
Jan 13 20:07:04.212921 kernel: devtmpfs: initialized
Jan 13 20:07:04.212940 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:07:04.212959 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:07:04.212977 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:07:04.212995 kernel: SMBIOS 3.0.0 present.
Jan 13 20:07:04.213018 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 13 20:07:04.213036 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:07:04.213054 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:07:04.213073 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:07:04.213092 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:07:04.213110 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:07:04.213129 kernel: audit: type=2000 audit(0.229:1): state=initialized audit_enabled=0 res=1
Jan 13 20:07:04.213151 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:07:04.213169 kernel: cpuidle: using governor menu
Jan 13 20:07:04.213188 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:07:04.213206 kernel: ASID allocator initialised with 65536 entries
Jan 13 20:07:04.213225 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:07:04.213243 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:07:04.213261 kernel: Modules: 17440 pages in range for non-PLT usage
Jan 13 20:07:04.213280 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:07:04.213298 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:07:04.213320 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:07:04.213339 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:07:04.213357 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:07:04.213376 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:07:04.213394 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:07:04.213412 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:07:04.213446 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:07:04.213470 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:07:04.213489 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:07:04.213513 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:07:04.213533 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:07:04.213551 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:07:04.213569 kernel: ACPI: Interpreter enabled
Jan 13 20:07:04.213588 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:07:04.213606 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:07:04.213624 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 13 20:07:04.213973 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:07:04.214192 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:07:04.214397 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:07:04.214600 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 13 20:07:04.218957 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 13 20:07:04.219003 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 13 20:07:04.219023 kernel: acpiphp: Slot [1] registered
Jan 13 20:07:04.219043 kernel: acpiphp: Slot [2] registered
Jan 13 20:07:04.219062 kernel: acpiphp: Slot [3] registered
Jan 13 20:07:04.219091 kernel: acpiphp: Slot [4] registered
Jan 13 20:07:04.219110 kernel: acpiphp: Slot [5] registered
Jan 13 20:07:04.219129 kernel: acpiphp: Slot [6] registered
Jan 13 20:07:04.219148 kernel: acpiphp: Slot [7] registered
Jan 13 20:07:04.219166 kernel: acpiphp: Slot [8] registered
Jan 13 20:07:04.219184 kernel: acpiphp: Slot [9] registered
Jan 13 20:07:04.219203 kernel: acpiphp: Slot [10] registered
Jan 13 20:07:04.219221 kernel: acpiphp: Slot [11] registered
Jan 13 20:07:04.219239 kernel: acpiphp: Slot [12] registered
Jan 13 20:07:04.219257 kernel: acpiphp: Slot [13] registered
Jan 13 20:07:04.219280 kernel: acpiphp: Slot [14] registered
Jan 13 20:07:04.219299 kernel: acpiphp: Slot [15] registered
Jan 13 20:07:04.219317 kernel: acpiphp: Slot [16] registered
Jan 13 20:07:04.219335 kernel: acpiphp: Slot [17] registered
Jan 13 20:07:04.219353 kernel: acpiphp: Slot [18] registered
Jan 13 20:07:04.219371 kernel: acpiphp: Slot [19] registered
Jan 13 20:07:04.219390 kernel: acpiphp: Slot [20] registered
Jan 13 20:07:04.219408 kernel: acpiphp: Slot [21] registered
Jan 13 20:07:04.219427 kernel: acpiphp: Slot [22] registered
Jan 13 20:07:04.219449 kernel: acpiphp: Slot [23] registered
Jan 13 20:07:04.219468 kernel: acpiphp: Slot [24] registered
Jan 13 20:07:04.219486 kernel: acpiphp: Slot [25] registered
Jan 13 20:07:04.219505 kernel: acpiphp: Slot [26] registered
Jan 13 20:07:04.219523 kernel: acpiphp: Slot [27] registered
Jan 13 20:07:04.219541 kernel: acpiphp: Slot [28] registered
Jan 13 20:07:04.219560 kernel: acpiphp: Slot [29] registered
Jan 13 20:07:04.219578 kernel: acpiphp: Slot [30] registered
Jan 13 20:07:04.219597 kernel: acpiphp: Slot [31] registered
Jan 13 20:07:04.219616 kernel: PCI host bridge to bus 0000:00
Jan 13 20:07:04.219911 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 13 20:07:04.220105 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:07:04.220290 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 13 20:07:04.220995 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 13 20:07:04.221268 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 13 20:07:04.221524 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 13 20:07:04.221778 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 13 20:07:04.222002 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 20:07:04.222211 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 13 20:07:04.222420 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 20:07:04.228904 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 20:07:04.229167 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 13 20:07:04.229413 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 13 20:07:04.229795 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 13 20:07:04.230034 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 20:07:04.230244 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 13 20:07:04.230449 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 13 20:07:04.230693 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 13 20:07:04.230911 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 13 20:07:04.231123 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 13 20:07:04.231324 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 13 20:07:04.231504 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:07:04.231713 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 13 20:07:04.231740 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:07:04.231760 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:07:04.231779 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:07:04.231798 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:07:04.231817 kernel: iommu: Default domain type: Translated
Jan 13 20:07:04.231841 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:07:04.231860 kernel: efivars: Registered efivars operations
Jan 13 20:07:04.231878 kernel: vgaarb: loaded
Jan 13 20:07:04.231897 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:07:04.231915 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:07:04.231933 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:07:04.231952 kernel: pnp: PnP ACPI init
Jan 13 20:07:04.232173 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 13 20:07:04.232206 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:07:04.232225 kernel: NET: Registered PF_INET protocol family
Jan 13 20:07:04.232244 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:07:04.232262 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:07:04.232281 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:07:04.232300 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:07:04.232318 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:07:04.232337 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:07:04.232356 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:07:04.232379 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:07:04.232398 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:07:04.232417 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:07:04.232435 kernel: kvm [1]: HYP mode not available
Jan 13 20:07:04.232454 kernel: Initialise system trusted keyrings
Jan 13 20:07:04.232473 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:07:04.232491 kernel: Key type asymmetric registered
Jan 13 20:07:04.232511 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:07:04.232530 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:07:04.232554 kernel: io scheduler mq-deadline registered
Jan 13 20:07:04.232572 kernel: io scheduler kyber registered
Jan 13 20:07:04.232591 kernel: io scheduler bfq registered
Jan 13 20:07:04.235934 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 13 20:07:04.235979 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:07:04.235998 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:07:04.236017 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 13 20:07:04.236036 kernel: ACPI: button: Sleep Button [SLPB]
Jan 13 20:07:04.236064 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:07:04.236084 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 20:07:04.236295 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 13 20:07:04.236322 kernel: printk: console [ttyS0] disabled
Jan 13 20:07:04.236342 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 13 20:07:04.236361 kernel: printk: console [ttyS0] enabled
Jan 13 20:07:04.236380 kernel: printk: bootconsole [uart0] disabled
Jan 13 20:07:04.236399 kernel: thunder_xcv, ver 1.0
Jan 13 20:07:04.236418 kernel: thunder_bgx, ver 1.0
Jan 13 20:07:04.236441 kernel: nicpf, ver 1.0
Jan 13 20:07:04.236461 kernel: nicvf, ver 1.0
Jan 13 20:07:04.236778 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:07:04.236988 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:07:03 UTC (1736798823)
Jan 13 20:07:04.237014 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:07:04.237034 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 13 20:07:04.237055 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:07:04.237073 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:07:04.237099 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:07:04.237118 kernel: Segment Routing with IPv6
Jan 13 20:07:04.237136 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:07:04.237156 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:07:04.237176 kernel: Key type dns_resolver registered
Jan 13 20:07:04.237195 kernel: registered taskstats version 1
Jan 13 20:07:04.237213 kernel: Loading compiled-in X.509 certificates
Jan 13 20:07:04.237233 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:07:04.237252 kernel: Key type .fscrypt registered
Jan 13 20:07:04.237274 kernel: Key type fscrypt-provisioning registered
Jan 13 20:07:04.237293 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:07:04.237311 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:07:04.237330 kernel: ima: No architecture policies found
Jan 13 20:07:04.237348 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:07:04.237366 kernel: clk: Disabling unused clocks
Jan 13 20:07:04.237385 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:07:04.237403 kernel: Run /init as init process
Jan 13 20:07:04.237421 kernel: with arguments:
Jan 13 20:07:04.237474 kernel: /init
Jan 13 20:07:04.237501 kernel: with environment:
Jan 13 20:07:04.237519 kernel: HOME=/
Jan 13 20:07:04.237538 kernel: TERM=linux
Jan 13 20:07:04.237556 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:07:04.237578 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:07:04.237602 systemd[1]: Detected virtualization amazon.
Jan 13 20:07:04.237623 systemd[1]: Detected architecture arm64.
Jan 13 20:07:04.237664 systemd[1]: Running in initrd.
Jan 13 20:07:04.237690 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:07:04.237710 systemd[1]: Hostname set to <localhost>.
Jan 13 20:07:04.237731 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:07:04.237766 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:07:04.237792 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:07:04.237813 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:07:04.237835 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:07:04.237863 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:07:04.237884 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:07:04.237905 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:07:04.237929 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:07:04.237950 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:07:04.237971 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:07:04.237991 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:07:04.238015 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:07:04.238036 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:07:04.238056 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:07:04.238076 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:07:04.238096 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:07:04.238116 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:07:04.238158 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:07:04.238179 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:07:04.238200 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:07:04.238226 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:07:04.238247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:07:04.238267 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:07:04.238287 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:07:04.238308 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:07:04.238328 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:07:04.238349 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:07:04.238369 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:07:04.238394 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:07:04.238415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:07:04.238436 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:07:04.238456 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:07:04.238476 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:07:04.238539 systemd-journald[252]: Collecting audit messages is disabled.
Jan 13 20:07:04.238589 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:07:04.238611 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:07:04.238634 kernel: Bridge firewalling registered
Jan 13 20:07:04.239735 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:07:04.239761 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:07:04.239783 systemd-journald[252]: Journal started
Jan 13 20:07:04.239822 systemd-journald[252]: Runtime Journal (/run/log/journal/ec26b9cc20611c44031acc1c36fe0379) is 8.0M, max 75.3M, 67.3M free.
Jan 13 20:07:04.189383 systemd-modules-load[253]: Inserted module 'overlay'
Jan 13 20:07:04.247767 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:07:04.218171 systemd-modules-load[253]: Inserted module 'br_netfilter'
Jan 13 20:07:04.250714 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:07:04.257916 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:07:04.271176 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:07:04.272053 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:07:04.287639 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:07:04.305253 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:07:04.329769 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:07:04.335613 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:07:04.338068 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:07:04.358044 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:07:04.364944 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:07:04.386861 dracut-cmdline[287]: dracut-dracut-053
Jan 13 20:07:04.394684 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:07:04.438202 systemd-resolved[288]: Positive Trust Anchors:
Jan 13 20:07:04.438265 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:07:04.438328 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:07:04.579702 kernel: SCSI subsystem initialized
Jan 13 20:07:04.587796 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:07:04.600910 kernel: iscsi: registered transport (tcp)
Jan 13 20:07:04.624682 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:07:04.624756 kernel: QLogic iSCSI HBA Driver
Jan 13 20:07:04.691823 kernel: random: crng init done
Jan 13 20:07:04.691979 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jan 13 20:07:04.695626 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:07:04.700618 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:07:04.727598 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:07:04.738028 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:07:04.776831 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:07:04.776906 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:07:04.778627 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:07:04.846718 kernel: raid6: neonx8 gen() 6590 MB/s
Jan 13 20:07:04.863711 kernel: raid6: neonx4 gen() 6392 MB/s
Jan 13 20:07:04.880712 kernel: raid6: neonx2 gen() 5311 MB/s
Jan 13 20:07:04.897708 kernel: raid6: neonx1 gen() 3897 MB/s
Jan 13 20:07:04.914703 kernel: raid6: int64x8 gen() 3771 MB/s
Jan 13 20:07:04.931715 kernel: raid6: int64x4 gen() 3662 MB/s
Jan 13 20:07:04.948711 kernel: raid6: int64x2 gen() 3512 MB/s
Jan 13 20:07:04.966523 kernel: raid6: int64x1 gen() 2737 MB/s
Jan 13 20:07:04.966597 kernel: raid6: using algorithm neonx8 gen() 6590 MB/s
Jan 13 20:07:04.984489 kernel: raid6: .... xor() 4836 MB/s, rmw enabled
Jan 13 20:07:04.984566 kernel: raid6: using neon recovery algorithm
Jan 13 20:07:04.992703 kernel: xor: measuring software checksum speed
Jan 13 20:07:04.993698 kernel: 8regs : 10098 MB/sec
Jan 13 20:07:04.995893 kernel: 32regs : 10615 MB/sec
Jan 13 20:07:04.995960 kernel: arm64_neon : 9551 MB/sec
Jan 13 20:07:04.995986 kernel: xor: using function: 32regs (10615 MB/sec)
Jan 13 20:07:05.082711 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:07:05.103735 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:07:05.114005 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:07:05.160184 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Jan 13 20:07:05.169784 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:07:05.186950 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:07:05.222863 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Jan 13 20:07:05.285854 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:07:05.297016 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:07:05.422090 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:07:05.441264 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:07:05.507302 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:07:05.524340 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:07:05.541576 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:07:05.557598 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:07:05.579034 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:07:05.638140 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:07:05.672769 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:07:05.672858 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 13 20:07:05.717833 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 20:07:05.718163 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 20:07:05.718446 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:d2:a6:6f:a1:95
Jan 13 20:07:05.718796 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 13 20:07:05.718832 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 20:07:05.680612 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:07:05.680934 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:07:05.684094 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:07:05.686342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:07:05.686710 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:07:05.738349 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 20:07:05.701770 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:07:05.711562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:07:05.746787 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:07:05.746867 kernel: GPT:9289727 != 16777215
Jan 13 20:07:05.749606 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:07:05.749703 kernel: GPT:9289727 != 16777215
Jan 13 20:07:05.749731 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:07:05.750713 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:07:05.761121 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:07:05.774927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:07:05.789045 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:07:05.837794 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:07:05.860751 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (527)
Jan 13 20:07:05.905712 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (516)
Jan 13 20:07:05.999122 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 20:07:06.015762 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 20:07:06.032597 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:07:06.047556 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 20:07:06.057095 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 20:07:06.071891 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:07:06.082124 disk-uuid[661]: Primary Header is updated.
Jan 13 20:07:06.082124 disk-uuid[661]: Secondary Entries is updated.
Jan 13 20:07:06.082124 disk-uuid[661]: Secondary Header is updated.
Jan 13 20:07:06.092688 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:07:07.108780 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:07:07.111633 disk-uuid[662]: The operation has completed successfully.
Jan 13 20:07:07.323904 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:07:07.324141 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:07:07.363902 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:07:07.372669 sh[924]: Success
Jan 13 20:07:07.399731 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:07:07.502414 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:07:07.518933 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:07:07.527739 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:07:07.564836 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78
Jan 13 20:07:07.564919 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:07:07.564947 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:07:07.567841 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:07:07.567911 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:07:07.662703 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:07:07.685818 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:07:07.687754 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:07:07.703111 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:07:07.716605 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:07:07.739422 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:07:07.739484 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:07:07.741796 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:07:07.747686 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:07:07.767349 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:07:07.770040 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:07:07.780265 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:07:07.800280 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:07:07.918641 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:07:07.938407 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:07:07.982546 systemd-networkd[1117]: lo: Link UP
Jan 13 20:07:07.982566 systemd-networkd[1117]: lo: Gained carrier
Jan 13 20:07:07.985394 systemd-networkd[1117]: Enumeration completed
Jan 13 20:07:07.985587 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:07:07.986540 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:07:07.986547 systemd-networkd[1117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:07:08.002408 systemd[1]: Reached target network.target - Network.
Jan 13 20:07:08.006389 systemd-networkd[1117]: eth0: Link UP
Jan 13 20:07:08.006401 systemd-networkd[1117]: eth0: Gained carrier
Jan 13 20:07:08.006419 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:07:08.026815 systemd-networkd[1117]: eth0: DHCPv4 address 172.31.17.103/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:07:08.135985 ignition[1017]: Ignition 2.20.0
Jan 13 20:07:08.136015 ignition[1017]: Stage: fetch-offline
Jan 13 20:07:08.136482 ignition[1017]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:07:08.136509 ignition[1017]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:07:08.141634 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:07:08.139374 ignition[1017]: Ignition finished successfully
Jan 13 20:07:08.160019 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:07:08.192005 ignition[1127]: Ignition 2.20.0
Jan 13 20:07:08.192037 ignition[1127]: Stage: fetch
Jan 13 20:07:08.193367 ignition[1127]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:07:08.193394 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:07:08.193681 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:07:08.216363 ignition[1127]: PUT result: OK
Jan 13 20:07:08.219487 ignition[1127]: parsed url from cmdline: ""
Jan 13 20:07:08.219510 ignition[1127]: no config URL provided
Jan 13 20:07:08.219526 ignition[1127]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:07:08.219579 ignition[1127]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:07:08.219614 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:07:08.223371 ignition[1127]: PUT result: OK
Jan 13 20:07:08.223451 ignition[1127]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 20:07:08.225483 ignition[1127]: GET result: OK
Jan 13 20:07:08.227438 ignition[1127]: parsing config with SHA512: 3f36a583b55e6c0f09a6b002ea31071772049ac429f244422a4e5ffa9cecedcc10f7384bf5b7d3cd1bf30c2afc19b386489b64c32fc855cb032bccbf2de8a65d
Jan 13 20:07:08.240116 unknown[1127]: fetched base config from "system"
Jan 13 20:07:08.240148 unknown[1127]: fetched base config from "system"
Jan 13 20:07:08.240162 unknown[1127]: fetched user config from "aws"
Jan 13 20:07:08.242632 ignition[1127]: fetch: fetch complete
Jan 13 20:07:08.242957 ignition[1127]: fetch: fetch passed
Jan 13 20:07:08.243394 ignition[1127]: Ignition finished successfully
Jan 13 20:07:08.256145 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:07:08.273017 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:07:08.295583 ignition[1134]: Ignition 2.20.0
Jan 13 20:07:08.296088 ignition[1134]: Stage: kargs
Jan 13 20:07:08.296706 ignition[1134]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:07:08.296731 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:07:08.296908 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:07:08.299462 ignition[1134]: PUT result: OK
Jan 13 20:07:08.309146 ignition[1134]: kargs: kargs passed
Jan 13 20:07:08.309255 ignition[1134]: Ignition finished successfully
Jan 13 20:07:08.313689 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:07:08.332022 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:07:08.355710 ignition[1141]: Ignition 2.20.0
Jan 13 20:07:08.355732 ignition[1141]: Stage: disks
Jan 13 20:07:08.356304 ignition[1141]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:07:08.356327 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:07:08.356475 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:07:08.359674 ignition[1141]: PUT result: OK
Jan 13 20:07:08.369928 ignition[1141]: disks: disks passed
Jan 13 20:07:08.370106 ignition[1141]: Ignition finished successfully
Jan 13 20:07:08.373729 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:07:08.378412 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:07:08.381438 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:07:08.383681 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:07:08.385563 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:07:08.387568 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:07:08.418048 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:07:08.457465 systemd-fsck[1150]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:07:08.466144 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:07:08.483890 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:07:08.569105 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none.
Jan 13 20:07:08.570162 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:07:08.574019 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:07:08.591818 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:07:08.597870 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:07:08.600204 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:07:08.600289 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:07:08.600337 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:07:08.624702 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1169)
Jan 13 20:07:08.628543 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:07:08.628621 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:07:08.628666 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:07:08.636168 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:07:08.643446 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:07:08.651313 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:07:08.657562 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:07:09.027052 initrd-setup-root[1193]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:07:09.047710 initrd-setup-root[1200]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:07:09.056361 initrd-setup-root[1207]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:07:09.065276 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:07:09.317829 systemd-networkd[1117]: eth0: Gained IPv6LL
Jan 13 20:07:09.423391 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:07:09.438404 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:07:09.445962 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:07:09.464272 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:07:09.466143 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:07:09.510355 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:07:09.512855 ignition[1281]: INFO : Ignition 2.20.0
Jan 13 20:07:09.512855 ignition[1281]: INFO : Stage: mount
Jan 13 20:07:09.518501 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:07:09.518501 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:07:09.518501 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:07:09.536455 ignition[1281]: INFO : PUT result: OK
Jan 13 20:07:09.541356 ignition[1281]: INFO : mount: mount passed
Jan 13 20:07:09.541356 ignition[1281]: INFO : Ignition finished successfully
Jan 13 20:07:09.544594 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:07:09.560864 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:07:09.588007 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:07:09.615689 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1293)
Jan 13 20:07:09.619861 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:07:09.619930 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:07:09.621099 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:07:09.626681 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:07:09.630432 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:07:09.673370 ignition[1310]: INFO : Ignition 2.20.0
Jan 13 20:07:09.673370 ignition[1310]: INFO : Stage: files
Jan 13 20:07:09.676778 ignition[1310]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:07:09.676778 ignition[1310]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:07:09.676778 ignition[1310]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:07:09.684187 ignition[1310]: INFO : PUT result: OK
Jan 13 20:07:09.688668 ignition[1310]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:07:09.691878 ignition[1310]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:07:09.691878 ignition[1310]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:07:09.699136 ignition[1310]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:07:09.702111 ignition[1310]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:07:09.705378 unknown[1310]: wrote ssh authorized keys file for user: core
Jan 13 20:07:09.707766 ignition[1310]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:07:09.718448 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:07:09.718448 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:07:09.820846 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:07:09.976746 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:07:09.976746 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:07:09.984068 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 13 20:07:10.442343 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:07:10.586164 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 20:07:10.586164 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:07:10.592864 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:07:10.592864 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:07:10.592864 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:07:10.592864 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:07:10.592864 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:07:10.592864 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:07:10.592864 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:07:10.592864 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:07:10.592864 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:07:10.592864 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:07:10.592864 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:07:10.592864 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:07:10.592864 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 13 20:07:11.022975 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 20:07:11.367178 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:07:11.367178 ignition[1310]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 20:07:11.380927 ignition[1310]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:07:11.384419 ignition[1310]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:07:11.384419 ignition[1310]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 20:07:11.384419 ignition[1310]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:07:11.384419 ignition[1310]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:07:11.384419 ignition[1310]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:07:11.384419 ignition[1310]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:07:11.384419 ignition[1310]: INFO : files: files passed
Jan 13 20:07:11.384419 ignition[1310]: INFO : Ignition finished successfully
Jan 13 20:07:11.388471 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:07:11.419966 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:07:11.428869 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:07:11.434313 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:07:11.434520 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:07:11.480095 initrd-setup-root-after-ignition[1338]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:07:11.480095 initrd-setup-root-after-ignition[1338]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:07:11.487975 initrd-setup-root-after-ignition[1342]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:07:11.494778 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:07:11.501100 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:07:11.511935 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:07:11.565137 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:07:11.565570 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:07:11.572978 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:07:11.576985 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:07:11.579287 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:07:11.593054 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:07:11.621299 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:07:11.638086 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:07:11.661617 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:07:11.662822 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:07:11.663516 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:07:11.664204 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:07:11.664435 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:07:11.665307 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:07:11.665623 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:07:11.666227 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:07:11.666525 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:07:11.666845 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:07:11.667399 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:07:11.667739 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:07:11.668320 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:07:11.668633 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:07:11.668924 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:07:11.669162 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:07:11.669387 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:07:11.670418 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:07:11.671330 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:07:11.671846 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 13 20:07:11.692358 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:07:11.692642 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:07:11.692934 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:07:11.693894 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:07:11.694132 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:07:11.694910 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:07:11.695121 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:07:11.764131 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:07:11.771039 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:07:11.775896 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:07:11.778878 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:07:11.783377 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:07:11.783610 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:07:11.806880 ignition[1362]: INFO : Ignition 2.20.0 Jan 13 20:07:11.806880 ignition[1362]: INFO : Stage: umount Jan 13 20:07:11.806880 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:07:11.806880 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 20:07:11.813890 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 20:07:11.813890 ignition[1362]: INFO : PUT result: OK Jan 13 20:07:11.823823 ignition[1362]: INFO : umount: umount passed Jan 13 20:07:11.823823 ignition[1362]: INFO : Ignition finished successfully Jan 13 20:07:11.826464 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:07:11.827980 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:07:11.835991 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:07:11.836542 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:07:11.844886 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:07:11.845004 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:07:11.847909 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:07:11.848004 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:07:11.854450 systemd[1]: Stopped target network.target - Network. Jan 13 20:07:11.854625 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:07:11.855622 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:07:11.862884 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:07:11.866791 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:07:11.869731 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:07:11.873486 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:07:11.876105 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:07:11.881866 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:07:11.881953 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:07:11.882154 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jan 13 20:07:11.882216 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:07:11.882432 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:07:11.882516 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:07:11.885937 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:07:11.886021 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:07:11.886436 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:07:11.887211 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:07:11.891137 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:07:11.892287 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:07:11.892484 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:07:11.936228 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:07:11.936723 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:07:11.940775 systemd-networkd[1117]: eth0: DHCPv6 lease lost Jan 13 20:07:11.949392 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:07:11.949855 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:07:11.956575 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:07:11.956719 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:07:11.968834 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:07:11.972345 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:07:11.972486 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:07:11.983871 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:07:11.983998 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:07:11.987923 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:07:11.988039 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:07:12.007459 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:07:12.007570 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:07:12.011056 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:07:12.022512 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:07:12.027342 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:07:12.036152 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:07:12.036357 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:07:12.052447 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:07:12.052816 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:07:12.055848 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:07:12.055951 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:07:12.060092 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:07:12.060169 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:07:12.066974 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 13 20:07:12.067067 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:07:12.104040 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:07:12.104155 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:07:12.110099 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:07:12.110213 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:07:12.128999 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:07:12.142120 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:07:12.142263 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:07:12.148352 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:07:12.148482 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:07:12.152845 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:07:12.152958 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:07:12.161570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:07:12.161717 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:07:12.164531 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:07:12.164748 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:07:12.167584 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:07:12.167783 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:07:12.171703 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:07:12.193932 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:07:12.211842 systemd[1]: Switching root. Jan 13 20:07:12.279614 systemd-journald[252]: Journal stopped Jan 13 20:07:14.864346 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Jan 13 20:07:14.864513 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:07:14.864558 kernel: SELinux: policy capability open_perms=1 Jan 13 20:07:14.864588 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:07:14.864619 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:07:14.864689 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:07:14.864724 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:07:14.864755 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:07:14.864785 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:07:14.864821 kernel: audit: type=1403 audit(1736798832.858:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:07:14.864860 systemd[1]: Successfully loaded SELinux policy in 51.199ms. Jan 13 20:07:14.864899 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.300ms. Jan 13 20:07:14.864936 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:07:14.864970 systemd[1]: Detected virtualization amazon. 
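
The journald handoff just logged brackets the initrd-to-real-root switch: the initrd journal stops at 20:07:12.279614 and the system journald reports taking over at 20:07:14.864346, with the SELinux policy load (51.199ms) and relabel (26.300ms) falling inside that window. A small helper for turning these stamps into durations; the year is an assumption taken from the kernel build date, since the journal's short format omits it:

    from datetime import datetime

    FMT = "%Y %b %d %H:%M:%S.%f"

    def ts(stamp: str) -> datetime:
        # Journal short-format stamps carry no year; 2025 is assumed here.
        return datetime.strptime("2025 " + stamp, FMT)

    stopped = ts("Jan 13 20:07:12.279614")    # initrd journald exits
    restarted = ts("Jan 13 20:07:14.864346")  # system journald takes over

    print(restarted - stopped)  # 0:00:02.584732, i.e. ~2.6 s for the switch
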
Jan 13 20:07:14.864999 systemd[1]: Detected architecture arm64. Jan 13 20:07:14.865031 systemd[1]: Detected first boot. Jan 13 20:07:14.865062 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:07:14.865094 zram_generator::config[1405]: No configuration found. Jan 13 20:07:14.865138 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:07:14.865171 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:07:14.865203 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:07:14.865235 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:07:14.865267 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:07:14.865299 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:07:14.865330 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:07:14.865361 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:07:14.865399 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:07:14.865451 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:07:14.865484 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:07:14.865516 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:07:14.865548 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:07:14.865581 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:07:14.865613 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:07:14.866319 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:07:14.866412 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:07:14.866447 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:07:14.866479 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:07:14.866522 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:07:14.866557 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:07:14.866589 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:07:14.866623 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:07:14.867730 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:07:14.867803 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:07:14.867839 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:07:14.867871 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:07:14.870053 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:07:14.870117 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:07:14.870150 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:07:14.870180 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:07:14.870214 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jan 13 20:07:14.870245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:07:14.870275 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:07:14.870318 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:07:14.870350 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:07:14.870382 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:07:14.870413 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:07:14.870446 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:07:14.870478 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:07:14.870510 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:07:14.870546 systemd[1]: Reached target machines.target - Containers. Jan 13 20:07:14.870585 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:07:14.870617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:07:14.870700 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:07:14.870737 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:07:14.870767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:07:14.870800 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:07:14.870830 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:07:14.870860 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:07:14.870903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:07:14.870937 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:07:14.870967 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:07:14.870997 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:07:14.871026 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:07:14.871055 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:07:14.871084 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:07:14.871114 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:07:14.871144 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:07:14.871181 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:07:14.871212 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:07:14.871244 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:07:14.871280 systemd[1]: Stopped verity-setup.service. Jan 13 20:07:14.871309 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:07:14.871338 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:07:14.871372 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 13 20:07:14.871403 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:07:14.871436 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:07:14.871472 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:07:14.871504 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:07:14.871591 systemd-journald[1483]: Collecting audit messages is disabled. Jan 13 20:07:14.871700 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:07:14.871742 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:07:14.871772 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:07:14.871806 systemd-journald[1483]: Journal started Jan 13 20:07:14.871856 systemd-journald[1483]: Runtime Journal (/run/log/journal/ec26b9cc20611c44031acc1c36fe0379) is 8.0M, max 75.3M, 67.3M free. Jan 13 20:07:14.237964 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:07:14.875147 kernel: fuse: init (API version 7.39) Jan 13 20:07:14.875236 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:07:14.336572 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 13 20:07:14.337490 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:07:14.881708 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:07:14.889990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:07:14.891783 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:07:14.894986 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:07:14.895331 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:07:14.899077 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:07:14.910869 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:07:14.915762 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:07:14.926148 kernel: loop: module loaded Jan 13 20:07:14.926254 kernel: ACPI: bus type drm_connector registered Jan 13 20:07:14.929895 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:07:14.930897 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:07:14.934726 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:07:14.935943 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:07:14.968056 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:07:14.979041 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:07:14.990404 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:07:14.993920 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:07:14.993994 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:07:14.998792 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:07:15.013928 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:07:15.030871 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 13 20:07:15.033140 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:07:15.045132 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:07:15.050718 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:07:15.053288 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:07:15.057369 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:07:15.059962 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:07:15.068178 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:07:15.074053 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:07:15.085928 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:07:15.096856 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:07:15.099715 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:07:15.102446 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:07:15.106753 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:07:15.182828 systemd-journald[1483]: Time spent on flushing to /var/log/journal/ec26b9cc20611c44031acc1c36fe0379 is 70.575ms for 911 entries. Jan 13 20:07:15.182828 systemd-journald[1483]: System Journal (/var/log/journal/ec26b9cc20611c44031acc1c36fe0379) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:07:15.276371 systemd-journald[1483]: Received client request to flush runtime journal. Jan 13 20:07:15.276487 kernel: loop0: detected capacity change from 0 to 189592 Jan 13 20:07:15.190451 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:07:15.194299 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:07:15.207027 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:07:15.224806 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:07:15.229349 systemd-tmpfiles[1534]: ACLs are not supported, ignoring. Jan 13 20:07:15.229374 systemd-tmpfiles[1534]: ACLs are not supported, ignoring. Jan 13 20:07:15.296155 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:07:15.296795 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:07:15.305135 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:07:15.313540 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:07:15.329101 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:07:15.332119 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:07:15.357722 kernel: loop1: detected capacity change from 0 to 116808 Jan 13 20:07:15.421872 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:07:15.439045 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jan 13 20:07:15.475227 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:07:15.488159 kernel: loop2: detected capacity change from 0 to 53784 Jan 13 20:07:15.486184 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:07:15.506303 udevadm[1555]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:07:15.550942 systemd-tmpfiles[1558]: ACLs are not supported, ignoring. Jan 13 20:07:15.551576 systemd-tmpfiles[1558]: ACLs are not supported, ignoring. Jan 13 20:07:15.559711 kernel: loop3: detected capacity change from 0 to 113536 Jan 13 20:07:15.575939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:07:15.688931 kernel: loop4: detected capacity change from 0 to 189592 Jan 13 20:07:15.728735 kernel: loop5: detected capacity change from 0 to 116808 Jan 13 20:07:15.748713 kernel: loop6: detected capacity change from 0 to 53784 Jan 13 20:07:15.777775 kernel: loop7: detected capacity change from 0 to 113536 Jan 13 20:07:15.791545 (sd-merge)[1563]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 13 20:07:15.793224 (sd-merge)[1563]: Merged extensions into '/usr'. Jan 13 20:07:15.807054 systemd[1]: Reloading requested from client PID 1533 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:07:15.807096 systemd[1]: Reloading... Jan 13 20:07:15.985687 zram_generator::config[1592]: No configuration found. Jan 13 20:07:16.343047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:07:16.486003 systemd[1]: Reloading finished in 678 ms. Jan 13 20:07:16.543861 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:07:16.547698 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:07:16.567983 systemd[1]: Starting ensure-sysext.service... Jan 13 20:07:16.574044 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:07:16.585040 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:07:16.611986 systemd[1]: Reloading requested from client PID 1641 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:07:16.612018 systemd[1]: Reloading... Jan 13 20:07:16.637263 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:07:16.640047 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:07:16.646976 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:07:16.649158 systemd-tmpfiles[1642]: ACLs are not supported, ignoring. Jan 13 20:07:16.649324 systemd-tmpfiles[1642]: ACLs are not supported, ignoring. Jan 13 20:07:16.677080 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:07:16.677113 systemd-tmpfiles[1642]: Skipping /boot Jan 13 20:07:16.738615 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot. 
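
The (sd-merge) lines above show systemd-sysext overlaying the four extension images onto /usr, which is why systemd then reloads its unit set. Whether an image is merged at all hinges on its extension-release file agreeing with the host's os-release. The checker below is a simplified sketch of that match rule as described in the systemd-sysext documentation; the real implementation handles more cases (architecture checks, level fallbacks) that are omitted here:

    import pathlib

    def parse_release(path: pathlib.Path) -> dict:
        # Parse KEY=value lines as found in os-release / extension-release files.
        fields = {}
        for line in path.read_text().splitlines():
            if "=" in line and not line.startswith("#"):
                key, _, value = line.partition("=")
                fields[key.strip()] = value.strip().strip('"')
        return fields

    def would_merge(host_release: pathlib.Path, ext_release: pathlib.Path) -> bool:
        host, ext = parse_release(host_release), parse_release(ext_release)
        if ext.get("ID") == "_any":            # distro-agnostic extension
            return True
        if ext.get("ID") != host.get("ID"):
            return False
        # Simplification: if the extension pins a level or version, require
        # an exact match against the host's value.
        for key in ("SYSEXT_LEVEL", "VERSION_ID"):
            if key in ext and ext[key] != host.get(key):
                return False
        return True
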
Jan 13 20:07:16.738640 systemd-tmpfiles[1642]: Skipping /boot Jan 13 20:07:16.747617 systemd-udevd[1643]: Using default interface naming scheme 'v255'. Jan 13 20:07:16.821688 zram_generator::config[1671]: No configuration found. Jan 13 20:07:17.076396 (udev-worker)[1690]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:07:17.125443 ldconfig[1528]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:07:17.289384 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:07:17.351606 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1708) Jan 13 20:07:17.455290 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:07:17.456290 systemd[1]: Reloading finished in 843 ms. Jan 13 20:07:17.518487 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:07:17.521976 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:07:17.557569 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:07:17.610760 systemd[1]: Finished ensure-sysext.service. Jan 13 20:07:17.679269 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:07:17.699018 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:07:17.701639 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:07:17.706985 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:07:17.713046 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:07:17.725130 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:07:17.741030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:07:17.745063 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:07:17.752278 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:07:17.762497 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:07:17.772991 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:07:17.775496 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:07:17.784019 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:07:17.789146 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:07:17.796961 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:07:17.800576 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:07:17.803778 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:07:17.806826 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:07:17.808888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:07:17.812739 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 13 20:07:17.813055 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:07:17.834227 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 20:07:17.845345 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:07:17.846825 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:07:17.877931 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:07:17.886996 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:07:17.889231 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:07:17.889354 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:07:17.914725 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:07:17.917452 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:07:17.934843 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:07:17.942764 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:07:17.962785 lvm[1861]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:07:17.988774 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:07:17.994715 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:07:18.014039 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:07:18.026784 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:07:18.027338 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:07:18.036842 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:07:18.065689 lvm[1885]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:07:18.075691 augenrules[1887]: No rules Jan 13 20:07:18.080283 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:07:18.082784 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:07:18.093438 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:07:18.114470 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:07:18.122395 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:07:18.198145 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:07:18.234413 systemd-networkd[1847]: lo: Link UP Jan 13 20:07:18.234433 systemd-networkd[1847]: lo: Gained carrier Jan 13 20:07:18.237233 systemd-networkd[1847]: Enumeration completed Jan 13 20:07:18.237453 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:07:18.239397 systemd-networkd[1847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 20:07:18.239405 systemd-networkd[1847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:07:18.241865 systemd-networkd[1847]: eth0: Link UP Jan 13 20:07:18.242166 systemd-networkd[1847]: eth0: Gained carrier Jan 13 20:07:18.242199 systemd-networkd[1847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:07:18.249037 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:07:18.254782 systemd-networkd[1847]: eth0: DHCPv4 address 172.31.17.103/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 20:07:18.276528 systemd-resolved[1848]: Positive Trust Anchors: Jan 13 20:07:18.276606 systemd-resolved[1848]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:07:18.276706 systemd-resolved[1848]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:07:18.285386 systemd-resolved[1848]: Defaulting to hostname 'linux'. Jan 13 20:07:18.288461 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:07:18.290747 systemd[1]: Reached target network.target - Network. Jan 13 20:07:18.292432 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:07:18.294787 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:07:18.297042 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:07:18.299584 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:07:18.302371 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:07:18.305059 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:07:18.307406 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:07:18.309724 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:07:18.309784 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:07:18.311516 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:07:18.314979 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:07:18.319716 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:07:18.330357 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:07:18.333894 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:07:18.336564 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:07:18.338735 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:07:18.340694 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
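
The lease line above ("eth0: DHCPv4 address 172.31.17.103/20, gateway 172.31.16.1 acquired from 172.31.16.1") has a stable shape that makes it easy to scrape when auditing boots. A small illustrative parser; the field names are my own, not a systemd interface:

    import re

    LEASE_RE = re.compile(
        r"(?P<ifname>\S+): DHCPv4 address (?P<addr>[\d.]+)/(?P<prefix>\d+), "
        r"gateway (?P<gw>[\d.]+) acquired from (?P<server>[\d.]+)"
    )

    line = ("systemd-networkd[1847]: eth0: DHCPv4 address 172.31.17.103/20, "
            "gateway 172.31.16.1 acquired from 172.31.16.1")

    match = LEASE_RE.search(line)
    assert match is not None
    print(match.groupdict())
    # {'ifname': 'eth0', 'addr': '172.31.17.103', 'prefix': '20',
    #  'gw': '172.31.16.1', 'server': '172.31.16.1'}
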
Jan 13 20:07:18.340753 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:07:18.344909 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:07:18.354169 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:07:18.361015 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:07:18.367197 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:07:18.377044 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:07:18.380164 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:07:18.386989 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:07:18.396357 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 20:07:18.412529 jq[1911]: false Jan 13 20:07:18.419442 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:07:18.430622 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 20:07:18.437077 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:07:18.455176 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:07:18.467035 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:07:18.471073 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:07:18.471996 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:07:18.478934 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:07:18.483932 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:07:18.492490 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:07:18.493809 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:07:18.551196 dbus-daemon[1910]: [system] SELinux support is enabled Jan 13 20:07:18.554503 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:07:18.558332 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:07:18.560779 dbus-daemon[1910]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1847 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:07:18.567153 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:07:18.567222 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:07:18.570555 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 20:07:18.571919 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 13 20:07:18.571959 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:07:18.593826 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:07 UTC 2025 (1): Starting Jan 13 20:07:18.593826 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:07:18.593826 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: ---------------------------------------------------- Jan 13 20:07:18.593826 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:07:18.593826 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:07:18.593826 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: corporation. Support and training for ntp-4 are Jan 13 20:07:18.593826 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: available at https://www.nwtime.org/support Jan 13 20:07:18.593826 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: ---------------------------------------------------- Jan 13 20:07:18.594542 jq[1923]: true Jan 13 20:07:18.592962 ntpd[1914]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:29:07 UTC 2025 (1): Starting Jan 13 20:07:18.593012 ntpd[1914]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:07:18.593032 ntpd[1914]: ---------------------------------------------------- Jan 13 20:07:18.593052 ntpd[1914]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:07:18.593071 ntpd[1914]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:07:18.593090 ntpd[1914]: corporation. Support and training for ntp-4 are Jan 13 20:07:18.593108 ntpd[1914]: available at https://www.nwtime.org/support Jan 13 20:07:18.593126 ntpd[1914]: ---------------------------------------------------- Jan 13 20:07:18.601100 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 13 20:07:18.603895 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: proto: precision = 0.096 usec (-23) Jan 13 20:07:18.602081 ntpd[1914]: proto: precision = 0.096 usec (-23) Jan 13 20:07:18.604897 ntpd[1914]: basedate set to 2025-01-01 Jan 13 20:07:18.605090 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: basedate set to 2025-01-01 Jan 13 20:07:18.605175 ntpd[1914]: gps base set to 2025-01-05 (week 2348) Jan 13 20:07:18.605329 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: gps base set to 2025-01-05 (week 2348) Jan 13 20:07:18.610101 ntpd[1914]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:07:18.614699 extend-filesystems[1912]: Found loop4 Jan 13 20:07:18.612474 ntpd[1914]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:07:18.624726 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:07:18.624726 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:07:18.624726 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:07:18.624726 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: Listen normally on 3 eth0 172.31.17.103:123 Jan 13 20:07:18.624726 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: Listen normally on 4 lo [::1]:123 Jan 13 20:07:18.624726 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: bind(21) AF_INET6 fe80::4d2:a6ff:fe6f:a195%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:07:18.624726 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: unable to create socket on eth0 (5) for fe80::4d2:a6ff:fe6f:a195%2#123 Jan 13 20:07:18.624726 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: failed to init interface for address fe80::4d2:a6ff:fe6f:a195%2 Jan 13 20:07:18.624726 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: Listening on routing socket on fd #21 for interface updates Jan 13 20:07:18.625140 extend-filesystems[1912]: Found loop5 Jan 13 20:07:18.625140 extend-filesystems[1912]: Found loop6 Jan 13 20:07:18.625140 extend-filesystems[1912]: Found loop7 Jan 13 20:07:18.625140 extend-filesystems[1912]: Found nvme0n1 Jan 13 20:07:18.625140 extend-filesystems[1912]: Found nvme0n1p1 Jan 13 20:07:18.612833 ntpd[1914]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:07:18.642089 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:18.642089 ntpd[1914]: 13 Jan 20:07:18 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:18.612915 ntpd[1914]: Listen normally on 3 eth0 172.31.17.103:123 Jan 13 20:07:18.612988 ntpd[1914]: Listen normally on 4 lo [::1]:123 Jan 13 20:07:18.613088 ntpd[1914]: bind(21) AF_INET6 fe80::4d2:a6ff:fe6f:a195%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:07:18.613133 ntpd[1914]: unable to create socket on eth0 (5) for fe80::4d2:a6ff:fe6f:a195%2#123 Jan 13 20:07:18.613168 ntpd[1914]: failed to init interface for address fe80::4d2:a6ff:fe6f:a195%2 Jan 13 20:07:18.613235 ntpd[1914]: Listening on routing socket on fd #21 for interface updates Jan 13 20:07:18.627349 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:18.627406 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:07:18.648818 extend-filesystems[1912]: Found nvme0n1p2 Jan 13 20:07:18.648818 extend-filesystems[1912]: Found nvme0n1p3 Jan 13 20:07:18.648818 extend-filesystems[1912]: Found usr Jan 13 20:07:18.648818 extend-filesystems[1912]: Found nvme0n1p4 Jan 13 20:07:18.648818 extend-filesystems[1912]: Found nvme0n1p6 Jan 13 20:07:18.648818 extend-filesystems[1912]: Found nvme0n1p7 Jan 13 
20:07:18.648818 extend-filesystems[1912]: Found nvme0n1p9 Jan 13 20:07:18.648818 extend-filesystems[1912]: Checking size of /dev/nvme0n1p9 Jan 13 20:07:18.682479 tar[1925]: linux-arm64/helm Jan 13 20:07:18.722226 (ntainerd)[1945]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:07:18.727177 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:07:18.728781 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:07:18.759105 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:07:18.759616 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:07:18.786771 jq[1939]: true Jan 13 20:07:18.811866 update_engine[1922]: I20250113 20:07:18.807675 1922 main.cc:92] Flatcar Update Engine starting Jan 13 20:07:18.823771 extend-filesystems[1912]: Resized partition /dev/nvme0n1p9 Jan 13 20:07:18.826931 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:07:18.830695 update_engine[1922]: I20250113 20:07:18.828019 1922 update_check_scheduler.cc:74] Next update check in 9m49s Jan 13 20:07:18.835971 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:07:18.847138 extend-filesystems[1964]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:07:18.853312 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 20:07:18.877910 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 13 20:07:18.995223 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 20:07:19.014379 coreos-metadata[1909]: Jan 13 20:07:19.014 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:07:19.032953 coreos-metadata[1909]: Jan 13 20:07:19.019 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 20:07:19.032953 coreos-metadata[1909]: Jan 13 20:07:19.022 INFO Fetch successful Jan 13 20:07:19.032953 coreos-metadata[1909]: Jan 13 20:07:19.022 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 20:07:19.032953 coreos-metadata[1909]: Jan 13 20:07:19.026 INFO Fetch successful Jan 13 20:07:19.032953 coreos-metadata[1909]: Jan 13 20:07:19.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 20:07:19.032953 coreos-metadata[1909]: Jan 13 20:07:19.030 INFO Fetch successful Jan 13 20:07:19.032953 coreos-metadata[1909]: Jan 13 20:07:19.030 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 20:07:19.033305 extend-filesystems[1964]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 20:07:19.033305 extend-filesystems[1964]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:07:19.033305 extend-filesystems[1964]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 20:07:19.059851 extend-filesystems[1912]: Resized filesystem in /dev/nvme0n1p9 Jan 13 20:07:19.036689 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:07:19.038876 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
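
For scale, the online resize just logged grows the root filesystem on /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB, roughly 2.1 GiB to 5.7 GiB:

    # Block counts from the EXT4-fs lines above; 4 KiB block size as logged.
    BLOCK = 4096
    old_blocks, new_blocks = 553472, 1489915
    print(f"{old_blocks * BLOCK / 2**30:.2f} GiB -> {new_blocks * BLOCK / 2**30:.2f} GiB")
    # 2.11 GiB -> 5.68 GiB
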
Jan 13 20:07:19.063222 coreos-metadata[1909]: Jan 13 20:07:19.062 INFO Fetch successful Jan 13 20:07:19.063222 coreos-metadata[1909]: Jan 13 20:07:19.062 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 20:07:19.065619 coreos-metadata[1909]: Jan 13 20:07:19.065 INFO Fetch failed with 404: resource not found Jan 13 20:07:19.065619 coreos-metadata[1909]: Jan 13 20:07:19.065 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 20:07:19.074486 coreos-metadata[1909]: Jan 13 20:07:19.074 INFO Fetch successful Jan 13 20:07:19.074486 coreos-metadata[1909]: Jan 13 20:07:19.074 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 20:07:19.076980 coreos-metadata[1909]: Jan 13 20:07:19.076 INFO Fetch successful Jan 13 20:07:19.076980 coreos-metadata[1909]: Jan 13 20:07:19.076 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 20:07:19.078745 coreos-metadata[1909]: Jan 13 20:07:19.077 INFO Fetch successful Jan 13 20:07:19.078745 coreos-metadata[1909]: Jan 13 20:07:19.077 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 20:07:19.080504 systemd-logind[1920]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:07:19.080561 systemd-logind[1920]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 13 20:07:19.083028 coreos-metadata[1909]: Jan 13 20:07:19.082 INFO Fetch successful Jan 13 20:07:19.083028 coreos-metadata[1909]: Jan 13 20:07:19.082 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 20:07:19.084486 coreos-metadata[1909]: Jan 13 20:07:19.084 INFO Fetch successful Jan 13 20:07:19.084852 systemd-logind[1920]: New seat seat0. Jan 13 20:07:19.093350 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:07:19.124753 bash[1989]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:07:19.157386 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:07:19.171440 systemd[1]: Starting sshkeys.service... Jan 13 20:07:19.263359 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:07:19.267804 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:07:19.278834 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:07:19.289361 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:07:19.322696 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1690) Jan 13 20:07:19.404202 containerd[1945]: time="2025-01-13T20:07:19.402316498Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:07:19.451976 locksmithd[1965]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:07:19.533879 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:07:19.535092 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
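
Both Ignition (earlier) and coreos-metadata (above) talk to EC2's IMDSv2: a PUT to /latest/api/token yields a session token that must accompany every metadata GET, which is exactly the "PUT result: OK" / "Fetch successful" cadence in the log. A minimal standalone client for the same flow, stdlib only; it can only succeed when run on an actual EC2 instance:

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds: int = 21600) -> str:
        # IMDSv2 step 1: PUT for a session token with a requested TTL.
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        # IMDSv2 step 2: present the token on each metadata request.
        req = urllib.request.Request(
            f"{IMDS}/{path}", headers={"X-aws-ec2-metadata-token": token}
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    token = imds_token()
    print(imds_get("2021-01-03/meta-data/instance-id", token))
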
Jan 13 20:07:19.549822 dbus-daemon[1910]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1937 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 13 20:07:19.560361 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 13 20:07:19.595329 ntpd[1914]: bind(24) AF_INET6 fe80::4d2:a6ff:fe6f:a195%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 20:07:19.596091 ntpd[1914]: 13 Jan 20:07:19 ntpd[1914]: bind(24) AF_INET6 fe80::4d2:a6ff:fe6f:a195%2#123 flags 0x11 failed: Cannot assign requested address
Jan 13 20:07:19.596091 ntpd[1914]: 13 Jan 20:07:19 ntpd[1914]: unable to create socket on eth0 (6) for fe80::4d2:a6ff:fe6f:a195%2#123
Jan 13 20:07:19.596091 ntpd[1914]: 13 Jan 20:07:19 ntpd[1914]: failed to init interface for address fe80::4d2:a6ff:fe6f:a195%2
Jan 13 20:07:19.595389 ntpd[1914]: unable to create socket on eth0 (6) for fe80::4d2:a6ff:fe6f:a195%2#123
Jan 13 20:07:19.595418 ntpd[1914]: failed to init interface for address fe80::4d2:a6ff:fe6f:a195%2
Jan 13 20:07:19.617852 polkitd[2066]: Started polkitd version 121
Jan 13 20:07:19.625740 containerd[1945]: time="2025-01-13T20:07:19.624519240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:07:19.632684 containerd[1945]: time="2025-01-13T20:07:19.631245180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:07:19.632684 containerd[1945]: time="2025-01-13T20:07:19.631307460Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:07:19.632684 containerd[1945]: time="2025-01-13T20:07:19.631343520Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:07:19.632684 containerd[1945]: time="2025-01-13T20:07:19.631636212Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:07:19.632684 containerd[1945]: time="2025-01-13T20:07:19.631688400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:07:19.632684 containerd[1945]: time="2025-01-13T20:07:19.631809732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:07:19.632684 containerd[1945]: time="2025-01-13T20:07:19.631837080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:07:19.632684 containerd[1945]: time="2025-01-13T20:07:19.632113044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:07:19.632684 containerd[1945]: time="2025-01-13T20:07:19.632141628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:07:19.632684 containerd[1945]: time="2025-01-13T20:07:19.632171124Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:07:19.632684 containerd[1945]: time="2025-01-13T20:07:19.632194032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:07:19.633185 containerd[1945]: time="2025-01-13T20:07:19.632360316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:07:19.634944 containerd[1945]: time="2025-01-13T20:07:19.634874616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:07:19.635950 containerd[1945]: time="2025-01-13T20:07:19.635124612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:07:19.635950 containerd[1945]: time="2025-01-13T20:07:19.635166876Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:07:19.635950 containerd[1945]: time="2025-01-13T20:07:19.635375856Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:07:19.635950 containerd[1945]: time="2025-01-13T20:07:19.635471496Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:07:19.645945 polkitd[2066]: Loading rules from directory /etc/polkit-1/rules.d
Jan 13 20:07:19.646073 polkitd[2066]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 13 20:07:19.646678 containerd[1945]: time="2025-01-13T20:07:19.646592592Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:07:19.647132 containerd[1945]: time="2025-01-13T20:07:19.646763136Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:07:19.647132 containerd[1945]: time="2025-01-13T20:07:19.646837284Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:07:19.647132 containerd[1945]: time="2025-01-13T20:07:19.646900776Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:07:19.647132 containerd[1945]: time="2025-01-13T20:07:19.646977492Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:07:19.649222 containerd[1945]: time="2025-01-13T20:07:19.647482764Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:07:19.649371 containerd[1945]: time="2025-01-13T20:07:19.649289844Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.649592772Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.649642416Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.651340116Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.651413316Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.651448620Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.651501936Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.651545016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.651605244Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.651712764Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.651747168Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.651801360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.651848916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.651906252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.653217 containerd[1945]: time="2025-01-13T20:07:19.651936264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.651126 polkitd[2066]: Finished loading, compiling and executing 2 rules
Jan 13 20:07:19.654000 containerd[1945]: time="2025-01-13T20:07:19.652656924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.654000 containerd[1945]: time="2025-01-13T20:07:19.652743744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.654000 containerd[1945]: time="2025-01-13T20:07:19.652780188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.654000 containerd[1945]: time="2025-01-13T20:07:19.652837872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.654000 containerd[1945]: time="2025-01-13T20:07:19.652872888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.654000 containerd[1945]: time="2025-01-13T20:07:19.652983084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.654000 containerd[1945]: time="2025-01-13T20:07:19.653492796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.654000 containerd[1945]: time="2025-01-13T20:07:19.653533776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.654000 containerd[1945]: time="2025-01-13T20:07:19.653590368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.654000 containerd[1945]: time="2025-01-13T20:07:19.653626356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.654000 containerd[1945]: time="2025-01-13T20:07:19.653702448Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:07:19.654000 containerd[1945]: time="2025-01-13T20:07:19.653787264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.655396 containerd[1945]: time="2025-01-13T20:07:19.654718224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.655396 containerd[1945]: time="2025-01-13T20:07:19.654813192Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:07:19.655396 containerd[1945]: time="2025-01-13T20:07:19.655018212Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:07:19.655396 containerd[1945]: time="2025-01-13T20:07:19.655081704Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:07:19.655396 containerd[1945]: time="2025-01-13T20:07:19.655113384Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:07:19.655396 containerd[1945]: time="2025-01-13T20:07:19.655167540Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:07:19.655396 containerd[1945]: time="2025-01-13T20:07:19.655194012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.655396 containerd[1945]: time="2025-01-13T20:07:19.655224744Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:07:19.655396 containerd[1945]: time="2025-01-13T20:07:19.655274280Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:07:19.655396 containerd[1945]: time="2025-01-13T20:07:19.655299708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:07:19.660684 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 13 20:07:19.664379 containerd[1945]: time="2025-01-13T20:07:19.657833628Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:07:19.664379 containerd[1945]: time="2025-01-13T20:07:19.657987456Z" level=info msg="Connect containerd service"
Jan 13 20:07:19.664379 containerd[1945]: time="2025-01-13T20:07:19.658069680Z" level=info msg="using legacy CRI server"
Jan 13 20:07:19.664379 containerd[1945]: time="2025-01-13T20:07:19.658088076Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:07:19.664379 containerd[1945]: time="2025-01-13T20:07:19.658364364Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:07:19.664379 containerd[1945]: time="2025-01-13T20:07:19.661221984Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:07:19.664379 containerd[1945]: time="2025-01-13T20:07:19.661483596Z" level=info msg="Start subscribing containerd event"
Jan 13 20:07:19.664379 containerd[1945]: time="2025-01-13T20:07:19.661565472Z" level=info msg="Start recovering state"
Jan 13 20:07:19.664379 containerd[1945]: time="2025-01-13T20:07:19.661723920Z" level=info msg="Start event monitor"
Jan 13 20:07:19.664379 containerd[1945]: time="2025-01-13T20:07:19.661750968Z" level=info msg="Start snapshots syncer"
Jan 13 20:07:19.664379 containerd[1945]: time="2025-01-13T20:07:19.661774392Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:07:19.664379 containerd[1945]: time="2025-01-13T20:07:19.661795308Z" level=info msg="Start streaming server"
Jan 13 20:07:19.661831 systemd[1]: Started polkit.service - Authorization Manager.
Jan 13 20:07:19.672833 coreos-metadata[2013]: Jan 13 20:07:19.666 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 13 20:07:19.668868 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:07:19.673450 containerd[1945]: time="2025-01-13T20:07:19.668450736Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:07:19.673450 containerd[1945]: time="2025-01-13T20:07:19.668580816Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:07:19.673450 containerd[1945]: time="2025-01-13T20:07:19.671746008Z" level=info msg="containerd successfully booted in 0.272684s"
Jan 13 20:07:19.667525 polkitd[2066]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 13 20:07:19.676443 coreos-metadata[2013]: Jan 13 20:07:19.673 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 13 20:07:19.677333 coreos-metadata[2013]: Jan 13 20:07:19.676 INFO Fetch successful
Jan 13 20:07:19.677333 coreos-metadata[2013]: Jan 13 20:07:19.676 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 13 20:07:19.683687 coreos-metadata[2013]: Jan 13 20:07:19.681 INFO Fetch successful
Jan 13 20:07:19.685808 unknown[2013]: wrote ssh authorized keys file for user: core
Jan 13 20:07:19.765488 systemd-hostnamed[1937]: Hostname set to <ip-172-31-17-103> (transient)
Jan 13 20:07:19.765705 systemd-resolved[1848]: System hostname changed to 'ip-172-31-17-103'.
Jan 13 20:07:19.782183 update-ssh-keys[2098]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:07:19.785547 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 20:07:19.792515 systemd[1]: Finished sshkeys.service.
Jan 13 20:07:19.879749 systemd-networkd[1847]: eth0: Gained IPv6LL
Jan 13 20:07:19.887147 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:07:19.891524 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:07:19.904088 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 13 20:07:19.920224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:07:19.930976 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:07:20.018802 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:07:20.066936 amazon-ssm-agent[2114]: Initializing new seelog logger
Jan 13 20:07:20.068044 amazon-ssm-agent[2114]: New Seelog Logger Creation Complete
Jan 13 20:07:20.068044 amazon-ssm-agent[2114]: 2025/01/13 20:07:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:07:20.068044 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:07:20.069699 amazon-ssm-agent[2114]: 2025/01/13 20:07:20 processing appconfig overrides
Jan 13 20:07:20.072685 amazon-ssm-agent[2114]: 2025/01/13 20:07:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:07:20.072685 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:07:20.072685 amazon-ssm-agent[2114]: 2025/01/13 20:07:20 processing appconfig overrides
Jan 13 20:07:20.072685 amazon-ssm-agent[2114]: 2025/01/13 20:07:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:07:20.072685 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:07:20.072685 amazon-ssm-agent[2114]: 2025/01/13 20:07:20 processing appconfig overrides
Jan 13 20:07:20.075072 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO Proxy environment variables:
Jan 13 20:07:20.078640 amazon-ssm-agent[2114]: 2025/01/13 20:07:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:07:20.083673 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 20:07:20.083673 amazon-ssm-agent[2114]: 2025/01/13 20:07:20 processing appconfig overrides
Jan 13 20:07:20.174524 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO https_proxy:
Jan 13 20:07:20.274946 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO http_proxy:
Jan 13 20:07:20.334855 sshd_keygen[1952]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:07:20.377052 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO no_proxy:
Jan 13 20:07:20.450338 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:07:20.462118 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:07:20.473333 systemd[1]: Started sshd@0-172.31.17.103:22-139.178.68.195:45842.service - OpenSSH per-connection server daemon (139.178.68.195:45842).
Jan 13 20:07:20.482470 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO Checking if agent identity type OnPrem can be assumed
Jan 13 20:07:20.526231 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:07:20.526627 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:07:20.543117 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:07:20.584359 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO Checking if agent identity type EC2 can be assumed
Jan 13 20:07:20.610962 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:07:20.627209 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:07:20.641187 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 20:07:20.643687 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:07:20.683052 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO Agent will take identity from EC2
Jan 13 20:07:20.708899 tar[1925]: linux-arm64/LICENSE
Jan 13 20:07:20.708899 tar[1925]: linux-arm64/README.md
Jan 13 20:07:20.745932 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 20:07:20.782705 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 13 20:07:20.843994 sshd[2141]: Accepted publickey for core from 139.178.68.195 port 45842 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:07:20.847360 sshd-session[2141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:07:20.864639 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:07:20.878008 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:07:20.883002 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 13 20:07:20.889205 systemd-logind[1920]: New session 1 of user core.
Jan 13 20:07:20.917870 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 13 20:07:20.917870 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jan 13 20:07:20.917870 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Jan 13 20:07:20.917870 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO [amazon-ssm-agent] Starting Core Agent
Jan 13 20:07:20.917870 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jan 13 20:07:20.917870 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO [Registrar] Starting registrar module
Jan 13 20:07:20.917870 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jan 13 20:07:20.917870 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO [EC2Identity] EC2 registration was successful.
Jan 13 20:07:20.917870 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO [CredentialRefresher] credentialRefresher has started
Jan 13 20:07:20.917870 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 13 20:07:20.917870 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 13 20:07:20.915488 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:07:20.928243 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:07:20.948158 (systemd)[2155]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:07:20.981791 amazon-ssm-agent[2114]: 2025-01-13 20:07:20 INFO [CredentialRefresher] Next credential rotation will be in 30.008321689633334 minutes
Jan 13 20:07:21.164866 systemd[2155]: Queued start job for default target default.target.
Jan 13 20:07:21.178446 systemd[2155]: Created slice app.slice - User Application Slice.
Jan 13 20:07:21.178517 systemd[2155]: Reached target paths.target - Paths.
Jan 13 20:07:21.178551 systemd[2155]: Reached target timers.target - Timers.
Jan 13 20:07:21.183919 systemd[2155]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:07:21.211583 systemd[2155]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:07:21.211756 systemd[2155]: Reached target sockets.target - Sockets.
Jan 13 20:07:21.211790 systemd[2155]: Reached target basic.target - Basic System.
Jan 13 20:07:21.211888 systemd[2155]: Reached target default.target - Main User Target.
Jan 13 20:07:21.211953 systemd[2155]: Startup finished in 251ms.
Jan 13 20:07:21.212527 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:07:21.222947 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:07:21.393167 systemd[1]: Started sshd@1-172.31.17.103:22-139.178.68.195:45844.service - OpenSSH per-connection server daemon (139.178.68.195:45844).
Jan 13 20:07:21.548419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:07:21.551632 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:07:21.557830 systemd[1]: Startup finished in 1.193s (kernel) + 9.051s (initrd) + 8.747s (userspace) = 18.992s.
Jan 13 20:07:21.579183 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:07:21.600811 sshd[2166]: Accepted publickey for core from 139.178.68.195 port 45844 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:07:21.602721 sshd-session[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:07:21.611865 systemd-logind[1920]: New session 2 of user core.
Jan 13 20:07:21.622946 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:07:21.754438 sshd[2178]: Connection closed by 139.178.68.195 port 45844
Jan 13 20:07:21.756522 sshd-session[2166]: pam_unix(sshd:session): session closed for user core
Jan 13 20:07:21.761368 systemd[1]: sshd@1-172.31.17.103:22-139.178.68.195:45844.service: Deactivated successfully.
Jan 13 20:07:21.765534 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:07:21.769047 systemd-logind[1920]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:07:21.771889 systemd-logind[1920]: Removed session 2.
Jan 13 20:07:21.793005 systemd[1]: Started sshd@2-172.31.17.103:22-139.178.68.195:45848.service - OpenSSH per-connection server daemon (139.178.68.195:45848).
Jan 13 20:07:21.944477 amazon-ssm-agent[2114]: 2025-01-13 20:07:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 13 20:07:21.989724 sshd[2187]: Accepted publickey for core from 139.178.68.195 port 45848 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:07:21.994621 sshd-session[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:07:22.007201 systemd-logind[1920]: New session 3 of user core.
Jan 13 20:07:22.014735 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:07:22.046084 amazon-ssm-agent[2114]: 2025-01-13 20:07:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2191) started
Jan 13 20:07:22.140116 sshd[2195]: Connection closed by 139.178.68.195 port 45848
Jan 13 20:07:22.140913 sshd-session[2187]: pam_unix(sshd:session): session closed for user core
Jan 13 20:07:22.147726 amazon-ssm-agent[2114]: 2025-01-13 20:07:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 13 20:07:22.150374 systemd[1]: sshd@2-172.31.17.103:22-139.178.68.195:45848.service: Deactivated successfully.
Jan 13 20:07:22.154967 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 20:07:22.160931 systemd-logind[1920]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:07:22.180165 systemd[1]: Started sshd@3-172.31.17.103:22-139.178.68.195:45850.service - OpenSSH per-connection server daemon (139.178.68.195:45850).
Jan 13 20:07:22.181625 systemd-logind[1920]: Removed session 3.
Jan 13 20:07:22.391843 sshd[2205]: Accepted publickey for core from 139.178.68.195 port 45850 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:07:22.394392 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:07:22.404800 systemd-logind[1920]: New session 4 of user core.
Jan 13 20:07:22.407308 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:07:22.481123 kubelet[2173]: E0113 20:07:22.481034 2173 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:07:22.485685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:07:22.486038 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:07:22.486754 systemd[1]: kubelet.service: Consumed 1.264s CPU time.
Jan 13 20:07:22.537121 sshd[2210]: Connection closed by 139.178.68.195 port 45850
Jan 13 20:07:22.537934 sshd-session[2205]: pam_unix(sshd:session): session closed for user core
Jan 13 20:07:22.543020 systemd-logind[1920]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:07:22.545383 systemd[1]: sshd@3-172.31.17.103:22-139.178.68.195:45850.service: Deactivated successfully.
Jan 13 20:07:22.549145 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:07:22.550746 systemd-logind[1920]: Removed session 4.
Jan 13 20:07:22.581149 systemd[1]: Started sshd@4-172.31.17.103:22-139.178.68.195:45856.service - OpenSSH per-connection server daemon (139.178.68.195:45856).
Jan 13 20:07:22.605862 ntpd[1914]: Listen normally on 7 eth0 [fe80::4d2:a6ff:fe6f:a195%2]:123
Jan 13 20:07:22.606393 ntpd[1914]: 13 Jan 20:07:22 ntpd[1914]: Listen normally on 7 eth0 [fe80::4d2:a6ff:fe6f:a195%2]:123
Jan 13 20:07:22.761907 sshd[2216]: Accepted publickey for core from 139.178.68.195 port 45856 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:07:22.764266 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:07:22.771739 systemd-logind[1920]: New session 5 of user core.
Jan 13 20:07:22.776900 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:07:22.912205 sudo[2219]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:07:22.913349 sudo[2219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:07:22.933242 sudo[2219]: pam_unix(sudo:session): session closed for user root
Jan 13 20:07:22.955775 sshd[2218]: Connection closed by 139.178.68.195 port 45856
Jan 13 20:07:22.956915 sshd-session[2216]: pam_unix(sshd:session): session closed for user core
Jan 13 20:07:22.962987 systemd[1]: sshd@4-172.31.17.103:22-139.178.68.195:45856.service: Deactivated successfully.
Jan 13 20:07:22.965888 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:07:22.968585 systemd-logind[1920]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:07:22.970866 systemd-logind[1920]: Removed session 5.
Jan 13 20:07:23.001418 systemd[1]: Started sshd@5-172.31.17.103:22-139.178.68.195:45868.service - OpenSSH per-connection server daemon (139.178.68.195:45868).
Jan 13 20:07:23.185274 sshd[2224]: Accepted publickey for core from 139.178.68.195 port 45868 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:07:23.187805 sshd-session[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:07:23.196310 systemd-logind[1920]: New session 6 of user core.
Jan 13 20:07:23.202925 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:07:23.308417 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:07:23.309070 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:07:23.315033 sudo[2228]: pam_unix(sudo:session): session closed for user root
Jan 13 20:07:23.324941 sudo[2227]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:07:23.325580 sudo[2227]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:07:23.350213 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:07:23.396032 augenrules[2250]: No rules
Jan 13 20:07:23.398642 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:07:23.399799 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:07:23.401886 sudo[2227]: pam_unix(sudo:session): session closed for user root
Jan 13 20:07:23.425953 sshd[2226]: Connection closed by 139.178.68.195 port 45868
Jan 13 20:07:23.426935 sshd-session[2224]: pam_unix(sshd:session): session closed for user core
Jan 13 20:07:23.433259 systemd[1]: sshd@5-172.31.17.103:22-139.178.68.195:45868.service: Deactivated successfully.
Jan 13 20:07:23.437588 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:07:23.439509 systemd-logind[1920]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:07:23.441159 systemd-logind[1920]: Removed session 6.
Jan 13 20:07:23.458182 systemd[1]: Started sshd@6-172.31.17.103:22-139.178.68.195:45882.service - OpenSSH per-connection server daemon (139.178.68.195:45882).
Jan 13 20:07:23.647191 sshd[2258]: Accepted publickey for core from 139.178.68.195 port 45882 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:07:23.649582 sshd-session[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:07:23.657745 systemd-logind[1920]: New session 7 of user core.
Jan 13 20:07:23.663932 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:07:23.765581 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:07:23.766381 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:07:24.480136 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 20:07:24.482299 (dockerd)[2279]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 20:07:24.928782 dockerd[2279]: time="2025-01-13T20:07:24.928317270Z" level=info msg="Starting up"
Jan 13 20:07:25.231358 systemd[1]: var-lib-docker-metacopy\x2dcheck467838421-merged.mount: Deactivated successfully.
Jan 13 20:07:25.244400 dockerd[2279]: time="2025-01-13T20:07:25.244320784Z" level=info msg="Loading containers: start."
Jan 13 20:07:25.545859 kernel: Initializing XFRM netlink socket
Jan 13 20:07:25.099879 systemd-resolved[1848]: Clock change detected. Flushing caches.
Jan 13 20:07:25.106881 systemd-journald[1483]: Time jumped backwards, rotating.
Jan 13 20:07:25.129349 (udev-worker)[2387]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:07:25.231499 systemd-networkd[1847]: docker0: Link UP
Jan 13 20:07:25.275419 dockerd[2279]: time="2025-01-13T20:07:25.275347228Z" level=info msg="Loading containers: done."
Jan 13 20:07:25.300184 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2612168187-merged.mount: Deactivated successfully.
Jan 13 20:07:25.301947 dockerd[2279]: time="2025-01-13T20:07:25.300361336Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 20:07:25.301947 dockerd[2279]: time="2025-01-13T20:07:25.300513628Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Jan 13 20:07:25.301947 dockerd[2279]: time="2025-01-13T20:07:25.300745348Z" level=info msg="Daemon has completed initialization"
Jan 13 20:07:25.361438 dockerd[2279]: time="2025-01-13T20:07:25.361085201Z" level=info msg="API listen on /run/docker.sock"
Jan 13 20:07:25.361963 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 20:07:26.440622 containerd[1945]: time="2025-01-13T20:07:26.440482890Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Jan 13 20:07:27.106705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1497307355.mount: Deactivated successfully.
Jan 13 20:07:28.522112 containerd[1945]: time="2025-01-13T20:07:28.522033536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:28.524581 containerd[1945]: time="2025-01-13T20:07:28.524459181Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615585"
Jan 13 20:07:28.526029 containerd[1945]: time="2025-01-13T20:07:28.525918393Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:28.532545 containerd[1945]: time="2025-01-13T20:07:28.532460157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:28.535928 containerd[1945]: time="2025-01-13T20:07:28.535192617Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 2.094612323s"
Jan 13 20:07:28.535928 containerd[1945]: time="2025-01-13T20:07:28.535267581Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\""
Jan 13 20:07:28.536426 containerd[1945]: time="2025-01-13T20:07:28.536360169Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Jan 13 20:07:29.993021 containerd[1945]: time="2025-01-13T20:07:29.992956044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:29.995172 containerd[1945]: time="2025-01-13T20:07:29.995103000Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470096"
Jan 13 20:07:29.997093 containerd[1945]: time="2025-01-13T20:07:29.997022448Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:30.009011 containerd[1945]: time="2025-01-13T20:07:30.008905364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:30.011267 containerd[1945]: time="2025-01-13T20:07:30.011076824Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 1.474649299s"
Jan 13 20:07:30.011267 containerd[1945]: time="2025-01-13T20:07:30.011132552Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\""
Jan 13 20:07:30.012145 containerd[1945]: time="2025-01-13T20:07:30.012081716Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Jan 13 20:07:31.205864 containerd[1945]: time="2025-01-13T20:07:31.205788118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:31.207860 containerd[1945]: time="2025-01-13T20:07:31.207792022Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024202"
Jan 13 20:07:31.208763 containerd[1945]: time="2025-01-13T20:07:31.208382218Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:31.213885 containerd[1945]: time="2025-01-13T20:07:31.213785146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:31.216284 containerd[1945]: time="2025-01-13T20:07:31.216088546Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.203946578s"
Jan 13 20:07:31.216284 containerd[1945]: time="2025-01-13T20:07:31.216143782Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\""
Jan 13 20:07:31.217051 containerd[1945]: time="2025-01-13T20:07:31.216986194Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Jan 13 20:07:32.074671 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:07:32.085233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:07:32.480043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:07:32.494267 (kubelet)[2543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:07:32.599539 kubelet[2543]: E0113 20:07:32.599472 2543 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:07:32.607317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:07:32.607660 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:07:32.762529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1048862582.mount: Deactivated successfully.
Jan 13 20:07:33.301784 containerd[1945]: time="2025-01-13T20:07:33.301697136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:33.304113 containerd[1945]: time="2025-01-13T20:07:33.304047888Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771426"
Jan 13 20:07:33.305844 containerd[1945]: time="2025-01-13T20:07:33.305769732Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:33.310595 containerd[1945]: time="2025-01-13T20:07:33.310496736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:33.312080 containerd[1945]: time="2025-01-13T20:07:33.311831196Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 2.094779074s"
Jan 13 20:07:33.312080 containerd[1945]: time="2025-01-13T20:07:33.311894532Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\""
Jan 13 20:07:33.312556 containerd[1945]: time="2025-01-13T20:07:33.312436920Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:07:33.848376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1378613838.mount: Deactivated successfully.
Jan 13 20:07:34.966842 containerd[1945]: time="2025-01-13T20:07:34.965203397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:34.968536 containerd[1945]: time="2025-01-13T20:07:34.968435369Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Jan 13 20:07:34.970904 containerd[1945]: time="2025-01-13T20:07:34.970806689Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:34.976942 containerd[1945]: time="2025-01-13T20:07:34.976843541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:34.979549 containerd[1945]: time="2025-01-13T20:07:34.979291625Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.666805817s"
Jan 13 20:07:34.979549 containerd[1945]: time="2025-01-13T20:07:34.979364273Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 13 20:07:34.980529 containerd[1945]: time="2025-01-13T20:07:34.980454017Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 13 20:07:35.561933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2371950090.mount: Deactivated successfully.
Jan 13 20:07:35.575196 containerd[1945]: time="2025-01-13T20:07:35.574800592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:35.576041 containerd[1945]: time="2025-01-13T20:07:35.575865868Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jan 13 20:07:35.577191 containerd[1945]: time="2025-01-13T20:07:35.577066804Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:35.585440 containerd[1945]: time="2025-01-13T20:07:35.585350968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:35.589781 containerd[1945]: time="2025-01-13T20:07:35.589545988Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 609.017067ms"
Jan 13 20:07:35.591129 containerd[1945]: time="2025-01-13T20:07:35.591061732Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 13 20:07:35.592351 containerd[1945]: time="2025-01-13T20:07:35.592294924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 13 20:07:36.133548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1492307187.mount: Deactivated successfully.
Jan 13 20:07:38.298937 containerd[1945]: time="2025-01-13T20:07:38.298845029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:38.310351 containerd[1945]: time="2025-01-13T20:07:38.310249433Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425"
Jan 13 20:07:38.324317 containerd[1945]: time="2025-01-13T20:07:38.324230297Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:38.356023 containerd[1945]: time="2025-01-13T20:07:38.355914929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:07:38.358832 containerd[1945]: time="2025-01-13T20:07:38.358781153Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.766426709s"
Jan 13 20:07:38.359567 containerd[1945]: time="2025-01-13T20:07:38.359001041Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jan 13 20:07:42.825661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 20:07:42.834888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:07:43.144074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:07:43.146621 (kubelet)[2683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:07:43.228977 kubelet[2683]: E0113 20:07:43.228914 2683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:07:43.233398 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:07:43.233922 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:07:47.156013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:07:47.168426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:07:47.233284 systemd[1]: Reloading requested from client PID 2697 ('systemctl') (unit session-7.scope)...
Jan 13 20:07:47.233335 systemd[1]: Reloading...
Jan 13 20:07:47.483775 zram_generator::config[2741]: No configuration found.
Jan 13 20:07:47.741582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:07:47.919204 systemd[1]: Reloading finished in 684 ms.
Jan 13 20:07:48.012695 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 20:07:48.012953 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 20:07:48.013590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:07:48.020563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:07:48.313765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:07:48.328262 (kubelet)[2801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:07:48.397301 kubelet[2801]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:07:48.397301 kubelet[2801]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:07:48.397301 kubelet[2801]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:07:48.397872 kubelet[2801]: I0113 20:07:48.397444 2801 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:07:49.050835 kubelet[2801]: I0113 20:07:49.050509 2801 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 13 20:07:49.050835 kubelet[2801]: I0113 20:07:49.050553 2801 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:07:49.051162 kubelet[2801]: I0113 20:07:49.051120 2801 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 13 20:07:49.107984 kubelet[2801]: E0113 20:07:49.107930 2801 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.103:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:07:49.109617 kubelet[2801]: I0113 20:07:49.109356 2801 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:07:49.120182 kubelet[2801]: E0113 20:07:49.119969 2801 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 13 20:07:49.120182 kubelet[2801]: I0113 20:07:49.120020 2801 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 13 20:07:49.129950 kubelet[2801]: I0113 20:07:49.129913 2801 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:07:49.131448 kubelet[2801]: I0113 20:07:49.131340 2801 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 13 20:07:49.132790 kubelet[2801]: I0113 20:07:49.131880 2801 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:07:49.132790 kubelet[2801]: I0113 20:07:49.131927 2801 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-103","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 13 20:07:49.132790 kubelet[2801]: I0113 20:07:49.132281 2801 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:07:49.132790 kubelet[2801]: I0113 20:07:49.132300 2801 container_manager_linux.go:300] "Creating device plugin manager"
Jan 13 20:07:49.133131 kubelet[2801]: I0113 20:07:49.132490 2801 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:07:49.136886 kubelet[2801]: I0113 20:07:49.136851 2801 kubelet.go:408] "Attempting to sync node with API server"
Jan 13 20:07:49.137034 kubelet[2801]: I0113 20:07:49.137014 2801 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:07:49.137166 kubelet[2801]: I0113 20:07:49.137147 2801 kubelet.go:314] "Adding apiserver pod source"
Jan 13 20:07:49.137283 kubelet[2801]: I0113 20:07:49.137263 2801 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:07:49.140825 kubelet[2801]: W0113 20:07:49.140697 2801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-103&limit=500&resourceVersion=0": dial tcp 172.31.17.103:6443: connect: connection refused
Jan 13 20:07:49.140974 kubelet[2801]: E0113 20:07:49.140870 2801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-103&limit=500&resourceVersion=0\": dial tcp 172.31.17.103:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:07:49.142330 kubelet[2801]: W0113 20:07:49.141500 2801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.103:6443: connect: connection refused
Jan 13 20:07:49.142330 kubelet[2801]: E0113 20:07:49.141587 2801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.103:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:07:49.142330 kubelet[2801]: I0113 20:07:49.142019 2801 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:07:49.145848 kubelet[2801]: I0113 20:07:49.145643 2801 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:07:49.149191 kubelet[2801]: W0113 20:07:49.148335 2801 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:07:49.151068 kubelet[2801]: I0113 20:07:49.151029 2801 server.go:1269] "Started kubelet"
Jan 13 20:07:49.157098 kubelet[2801]: I0113 20:07:49.156152 2801 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:07:49.159257 kubelet[2801]: I0113 20:07:49.159219 2801 server.go:460] "Adding debug handlers to kubelet server"
Jan 13 20:07:49.163098 kubelet[2801]: I0113 20:07:49.162365 2801 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:07:49.163098 kubelet[2801]: I0113 20:07:49.162984 2801 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:07:49.173755 kubelet[2801]: E0113 20:07:49.167053 2801 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.103:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.103:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-103.181a595e9a0b8a47 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-103,UID:ip-172-31-17-103,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-103,},FirstTimestamp:2025-01-13 20:07:49.150992967 +0000 UTC m=+0.816812369,LastTimestamp:2025-01-13 20:07:49.150992967 +0000 UTC m=+0.816812369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-103,}"
Jan 13 20:07:49.174146 kubelet[2801]: I0113 20:07:49.174116 2801 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:07:49.180205 kubelet[2801]: I0113 20:07:49.180172 2801 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 13 20:07:49.181461 kubelet[2801]: E0113 20:07:49.181419 2801 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-103\" not found"
Jan 13 20:07:49.182018
kubelet[2801]: I0113 20:07:49.181970 2801 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:07:49.186911 kubelet[2801]: E0113 20:07:49.186769 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-103?timeout=10s\": dial tcp 172.31.17.103:6443: connect: connection refused" interval="200ms" Jan 13 20:07:49.189969 kubelet[2801]: I0113 20:07:49.187656 2801 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:07:49.189969 kubelet[2801]: I0113 20:07:49.187887 2801 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:07:49.195798 kubelet[2801]: I0113 20:07:49.194276 2801 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:07:49.195798 kubelet[2801]: I0113 20:07:49.194361 2801 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:07:49.196550 kubelet[2801]: W0113 20:07:49.196477 2801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.103:6443: connect: connection refused Jan 13 20:07:49.196851 kubelet[2801]: E0113 20:07:49.196813 2801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.103:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:07:49.197272 kubelet[2801]: I0113 20:07:49.197245 2801 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:07:49.237334 kubelet[2801]: I0113 20:07:49.237275 2801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:07:49.240031 kubelet[2801]: I0113 20:07:49.239987 2801 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:07:49.240251 kubelet[2801]: I0113 20:07:49.240231 2801 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:07:49.240393 kubelet[2801]: I0113 20:07:49.240375 2801 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:07:49.240629 kubelet[2801]: E0113 20:07:49.240598 2801 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:07:49.251357 kubelet[2801]: W0113 20:07:49.251256 2801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.103:6443: connect: connection refused Jan 13 20:07:49.251576 kubelet[2801]: E0113 20:07:49.251382 2801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.103:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:07:49.263239 kubelet[2801]: I0113 20:07:49.263205 2801 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:07:49.263551 kubelet[2801]: I0113 20:07:49.263528 2801 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:07:49.263728 kubelet[2801]: I0113 20:07:49.263689 2801 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:07:49.267587 kubelet[2801]: I0113 20:07:49.267092 2801 policy_none.go:49] "None policy: Start" Jan 13 20:07:49.268404 kubelet[2801]: I0113 20:07:49.268375 2801 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:07:49.268778 kubelet[2801]: I0113 20:07:49.268705 2801 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:07:49.280639 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:07:49.281904 kubelet[2801]: E0113 20:07:49.281805 2801 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-103\" not found" Jan 13 20:07:49.297400 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:07:49.304817 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 20:07:49.315498 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 20:07:49.322393 kubelet[2801]: I0113 20:07:49.322342 2801 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:07:49.323215 kubelet[2801]: I0113 20:07:49.322647 2801 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:07:49.323215 kubelet[2801]: I0113 20:07:49.322682 2801 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:07:49.323215 kubelet[2801]: I0113 20:07:49.323033 2801 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:07:49.328164 kubelet[2801]: E0113 20:07:49.328108 2801 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-103\" not found" Jan 13 20:07:49.360345 systemd[1]: Created slice kubepods-burstable-pod16bff66dd6fa20d452dc062dbe32f215.slice - libcontainer container kubepods-burstable-pod16bff66dd6fa20d452dc062dbe32f215.slice. Jan 13 20:07:49.377496 systemd[1]: Created slice kubepods-burstable-pode45072f4307f412d61a3024fc149a147.slice - libcontainer container kubepods-burstable-pode45072f4307f412d61a3024fc149a147.slice. Jan 13 20:07:49.387985 kubelet[2801]: E0113 20:07:49.387926 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-103?timeout=10s\": dial tcp 172.31.17.103:6443: connect: connection refused" interval="400ms" Jan 13 20:07:49.393841 systemd[1]: Created slice kubepods-burstable-pod6becfcf085572cf746d147b06dda5c4f.slice - libcontainer container kubepods-burstable-pod6becfcf085572cf746d147b06dda5c4f.slice. Jan 13 20:07:49.395242 kubelet[2801]: I0113 20:07:49.395142 2801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e45072f4307f412d61a3024fc149a147-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-103\" (UID: \"e45072f4307f412d61a3024fc149a147\") " pod="kube-system/kube-controller-manager-ip-172-31-17-103" Jan 13 20:07:49.396297 kubelet[2801]: I0113 20:07:49.395859 2801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6becfcf085572cf746d147b06dda5c4f-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-103\" (UID: \"6becfcf085572cf746d147b06dda5c4f\") " pod="kube-system/kube-scheduler-ip-172-31-17-103" Jan 13 20:07:49.396297 kubelet[2801]: I0113 20:07:49.395929 2801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e45072f4307f412d61a3024fc149a147-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-103\" (UID: \"e45072f4307f412d61a3024fc149a147\") " pod="kube-system/kube-controller-manager-ip-172-31-17-103" Jan 13 20:07:49.396297 kubelet[2801]: I0113 20:07:49.395967 2801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16bff66dd6fa20d452dc062dbe32f215-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-103\" (UID: \"16bff66dd6fa20d452dc062dbe32f215\") " pod="kube-system/kube-apiserver-ip-172-31-17-103" Jan 13 20:07:49.396297 kubelet[2801]: I0113 20:07:49.396012 2801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16bff66dd6fa20d452dc062dbe32f215-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-103\" (UID: \"16bff66dd6fa20d452dc062dbe32f215\") " pod="kube-system/kube-apiserver-ip-172-31-17-103" Jan 13 20:07:49.396297 kubelet[2801]: I0113 20:07:49.396047 2801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e45072f4307f412d61a3024fc149a147-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-103\" (UID: \"e45072f4307f412d61a3024fc149a147\") " pod="kube-system/kube-controller-manager-ip-172-31-17-103" Jan 13 20:07:49.396622 kubelet[2801]: I0113 20:07:49.396084 2801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e45072f4307f412d61a3024fc149a147-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-103\" (UID: \"e45072f4307f412d61a3024fc149a147\") " pod="kube-system/kube-controller-manager-ip-172-31-17-103" Jan 13 20:07:49.396622 kubelet[2801]: I0113 20:07:49.396118 2801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e45072f4307f412d61a3024fc149a147-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-103\" (UID: \"e45072f4307f412d61a3024fc149a147\") " pod="kube-system/kube-controller-manager-ip-172-31-17-103" Jan 13 20:07:49.396622 kubelet[2801]: I0113 20:07:49.396167 2801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16bff66dd6fa20d452dc062dbe32f215-ca-certs\") pod \"kube-apiserver-ip-172-31-17-103\" (UID: \"16bff66dd6fa20d452dc062dbe32f215\") " pod="kube-system/kube-apiserver-ip-172-31-17-103" Jan 13 20:07:49.425921 kubelet[2801]: I0113 20:07:49.425882 2801 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-103" Jan 13 20:07:49.427170 kubelet[2801]: E0113 20:07:49.427108 2801 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.103:6443/api/v1/nodes\": dial tcp 172.31.17.103:6443: connect: connection refused" node="ip-172-31-17-103" Jan 13 20:07:49.629933 kubelet[2801]: I0113 20:07:49.629874 2801 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-103" Jan 13 20:07:49.630432 kubelet[2801]: E0113 20:07:49.630380 2801 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.103:6443/api/v1/nodes\": dial tcp 172.31.17.103:6443: connect: connection refused" node="ip-172-31-17-103" Jan 13 20:07:49.674421 containerd[1945]: time="2025-01-13T20:07:49.674258130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-103,Uid:16bff66dd6fa20d452dc062dbe32f215,Namespace:kube-system,Attempt:0,}" Jan 13 20:07:49.683102 containerd[1945]: time="2025-01-13T20:07:49.683047830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-103,Uid:e45072f4307f412d61a3024fc149a147,Namespace:kube-system,Attempt:0,}" Jan 13 20:07:49.700183 containerd[1945]: time="2025-01-13T20:07:49.700128798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-103,Uid:6becfcf085572cf746d147b06dda5c4f,Namespace:kube-system,Attempt:0,}" Jan 13 20:07:49.788633 kubelet[2801]: E0113 20:07:49.788573 2801 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-103?timeout=10s\": dial tcp 172.31.17.103:6443: connect: connection refused" interval="800ms" Jan 13 20:07:50.032868 kubelet[2801]: I0113 20:07:50.032748 2801 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-103" Jan 13 20:07:50.033421 kubelet[2801]: E0113 20:07:50.033340 2801 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.103:6443/api/v1/nodes\": dial tcp 172.31.17.103:6443: connect: connection refused" node="ip-172-31-17-103" Jan 13 20:07:50.094129 kubelet[2801]: W0113 20:07:50.094071 2801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.103:6443: connect: connection refused Jan 13 20:07:50.094248 kubelet[2801]: E0113 20:07:50.094146 2801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.103:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:07:50.159011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405664483.mount: Deactivated successfully. Jan 13 20:07:50.166825 containerd[1945]: time="2025-01-13T20:07:50.166759564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:50.170635 containerd[1945]: time="2025-01-13T20:07:50.169113412Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:50.172766 containerd[1945]: time="2025-01-13T20:07:50.172666720Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 13 20:07:50.173753 containerd[1945]: time="2025-01-13T20:07:50.173666332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:07:50.176280 containerd[1945]: time="2025-01-13T20:07:50.176215696Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:50.178968 containerd[1945]: time="2025-01-13T20:07:50.178692676Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:07:50.178968 containerd[1945]: time="2025-01-13T20:07:50.178858864Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:50.184359 containerd[1945]: time="2025-01-13T20:07:50.184308136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:07:50.187555 containerd[1945]: 
time="2025-01-13T20:07:50.187507048Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 504.352538ms" Jan 13 20:07:50.197524 containerd[1945]: time="2025-01-13T20:07:50.196676008Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 522.273758ms" Jan 13 20:07:50.201550 containerd[1945]: time="2025-01-13T20:07:50.201478156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 501.230594ms" Jan 13 20:07:50.205879 kubelet[2801]: W0113 20:07:50.205825 2801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.103:6443: connect: connection refused Jan 13 20:07:50.206249 kubelet[2801]: E0113 20:07:50.206181 2801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.103:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:07:50.300125 kubelet[2801]: W0113 20:07:50.299873 2801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.103:6443: connect: connection refused Jan 13 20:07:50.300125 kubelet[2801]: E0113 20:07:50.299975 2801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.103:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:07:50.361241 containerd[1945]: time="2025-01-13T20:07:50.360907601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:07:50.365123 containerd[1945]: time="2025-01-13T20:07:50.363493625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:07:50.365622 containerd[1945]: time="2025-01-13T20:07:50.365314409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:50.365622 containerd[1945]: time="2025-01-13T20:07:50.365505053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:50.367693 containerd[1945]: time="2025-01-13T20:07:50.367110269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:07:50.367693 containerd[1945]: time="2025-01-13T20:07:50.367297001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:07:50.368391 containerd[1945]: time="2025-01-13T20:07:50.368129825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:07:50.368391 containerd[1945]: time="2025-01-13T20:07:50.368236133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:07:50.368391 containerd[1945]: time="2025-01-13T20:07:50.368272925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:50.369304 containerd[1945]: time="2025-01-13T20:07:50.369181709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:50.372928 containerd[1945]: time="2025-01-13T20:07:50.367364873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:50.373108 containerd[1945]: time="2025-01-13T20:07:50.373046201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:07:50.416013 systemd[1]: Started cri-containerd-0aa26196db0c93faa12a79a354c0aaf51726fb3dee3b192abeef3359e74539c9.scope - libcontainer container 0aa26196db0c93faa12a79a354c0aaf51726fb3dee3b192abeef3359e74539c9. Jan 13 20:07:50.419855 systemd[1]: Started cri-containerd-b373230340c9022df89211c63a68ff9443f1379796c4e716e3edbf9acea6552c.scope - libcontainer container b373230340c9022df89211c63a68ff9443f1379796c4e716e3edbf9acea6552c. Jan 13 20:07:50.440105 systemd[1]: Started cri-containerd-66644a51cff52db31a99770dc11e278bc3fb342513374d830c5c32d7e3ccfa8c.scope - libcontainer container 66644a51cff52db31a99770dc11e278bc3fb342513374d830c5c32d7e3ccfa8c. 
Jan 13 20:07:50.476346 kubelet[2801]: W0113 20:07:50.476046 2801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-103&limit=500&resourceVersion=0": dial tcp 172.31.17.103:6443: connect: connection refused Jan 13 20:07:50.478500 kubelet[2801]: E0113 20:07:50.477580 2801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-103&limit=500&resourceVersion=0\": dial tcp 172.31.17.103:6443: connect: connection refused" logger="UnhandledError" Jan 13 20:07:50.560009 containerd[1945]: time="2025-01-13T20:07:50.559862202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-103,Uid:16bff66dd6fa20d452dc062dbe32f215,Namespace:kube-system,Attempt:0,} returns sandbox id \"0aa26196db0c93faa12a79a354c0aaf51726fb3dee3b192abeef3359e74539c9\"" Jan 13 20:07:50.566895 containerd[1945]: time="2025-01-13T20:07:50.566814414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-103,Uid:6becfcf085572cf746d147b06dda5c4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b373230340c9022df89211c63a68ff9443f1379796c4e716e3edbf9acea6552c\"" Jan 13 20:07:50.573001 containerd[1945]: time="2025-01-13T20:07:50.572894934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-103,Uid:e45072f4307f412d61a3024fc149a147,Namespace:kube-system,Attempt:0,} returns sandbox id \"66644a51cff52db31a99770dc11e278bc3fb342513374d830c5c32d7e3ccfa8c\"" Jan 13 20:07:50.574546 containerd[1945]: time="2025-01-13T20:07:50.574434234Z" level=info msg="CreateContainer within sandbox \"0aa26196db0c93faa12a79a354c0aaf51726fb3dee3b192abeef3359e74539c9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:07:50.578140 containerd[1945]: time="2025-01-13T20:07:50.575996514Z" level=info msg="CreateContainer within sandbox \"b373230340c9022df89211c63a68ff9443f1379796c4e716e3edbf9acea6552c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:07:50.580135 containerd[1945]: time="2025-01-13T20:07:50.580081038Z" level=info msg="CreateContainer within sandbox \"66644a51cff52db31a99770dc11e278bc3fb342513374d830c5c32d7e3ccfa8c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:07:50.589734 kubelet[2801]: E0113 20:07:50.589660 2801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-103?timeout=10s\": dial tcp 172.31.17.103:6443: connect: connection refused" interval="1.6s" Jan 13 20:07:50.600869 containerd[1945]: time="2025-01-13T20:07:50.600804222Z" level=info msg="CreateContainer within sandbox \"0aa26196db0c93faa12a79a354c0aaf51726fb3dee3b192abeef3359e74539c9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"671124499b0c4a4201c2cfd50619646a28057d564400d356dd3a703a749fcc63\"" Jan 13 20:07:50.601760 containerd[1945]: time="2025-01-13T20:07:50.601675218Z" level=info msg="StartContainer for \"671124499b0c4a4201c2cfd50619646a28057d564400d356dd3a703a749fcc63\"" Jan 13 20:07:50.605535 containerd[1945]: time="2025-01-13T20:07:50.605241318Z" level=info msg="CreateContainer within sandbox 
\"b373230340c9022df89211c63a68ff9443f1379796c4e716e3edbf9acea6552c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"68489e8e6b0357f97e5b7b64b4118d529a4c3818a3917b097d9372b7acf07f03\"" Jan 13 20:07:50.606993 containerd[1945]: time="2025-01-13T20:07:50.606927978Z" level=info msg="StartContainer for \"68489e8e6b0357f97e5b7b64b4118d529a4c3818a3917b097d9372b7acf07f03\"" Jan 13 20:07:50.609108 containerd[1945]: time="2025-01-13T20:07:50.609043506Z" level=info msg="CreateContainer within sandbox \"66644a51cff52db31a99770dc11e278bc3fb342513374d830c5c32d7e3ccfa8c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4c44fc89694fec028d1a5fd046253d697f24ce5c3785d1da96bbc0a92edcc9c1\"" Jan 13 20:07:50.609955 containerd[1945]: time="2025-01-13T20:07:50.609781650Z" level=info msg="StartContainer for \"4c44fc89694fec028d1a5fd046253d697f24ce5c3785d1da96bbc0a92edcc9c1\"" Jan 13 20:07:50.673475 systemd[1]: Started cri-containerd-671124499b0c4a4201c2cfd50619646a28057d564400d356dd3a703a749fcc63.scope - libcontainer container 671124499b0c4a4201c2cfd50619646a28057d564400d356dd3a703a749fcc63. Jan 13 20:07:50.690060 systemd[1]: Started cri-containerd-68489e8e6b0357f97e5b7b64b4118d529a4c3818a3917b097d9372b7acf07f03.scope - libcontainer container 68489e8e6b0357f97e5b7b64b4118d529a4c3818a3917b097d9372b7acf07f03. Jan 13 20:07:50.711002 systemd[1]: Started cri-containerd-4c44fc89694fec028d1a5fd046253d697f24ce5c3785d1da96bbc0a92edcc9c1.scope - libcontainer container 4c44fc89694fec028d1a5fd046253d697f24ce5c3785d1da96bbc0a92edcc9c1. Jan 13 20:07:50.816908 containerd[1945]: time="2025-01-13T20:07:50.816548467Z" level=info msg="StartContainer for \"671124499b0c4a4201c2cfd50619646a28057d564400d356dd3a703a749fcc63\" returns successfully" Jan 13 20:07:50.830106 containerd[1945]: time="2025-01-13T20:07:50.828828847Z" level=info msg="StartContainer for \"68489e8e6b0357f97e5b7b64b4118d529a4c3818a3917b097d9372b7acf07f03\" returns successfully" Jan 13 20:07:50.837137 kubelet[2801]: I0113 20:07:50.837076 2801 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-103" Jan 13 20:07:50.837589 kubelet[2801]: E0113 20:07:50.837530 2801 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.103:6443/api/v1/nodes\": dial tcp 172.31.17.103:6443: connect: connection refused" node="ip-172-31-17-103" Jan 13 20:07:50.853138 containerd[1945]: time="2025-01-13T20:07:50.853061815Z" level=info msg="StartContainer for \"4c44fc89694fec028d1a5fd046253d697f24ce5c3785d1da96bbc0a92edcc9c1\" returns successfully" Jan 13 20:07:52.440624 kubelet[2801]: I0113 20:07:52.440556 2801 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-103" Jan 13 20:07:54.823370 kubelet[2801]: E0113 20:07:54.823298 2801 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-103\" not found" node="ip-172-31-17-103" Jan 13 20:07:55.055792 kubelet[2801]: I0113 20:07:55.055737 2801 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-103" Jan 13 20:07:55.144633 kubelet[2801]: I0113 20:07:55.144305 2801 apiserver.go:52] "Watching apiserver" Jan 13 20:07:55.194796 kubelet[2801]: I0113 20:07:55.194748 2801 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:07:57.318880 systemd[1]: Reloading requested from client PID 3083 ('systemctl') (unit session-7.scope)... 
Jan 13 20:07:57.319346 systemd[1]: Reloading... Jan 13 20:07:57.496767 zram_generator::config[3123]: No configuration found. Jan 13 20:07:57.775646 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:07:57.985127 systemd[1]: Reloading finished in 664 ms. Jan 13 20:07:58.063281 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:58.090947 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:07:58.092071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:58.092154 systemd[1]: kubelet.service: Consumed 1.517s CPU time, 117.9M memory peak, 0B memory swap peak. Jan 13 20:07:58.109070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:07:58.412620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:07:58.435368 (kubelet)[3183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:07:58.527778 kubelet[3183]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:07:58.527778 kubelet[3183]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:07:58.527778 kubelet[3183]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:07:58.527778 kubelet[3183]: I0113 20:07:58.527414 3183 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:07:58.540952 kubelet[3183]: I0113 20:07:58.540904 3183 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:07:58.541798 kubelet[3183]: I0113 20:07:58.541144 3183 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:07:58.541798 kubelet[3183]: I0113 20:07:58.541587 3183 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:07:58.553964 kubelet[3183]: I0113 20:07:58.553915 3183 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:07:58.560508 kubelet[3183]: I0113 20:07:58.560460 3183 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:07:58.569672 kubelet[3183]: E0113 20:07:58.569618 3183 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:07:58.570773 kubelet[3183]: I0113 20:07:58.569933 3183 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:07:58.577549 kubelet[3183]: I0113 20:07:58.577445 3183 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:07:58.577856 kubelet[3183]: I0113 20:07:58.577791 3183 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:07:58.578209 kubelet[3183]: I0113 20:07:58.578119 3183 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:07:58.579011 kubelet[3183]: I0113 20:07:58.578614 3183 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-103","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:07:58.579271 kubelet[3183]: I0113 20:07:58.579028 3183 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:07:58.579271 kubelet[3183]: I0113 20:07:58.579054 3183 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:07:58.579271 kubelet[3183]: I0113 20:07:58.579120 3183 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:07:58.580578 kubelet[3183]: I0113 20:07:58.579363 3183 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:07:58.580578 kubelet[3183]: I0113 20:07:58.579417 3183 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:07:58.580578 kubelet[3183]: I0113 20:07:58.579529 3183 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:07:58.580578 kubelet[3183]: I0113 20:07:58.579558 3183 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:07:58.587827 kubelet[3183]: I0113 20:07:58.586628 3183 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:07:58.587827 kubelet[3183]: I0113 20:07:58.587628 3183 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:07:58.589887 kubelet[3183]: I0113 20:07:58.589616 3183 server.go:1269] "Started kubelet" Jan 13 20:07:58.600263 kubelet[3183]: I0113 20:07:58.600114 3183 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:07:58.607641 sudo[3196]: root : 
PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 20:07:58.610471 sudo[3196]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 20:07:58.617784 kubelet[3183]: I0113 20:07:58.617691 3183 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:07:58.630987 kubelet[3183]: I0113 20:07:58.630897 3183 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:07:58.698785 kubelet[3183]: I0113 20:07:58.697154 3183 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:07:58.702848 kubelet[3183]: I0113 20:07:58.702001 3183 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:07:58.709256 kubelet[3183]: I0113 20:07:58.631156 3183 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:07:58.709475 kubelet[3183]: E0113 20:07:58.631411 3183 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-103\" not found" Jan 13 20:07:58.709760 kubelet[3183]: I0113 20:07:58.669148 3183 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:07:58.716801 kubelet[3183]: I0113 20:07:58.625103 3183 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:07:58.717858 kubelet[3183]: I0113 20:07:58.620945 3183 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:07:58.717858 kubelet[3183]: I0113 20:07:58.717576 3183 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:07:58.717858 kubelet[3183]: I0113 20:07:58.701458 3183 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:07:58.721448 kubelet[3183]: I0113 20:07:58.721386 3183 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:07:58.727328 kubelet[3183]: I0113 20:07:58.727277 3183 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:07:58.727779 kubelet[3183]: I0113 20:07:58.727503 3183 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:07:58.727779 kubelet[3183]: I0113 20:07:58.727546 3183 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:07:58.727779 kubelet[3183]: E0113 20:07:58.727625 3183 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:07:58.740112 kubelet[3183]: I0113 20:07:58.740063 3183 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:07:58.762974 kubelet[3183]: E0113 20:07:58.762929 3183 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:07:58.827804 kubelet[3183]: E0113 20:07:58.827705 3183 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:07:58.881080 kubelet[3183]: I0113 20:07:58.881043 3183 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:07:58.881517 kubelet[3183]: I0113 20:07:58.881484 3183 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:07:58.881683 kubelet[3183]: I0113 20:07:58.881661 3183 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:07:58.882296 kubelet[3183]: I0113 20:07:58.882140 3183 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:07:58.882296 kubelet[3183]: I0113 20:07:58.882177 3183 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:07:58.882296 kubelet[3183]: I0113 20:07:58.882213 3183 policy_none.go:49] "None policy: Start" Jan 13 20:07:58.885915 kubelet[3183]: I0113 20:07:58.884364 3183 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:07:58.885915 kubelet[3183]: I0113 20:07:58.884410 3183 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:07:58.885915 kubelet[3183]: I0113 20:07:58.884818 3183 state_mem.go:75] "Updated machine memory state" Jan 13 20:07:58.904957 kubelet[3183]: I0113 20:07:58.903939 3183 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:07:58.904957 kubelet[3183]: I0113 20:07:58.904949 3183 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:07:58.905150 kubelet[3183]: I0113 20:07:58.904974 3183 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:07:58.906276 kubelet[3183]: I0113 20:07:58.905694 3183 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:07:59.030214 kubelet[3183]: I0113 20:07:59.027914 3183 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-103" Jan 13 20:07:59.058226 kubelet[3183]: E0113 20:07:59.058160 3183 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-17-103\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-103" Jan 13 20:07:59.072809 kubelet[3183]: I0113 20:07:59.072708 3183 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-17-103" Jan 13 20:07:59.072982 kubelet[3183]: I0113 20:07:59.072857 3183 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-103" Jan 13 20:07:59.120006 kubelet[3183]: I0113 20:07:59.119947 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6becfcf085572cf746d147b06dda5c4f-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-103\" (UID: \"6becfcf085572cf746d147b06dda5c4f\") " pod="kube-system/kube-scheduler-ip-172-31-17-103" Jan 13 20:07:59.120006 kubelet[3183]: I0113 20:07:59.120014 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16bff66dd6fa20d452dc062dbe32f215-ca-certs\") pod \"kube-apiserver-ip-172-31-17-103\" (UID: \"16bff66dd6fa20d452dc062dbe32f215\") " pod="kube-system/kube-apiserver-ip-172-31-17-103" Jan 13 20:07:59.120179 kubelet[3183]: I0113 20:07:59.120059 3183 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16bff66dd6fa20d452dc062dbe32f215-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-103\" (UID: \"16bff66dd6fa20d452dc062dbe32f215\") " pod="kube-system/kube-apiserver-ip-172-31-17-103" Jan 13 20:07:59.120179 kubelet[3183]: I0113 20:07:59.120094 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e45072f4307f412d61a3024fc149a147-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-103\" (UID: \"e45072f4307f412d61a3024fc149a147\") " pod="kube-system/kube-controller-manager-ip-172-31-17-103" Jan 13 20:07:59.120179 kubelet[3183]: I0113 20:07:59.120137 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16bff66dd6fa20d452dc062dbe32f215-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-103\" (UID: \"16bff66dd6fa20d452dc062dbe32f215\") " pod="kube-system/kube-apiserver-ip-172-31-17-103" Jan 13 20:07:59.121748 kubelet[3183]: I0113 20:07:59.120535 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e45072f4307f412d61a3024fc149a147-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-103\" (UID: \"e45072f4307f412d61a3024fc149a147\") " pod="kube-system/kube-controller-manager-ip-172-31-17-103" Jan 13 20:07:59.121748 kubelet[3183]: I0113 20:07:59.120617 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e45072f4307f412d61a3024fc149a147-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-103\" (UID: \"e45072f4307f412d61a3024fc149a147\") " pod="kube-system/kube-controller-manager-ip-172-31-17-103" Jan 13 20:07:59.121748 kubelet[3183]: I0113 20:07:59.120852 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e45072f4307f412d61a3024fc149a147-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-103\" (UID: \"e45072f4307f412d61a3024fc149a147\") " pod="kube-system/kube-controller-manager-ip-172-31-17-103" Jan 13 20:07:59.121748 kubelet[3183]: I0113 20:07:59.120911 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e45072f4307f412d61a3024fc149a147-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-103\" (UID: \"e45072f4307f412d61a3024fc149a147\") " pod="kube-system/kube-controller-manager-ip-172-31-17-103" Jan 13 20:07:59.586123 kubelet[3183]: I0113 20:07:59.585621 3183 apiserver.go:52] "Watching apiserver" Jan 13 20:07:59.602841 sudo[3196]: pam_unix(sudo:session): session closed for user root Jan 13 20:07:59.610937 kubelet[3183]: I0113 20:07:59.610849 3183 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:07:59.819346 kubelet[3183]: E0113 20:07:59.818668 3183 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-17-103\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-103" Jan 13 20:07:59.820238 kubelet[3183]: E0113 20:07:59.820170 3183 kubelet.go:1915] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-ip-172-31-17-103\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-103" Jan 13 20:07:59.876061 kubelet[3183]: I0113 20:07:59.875642 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-103" podStartSLOduration=2.8756167120000002 podStartE2EDuration="2.875616712s" podCreationTimestamp="2025-01-13 20:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:07:59.857587816 +0000 UTC m=+1.414622408" watchObservedRunningTime="2025-01-13 20:07:59.875616712 +0000 UTC m=+1.432651292" Jan 13 20:07:59.901136 kubelet[3183]: I0113 20:07:59.900146 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-103" podStartSLOduration=0.900122152 podStartE2EDuration="900.122152ms" podCreationTimestamp="2025-01-13 20:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:07:59.878609368 +0000 UTC m=+1.435643972" watchObservedRunningTime="2025-01-13 20:07:59.900122152 +0000 UTC m=+1.457156744" Jan 13 20:07:59.923465 kubelet[3183]: I0113 20:07:59.922448 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-103" podStartSLOduration=0.922396372 podStartE2EDuration="922.396372ms" podCreationTimestamp="2025-01-13 20:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:07:59.902921692 +0000 UTC m=+1.459956308" watchObservedRunningTime="2025-01-13 20:07:59.922396372 +0000 UTC m=+1.479430952" Jan 13 20:08:01.617518 kubelet[3183]: I0113 20:08:01.617230 3183 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:08:01.621071 containerd[1945]: time="2025-01-13T20:08:01.619703957Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:08:01.621658 kubelet[3183]: I0113 20:08:01.620161 3183 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:08:02.349320 systemd[1]: Created slice kubepods-besteffort-pod3bf360ed_61b5_4e3b_84a1_971a7d28ecf3.slice - libcontainer container kubepods-besteffort-pod3bf360ed_61b5_4e3b_84a1_971a7d28ecf3.slice. 
Jan 13 20:08:02.365753 kubelet[3183]: W0113 20:08:02.365688 3183 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-17-103" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-103' and this object Jan 13 20:08:02.365978 kubelet[3183]: E0113 20:08:02.365931 3183 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-17-103\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-103' and this object" logger="UnhandledError" Jan 13 20:08:02.366219 kubelet[3183]: W0113 20:08:02.366174 3183 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-17-103" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-103' and this object Jan 13 20:08:02.366432 kubelet[3183]: E0113 20:08:02.366226 3183 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-17-103\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-103' and this object" logger="UnhandledError" Jan 13 20:08:02.440398 sudo[2261]: pam_unix(sudo:session): session closed for user root Jan 13 20:08:02.441464 systemd[1]: Created slice kubepods-burstable-podff006b47_b526_43e6_af32_133b4ae313cd.slice - libcontainer container kubepods-burstable-podff006b47_b526_43e6_af32_133b4ae313cd.slice. 
Jan 13 20:08:02.446611 kubelet[3183]: I0113 20:08:02.444794 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-etc-cni-netd\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.446611 kubelet[3183]: I0113 20:08:02.444866 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bf360ed-61b5-4e3b-84a1-971a7d28ecf3-xtables-lock\") pod \"kube-proxy-27fks\" (UID: \"3bf360ed-61b5-4e3b-84a1-971a7d28ecf3\") " pod="kube-system/kube-proxy-27fks" Jan 13 20:08:02.446611 kubelet[3183]: I0113 20:08:02.444905 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-hostproc\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.446611 kubelet[3183]: I0113 20:08:02.444941 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-cilium-run\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.446611 kubelet[3183]: I0113 20:08:02.444975 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-lib-modules\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.446611 kubelet[3183]: I0113 20:08:02.445010 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-cilium-cgroup\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.447332 kubelet[3183]: I0113 20:08:02.445047 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff006b47-b526-43e6-af32-133b4ae313cd-clustermesh-secrets\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.447332 kubelet[3183]: I0113 20:08:02.445084 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-host-proc-sys-net\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.447332 kubelet[3183]: I0113 20:08:02.445125 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-bpf-maps\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.447332 kubelet[3183]: I0113 20:08:02.445159 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-cni-path\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.447332 kubelet[3183]: I0113 20:08:02.445212 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3bf360ed-61b5-4e3b-84a1-971a7d28ecf3-kube-proxy\") pod \"kube-proxy-27fks\" (UID: \"3bf360ed-61b5-4e3b-84a1-971a7d28ecf3\") " pod="kube-system/kube-proxy-27fks" Jan 13 20:08:02.447332 kubelet[3183]: I0113 20:08:02.445257 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmjl6\" (UniqueName: \"kubernetes.io/projected/3bf360ed-61b5-4e3b-84a1-971a7d28ecf3-kube-api-access-gmjl6\") pod \"kube-proxy-27fks\" (UID: \"3bf360ed-61b5-4e3b-84a1-971a7d28ecf3\") " pod="kube-system/kube-proxy-27fks" Jan 13 20:08:02.447687 kubelet[3183]: I0113 20:08:02.445296 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bf360ed-61b5-4e3b-84a1-971a7d28ecf3-lib-modules\") pod \"kube-proxy-27fks\" (UID: \"3bf360ed-61b5-4e3b-84a1-971a7d28ecf3\") " pod="kube-system/kube-proxy-27fks" Jan 13 20:08:02.447687 kubelet[3183]: I0113 20:08:02.445344 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-xtables-lock\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.447687 kubelet[3183]: I0113 20:08:02.445379 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtv2l\" (UniqueName: \"kubernetes.io/projected/ff006b47-b526-43e6-af32-133b4ae313cd-kube-api-access-qtv2l\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.447687 kubelet[3183]: I0113 20:08:02.445414 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff006b47-b526-43e6-af32-133b4ae313cd-cilium-config-path\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.447687 kubelet[3183]: I0113 20:08:02.445452 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-host-proc-sys-kernel\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.449123 kubelet[3183]: I0113 20:08:02.445488 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff006b47-b526-43e6-af32-133b4ae313cd-hubble-tls\") pod \"cilium-7pzjk\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") " pod="kube-system/cilium-7pzjk" Jan 13 20:08:02.466985 sshd[2260]: Connection closed by 139.178.68.195 port 45882 Jan 13 20:08:02.467882 sshd-session[2258]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:02.480191 systemd-logind[1920]: Session 7 logged out. Waiting for processes to exit. 
Jan 13 20:08:02.481144 systemd[1]: sshd@6-172.31.17.103:22-139.178.68.195:45882.service: Deactivated successfully. Jan 13 20:08:02.492423 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:08:02.492771 systemd[1]: session-7.scope: Consumed 12.394s CPU time, 154.1M memory peak, 0B memory swap peak. Jan 13 20:08:02.498151 systemd-logind[1920]: Removed session 7. Jan 13 20:08:02.747291 systemd[1]: Created slice kubepods-besteffort-poda6305281_7d60_45df_94a8_fae8f16ad031.slice - libcontainer container kubepods-besteffort-poda6305281_7d60_45df_94a8_fae8f16ad031.slice. Jan 13 20:08:02.751756 kubelet[3183]: I0113 20:08:02.751106 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6305281-7d60-45df-94a8-fae8f16ad031-cilium-config-path\") pod \"cilium-operator-5d85765b45-gd9hm\" (UID: \"a6305281-7d60-45df-94a8-fae8f16ad031\") " pod="kube-system/cilium-operator-5d85765b45-gd9hm" Jan 13 20:08:02.751756 kubelet[3183]: I0113 20:08:02.751188 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nls5\" (UniqueName: \"kubernetes.io/projected/a6305281-7d60-45df-94a8-fae8f16ad031-kube-api-access-6nls5\") pod \"cilium-operator-5d85765b45-gd9hm\" (UID: \"a6305281-7d60-45df-94a8-fae8f16ad031\") " pod="kube-system/cilium-operator-5d85765b45-gd9hm" Jan 13 20:08:03.551964 kubelet[3183]: E0113 20:08:03.551905 3183 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:08:03.552126 kubelet[3183]: E0113 20:08:03.552035 3183 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3bf360ed-61b5-4e3b-84a1-971a7d28ecf3-kube-proxy podName:3bf360ed-61b5-4e3b-84a1-971a7d28ecf3 nodeName:}" failed. No retries permitted until 2025-01-13 20:08:04.052001482 +0000 UTC m=+5.609036050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/3bf360ed-61b5-4e3b-84a1-971a7d28ecf3-kube-proxy") pod "kube-proxy-27fks" (UID: "3bf360ed-61b5-4e3b-84a1-971a7d28ecf3") : failed to sync configmap cache: timed out waiting for the condition Jan 13 20:08:03.655750 containerd[1945]: time="2025-01-13T20:08:03.655623775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7pzjk,Uid:ff006b47-b526-43e6-af32-133b4ae313cd,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:03.658306 containerd[1945]: time="2025-01-13T20:08:03.658254199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gd9hm,Uid:a6305281-7d60-45df-94a8-fae8f16ad031,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:03.728839 containerd[1945]: time="2025-01-13T20:08:03.728428207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:03.729200 containerd[1945]: time="2025-01-13T20:08:03.729088243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:03.730245 containerd[1945]: time="2025-01-13T20:08:03.730074295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:03.730592 containerd[1945]: time="2025-01-13T20:08:03.730392739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:03.735634 containerd[1945]: time="2025-01-13T20:08:03.735465499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:03.736443 containerd[1945]: time="2025-01-13T20:08:03.736350691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:03.737156 containerd[1945]: time="2025-01-13T20:08:03.736862347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:03.737433 containerd[1945]: time="2025-01-13T20:08:03.737123695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:03.771794 systemd[1]: Started cri-containerd-d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e.scope - libcontainer container d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e. Jan 13 20:08:03.786154 systemd[1]: Started cri-containerd-edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7.scope - libcontainer container edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7. Jan 13 20:08:03.843809 containerd[1945]: time="2025-01-13T20:08:03.843525368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7pzjk,Uid:ff006b47-b526-43e6-af32-133b4ae313cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\"" Jan 13 20:08:03.851799 containerd[1945]: time="2025-01-13T20:08:03.851681816Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:08:03.894654 containerd[1945]: time="2025-01-13T20:08:03.894584156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-gd9hm,Uid:a6305281-7d60-45df-94a8-fae8f16ad031,Namespace:kube-system,Attempt:0,} returns sandbox id \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\"" Jan 13 20:08:04.072596 update_engine[1922]: I20250113 20:08:04.072506 1922 update_attempter.cc:509] Updating boot flags... Jan 13 20:08:04.157377 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3355) Jan 13 20:08:04.167587 containerd[1945]: time="2025-01-13T20:08:04.166580610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27fks,Uid:3bf360ed-61b5-4e3b-84a1-971a7d28ecf3,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:04.229231 containerd[1945]: time="2025-01-13T20:08:04.228071766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:04.229499 containerd[1945]: time="2025-01-13T20:08:04.228977274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:04.229499 containerd[1945]: time="2025-01-13T20:08:04.229019730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:04.229499 containerd[1945]: time="2025-01-13T20:08:04.229164702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:04.276019 systemd[1]: Started cri-containerd-ff46fce0d41dda0509f9d19df519ab2a8c0107ccb805abe1882483053f2a5cac.scope - libcontainer container ff46fce0d41dda0509f9d19df519ab2a8c0107ccb805abe1882483053f2a5cac. Jan 13 20:08:04.415035 containerd[1945]: time="2025-01-13T20:08:04.413125207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-27fks,Uid:3bf360ed-61b5-4e3b-84a1-971a7d28ecf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff46fce0d41dda0509f9d19df519ab2a8c0107ccb805abe1882483053f2a5cac\"" Jan 13 20:08:04.433896 containerd[1945]: time="2025-01-13T20:08:04.433191979Z" level=info msg="CreateContainer within sandbox \"ff46fce0d41dda0509f9d19df519ab2a8c0107ccb805abe1882483053f2a5cac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:08:04.490961 containerd[1945]: time="2025-01-13T20:08:04.490898683Z" level=info msg="CreateContainer within sandbox \"ff46fce0d41dda0509f9d19df519ab2a8c0107ccb805abe1882483053f2a5cac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"324e483449dd250a4dea4bbca218dc5f6347a168f9c4238a8806d81dd8f8f5de\"" Jan 13 20:08:04.496047 containerd[1945]: time="2025-01-13T20:08:04.495927127Z" level=info msg="StartContainer for \"324e483449dd250a4dea4bbca218dc5f6347a168f9c4238a8806d81dd8f8f5de\"" Jan 13 20:08:04.551058 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3346) Jan 13 20:08:04.587070 systemd[1]: Started cri-containerd-324e483449dd250a4dea4bbca218dc5f6347a168f9c4238a8806d81dd8f8f5de.scope - libcontainer container 324e483449dd250a4dea4bbca218dc5f6347a168f9c4238a8806d81dd8f8f5de. Jan 13 20:08:04.746020 containerd[1945]: time="2025-01-13T20:08:04.744599444Z" level=info msg="StartContainer for \"324e483449dd250a4dea4bbca218dc5f6347a168f9c4238a8806d81dd8f8f5de\" returns successfully" Jan 13 20:08:04.963767 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3357) Jan 13 20:08:05.436635 kubelet[3183]: I0113 20:08:05.436506 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-27fks" podStartSLOduration=3.43648238 podStartE2EDuration="3.43648238s" podCreationTimestamp="2025-01-13 20:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:04.887454489 +0000 UTC m=+6.444489069" watchObservedRunningTime="2025-01-13 20:08:05.43648238 +0000 UTC m=+6.993516996" Jan 13 20:08:16.868261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164751036.mount: Deactivated successfully. 
Jan 13 20:08:19.364209 containerd[1945]: time="2025-01-13T20:08:19.364092093Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:19.365582 containerd[1945]: time="2025-01-13T20:08:19.365500761Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650918" Jan 13 20:08:19.367493 containerd[1945]: time="2025-01-13T20:08:19.367412373Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:19.370540 containerd[1945]: time="2025-01-13T20:08:19.370318653Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 15.518557109s" Jan 13 20:08:19.370540 containerd[1945]: time="2025-01-13T20:08:19.370378221Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:08:19.374356 containerd[1945]: time="2025-01-13T20:08:19.374065545Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:08:19.378770 containerd[1945]: time="2025-01-13T20:08:19.378554325Z" level=info msg="CreateContainer within sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:08:19.399238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519116285.mount: Deactivated successfully. Jan 13 20:08:19.400566 containerd[1945]: time="2025-01-13T20:08:19.400481949Z" level=info msg="CreateContainer within sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38\"" Jan 13 20:08:19.403565 containerd[1945]: time="2025-01-13T20:08:19.402269085Z" level=info msg="StartContainer for \"e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38\"" Jan 13 20:08:19.460013 systemd[1]: Started cri-containerd-e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38.scope - libcontainer container e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38. Jan 13 20:08:19.506122 containerd[1945]: time="2025-01-13T20:08:19.506045794Z" level=info msg="StartContainer for \"e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38\" returns successfully" Jan 13 20:08:19.528642 systemd[1]: cri-containerd-e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38.scope: Deactivated successfully. Jan 13 20:08:20.391627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38-rootfs.mount: Deactivated successfully. 
Jan 13 20:08:20.621160 containerd[1945]: time="2025-01-13T20:08:20.621067847Z" level=info msg="shim disconnected" id=e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38 namespace=k8s.io Jan 13 20:08:20.621160 containerd[1945]: time="2025-01-13T20:08:20.621149711Z" level=warning msg="cleaning up after shim disconnected" id=e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38 namespace=k8s.io Jan 13 20:08:20.621921 containerd[1945]: time="2025-01-13T20:08:20.621171335Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:20.918055 containerd[1945]: time="2025-01-13T20:08:20.916542553Z" level=info msg="CreateContainer within sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:08:20.968128 containerd[1945]: time="2025-01-13T20:08:20.968059369Z" level=info msg="CreateContainer within sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093\"" Jan 13 20:08:20.969491 containerd[1945]: time="2025-01-13T20:08:20.969437401Z" level=info msg="StartContainer for \"8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093\"" Jan 13 20:08:21.029026 systemd[1]: Started cri-containerd-8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093.scope - libcontainer container 8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093. Jan 13 20:08:21.076292 containerd[1945]: time="2025-01-13T20:08:21.075573034Z" level=info msg="StartContainer for \"8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093\" returns successfully" Jan 13 20:08:21.098320 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:08:21.099291 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:08:21.099986 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:08:21.106327 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:08:21.109525 systemd[1]: cri-containerd-8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093.scope: Deactivated successfully. Jan 13 20:08:21.151427 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:08:21.162088 containerd[1945]: time="2025-01-13T20:08:21.162007834Z" level=info msg="shim disconnected" id=8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093 namespace=k8s.io Jan 13 20:08:21.162088 containerd[1945]: time="2025-01-13T20:08:21.162083506Z" level=warning msg="cleaning up after shim disconnected" id=8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093 namespace=k8s.io Jan 13 20:08:21.162649 containerd[1945]: time="2025-01-13T20:08:21.162105286Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:21.391032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093-rootfs.mount: Deactivated successfully. 
Jan 13 20:08:21.922326 containerd[1945]: time="2025-01-13T20:08:21.922049186Z" level=info msg="CreateContainer within sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:08:21.958381 containerd[1945]: time="2025-01-13T20:08:21.958134818Z" level=info msg="CreateContainer within sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46\"" Jan 13 20:08:21.960871 containerd[1945]: time="2025-01-13T20:08:21.959014442Z" level=info msg="StartContainer for \"acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46\"" Jan 13 20:08:22.019061 systemd[1]: Started cri-containerd-acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46.scope - libcontainer container acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46. Jan 13 20:08:22.075161 containerd[1945]: time="2025-01-13T20:08:22.075084946Z" level=info msg="StartContainer for \"acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46\" returns successfully" Jan 13 20:08:22.081088 systemd[1]: cri-containerd-acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46.scope: Deactivated successfully. Jan 13 20:08:22.121115 containerd[1945]: time="2025-01-13T20:08:22.121037099Z" level=info msg="shim disconnected" id=acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46 namespace=k8s.io Jan 13 20:08:22.121115 containerd[1945]: time="2025-01-13T20:08:22.121113047Z" level=warning msg="cleaning up after shim disconnected" id=acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46 namespace=k8s.io Jan 13 20:08:22.122029 containerd[1945]: time="2025-01-13T20:08:22.121135715Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:22.392114 systemd[1]: run-containerd-runc-k8s.io-acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46-runc.Y7nq9H.mount: Deactivated successfully. Jan 13 20:08:22.392295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46-rootfs.mount: Deactivated successfully. Jan 13 20:08:22.924994 containerd[1945]: time="2025-01-13T20:08:22.924916911Z" level=info msg="CreateContainer within sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:08:22.951103 containerd[1945]: time="2025-01-13T20:08:22.949356399Z" level=info msg="CreateContainer within sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae\"" Jan 13 20:08:22.952387 containerd[1945]: time="2025-01-13T20:08:22.951270723Z" level=info msg="StartContainer for \"6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae\"" Jan 13 20:08:23.018030 systemd[1]: Started cri-containerd-6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae.scope - libcontainer container 6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae. Jan 13 20:08:23.062390 systemd[1]: cri-containerd-6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae.scope: Deactivated successfully. 
Jan 13 20:08:23.066776 containerd[1945]: time="2025-01-13T20:08:23.066210251Z" level=info msg="StartContainer for \"6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae\" returns successfully" Jan 13 20:08:23.104534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae-rootfs.mount: Deactivated successfully. Jan 13 20:08:23.109458 containerd[1945]: time="2025-01-13T20:08:23.109375248Z" level=info msg="shim disconnected" id=6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae namespace=k8s.io Jan 13 20:08:23.109458 containerd[1945]: time="2025-01-13T20:08:23.109453008Z" level=warning msg="cleaning up after shim disconnected" id=6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae namespace=k8s.io Jan 13 20:08:23.109876 containerd[1945]: time="2025-01-13T20:08:23.109474860Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:08:23.932245 containerd[1945]: time="2025-01-13T20:08:23.932185792Z" level=info msg="CreateContainer within sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:08:23.964901 containerd[1945]: time="2025-01-13T20:08:23.963272116Z" level=info msg="CreateContainer within sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\"" Jan 13 20:08:23.966447 containerd[1945]: time="2025-01-13T20:08:23.966401464Z" level=info msg="StartContainer for \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\"" Jan 13 20:08:24.024048 systemd[1]: Started cri-containerd-3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a.scope - libcontainer container 3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a. Jan 13 20:08:24.078961 containerd[1945]: time="2025-01-13T20:08:24.078845292Z" level=info msg="StartContainer for \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\" returns successfully" Jan 13 20:08:24.236142 kubelet[3183]: I0113 20:08:24.235978 3183 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:08:24.312480 systemd[1]: Created slice kubepods-burstable-podb6223b56_44f3_4fdb_b899_8f4201f0b09c.slice - libcontainer container kubepods-burstable-podb6223b56_44f3_4fdb_b899_8f4201f0b09c.slice. 
Jan 13 20:08:24.320299 kubelet[3183]: I0113 20:08:24.320237 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqbvl\" (UniqueName: \"kubernetes.io/projected/b6223b56-44f3-4fdb-b899-8f4201f0b09c-kube-api-access-qqbvl\") pod \"coredns-6f6b679f8f-q5j2d\" (UID: \"b6223b56-44f3-4fdb-b899-8f4201f0b09c\") " pod="kube-system/coredns-6f6b679f8f-q5j2d" Jan 13 20:08:24.321612 kubelet[3183]: I0113 20:08:24.320441 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6223b56-44f3-4fdb-b899-8f4201f0b09c-config-volume\") pod \"coredns-6f6b679f8f-q5j2d\" (UID: \"b6223b56-44f3-4fdb-b899-8f4201f0b09c\") " pod="kube-system/coredns-6f6b679f8f-q5j2d" Jan 13 20:08:24.321612 kubelet[3183]: I0113 20:08:24.320844 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgpc7\" (UniqueName: \"kubernetes.io/projected/6c9aa2d8-5b3f-4ed3-b1ef-796879859877-kube-api-access-tgpc7\") pod \"coredns-6f6b679f8f-vgb2j\" (UID: \"6c9aa2d8-5b3f-4ed3-b1ef-796879859877\") " pod="kube-system/coredns-6f6b679f8f-vgb2j" Jan 13 20:08:24.321612 kubelet[3183]: I0113 20:08:24.321062 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c9aa2d8-5b3f-4ed3-b1ef-796879859877-config-volume\") pod \"coredns-6f6b679f8f-vgb2j\" (UID: \"6c9aa2d8-5b3f-4ed3-b1ef-796879859877\") " pod="kube-system/coredns-6f6b679f8f-vgb2j" Jan 13 20:08:24.325698 systemd[1]: Created slice kubepods-burstable-pod6c9aa2d8_5b3f_4ed3_b1ef_796879859877.slice - libcontainer container kubepods-burstable-pod6c9aa2d8_5b3f_4ed3_b1ef_796879859877.slice. Jan 13 20:08:24.625022 containerd[1945]: time="2025-01-13T20:08:24.624918015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q5j2d,Uid:b6223b56-44f3-4fdb-b899-8f4201f0b09c,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:24.635764 containerd[1945]: time="2025-01-13T20:08:24.635673567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vgb2j,Uid:6c9aa2d8-5b3f-4ed3-b1ef-796879859877,Namespace:kube-system,Attempt:0,}" Jan 13 20:08:29.002650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2434531330.mount: Deactivated successfully. 
Jan 13 20:08:29.575138 containerd[1945]: time="2025-01-13T20:08:29.575077364Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:29.577178 containerd[1945]: time="2025-01-13T20:08:29.577071068Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137714" Jan 13 20:08:29.577316 containerd[1945]: time="2025-01-13T20:08:29.577212788Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:08:29.581772 containerd[1945]: time="2025-01-13T20:08:29.581657708Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 10.207527399s" Jan 13 20:08:29.581772 containerd[1945]: time="2025-01-13T20:08:29.581738156Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 13 20:08:29.587174 containerd[1945]: time="2025-01-13T20:08:29.586759004Z" level=info msg="CreateContainer within sandbox \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 13 20:08:29.608209 containerd[1945]: time="2025-01-13T20:08:29.608138372Z" level=info msg="CreateContainer within sandbox \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\"" Jan 13 20:08:29.609775 containerd[1945]: time="2025-01-13T20:08:29.609389036Z" level=info msg="StartContainer for \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\"" Jan 13 20:08:29.663032 systemd[1]: Started cri-containerd-5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d.scope - libcontainer container 5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d. 
Jan 13 20:08:29.714057 containerd[1945]: time="2025-01-13T20:08:29.713995016Z" level=info msg="StartContainer for \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\" returns successfully" Jan 13 20:08:30.028802 kubelet[3183]: I0113 20:08:30.028587 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7pzjk" podStartSLOduration=12.506504141 podStartE2EDuration="28.028565358s" podCreationTimestamp="2025-01-13 20:08:02 +0000 UTC" firstStartedPulling="2025-01-13 20:08:03.8504315 +0000 UTC m=+5.407466080" lastFinishedPulling="2025-01-13 20:08:19.372492717 +0000 UTC m=+20.929527297" observedRunningTime="2025-01-13 20:08:24.999915053 +0000 UTC m=+26.556949657" watchObservedRunningTime="2025-01-13 20:08:30.028565358 +0000 UTC m=+31.585599938" Jan 13 20:08:32.765060 systemd-networkd[1847]: cilium_host: Link UP Jan 13 20:08:32.765417 systemd-networkd[1847]: cilium_net: Link UP Jan 13 20:08:32.768120 systemd-networkd[1847]: cilium_net: Gained carrier Jan 13 20:08:32.768465 systemd-networkd[1847]: cilium_host: Gained carrier Jan 13 20:08:32.769791 (udev-worker)[4285]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:08:32.771951 (udev-worker)[4284]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:08:32.963749 systemd-networkd[1847]: cilium_vxlan: Link UP Jan 13 20:08:32.963951 systemd-networkd[1847]: cilium_vxlan: Gained carrier Jan 13 20:08:33.096080 systemd-networkd[1847]: cilium_host: Gained IPv6LL Jan 13 20:08:33.451485 systemd[1]: Started sshd@7-172.31.17.103:22-139.178.68.195:47158.service - OpenSSH per-connection server daemon (139.178.68.195:47158). Jan 13 20:08:33.526783 kernel: NET: Registered PF_ALG protocol family Jan 13 20:08:33.643988 sshd[4375]: Accepted publickey for core from 139.178.68.195 port 47158 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:33.647034 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:33.657300 systemd-logind[1920]: New session 8 of user core. Jan 13 20:08:33.664391 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:08:33.752115 systemd-networkd[1847]: cilium_net: Gained IPv6LL Jan 13 20:08:33.975577 sshd[4393]: Connection closed by 139.178.68.195 port 47158 Jan 13 20:08:33.977748 sshd-session[4375]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:33.983757 systemd[1]: sshd@7-172.31.17.103:22-139.178.68.195:47158.service: Deactivated successfully. Jan 13 20:08:33.990900 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:08:33.994226 systemd-logind[1920]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:08:33.998575 systemd-logind[1920]: Removed session 8. Jan 13 20:08:34.456006 systemd-networkd[1847]: cilium_vxlan: Gained IPv6LL Jan 13 20:08:34.933536 (udev-worker)[4300]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:08:34.938467 systemd-networkd[1847]: lxc_health: Link UP Jan 13 20:08:34.976072 systemd-networkd[1847]: lxc_health: Gained carrier Jan 13 20:08:35.702585 kubelet[3183]: I0113 20:08:35.702423 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-gd9hm" podStartSLOduration=8.017750242 podStartE2EDuration="33.702397874s" podCreationTimestamp="2025-01-13 20:08:02 +0000 UTC" firstStartedPulling="2025-01-13 20:08:03.898435856 +0000 UTC m=+5.455470436" lastFinishedPulling="2025-01-13 20:08:29.5830835 +0000 UTC m=+31.140118068" observedRunningTime="2025-01-13 20:08:30.06554613 +0000 UTC m=+31.622581202" watchObservedRunningTime="2025-01-13 20:08:35.702397874 +0000 UTC m=+37.259432490" Jan 13 20:08:35.774766 systemd-networkd[1847]: lxc2112b22c58f4: Link UP Jan 13 20:08:35.788334 systemd-networkd[1847]: lxc9c494444bdb3: Link UP Jan 13 20:08:35.797885 kernel: eth0: renamed from tmp16b52 Jan 13 20:08:35.803817 kernel: eth0: renamed from tmp423db Jan 13 20:08:35.814629 systemd-networkd[1847]: lxc2112b22c58f4: Gained carrier Jan 13 20:08:35.822073 systemd-networkd[1847]: lxc9c494444bdb3: Gained carrier Jan 13 20:08:36.888032 systemd-networkd[1847]: lxc_health: Gained IPv6LL Jan 13 20:08:37.784625 systemd-networkd[1847]: lxc9c494444bdb3: Gained IPv6LL Jan 13 20:08:37.785203 systemd-networkd[1847]: lxc2112b22c58f4: Gained IPv6LL Jan 13 20:08:39.016238 systemd[1]: Started sshd@8-172.31.17.103:22-139.178.68.195:37146.service - OpenSSH per-connection server daemon (139.178.68.195:37146). Jan 13 20:08:39.220765 sshd[4662]: Accepted publickey for core from 139.178.68.195 port 37146 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:39.221898 sshd-session[4662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:39.234930 systemd-logind[1920]: New session 9 of user core. Jan 13 20:08:39.240381 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:08:39.514299 sshd[4664]: Connection closed by 139.178.68.195 port 37146 Jan 13 20:08:39.517360 sshd-session[4662]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:39.524207 systemd[1]: sshd@8-172.31.17.103:22-139.178.68.195:37146.service: Deactivated successfully. Jan 13 20:08:39.533589 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:08:39.537910 systemd-logind[1920]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:08:39.543355 systemd-logind[1920]: Removed session 9. 
Jan 13 20:08:40.099902 ntpd[1914]: Listen normally on 8 cilium_host 192.168.0.253:123 Jan 13 20:08:40.100043 ntpd[1914]: Listen normally on 9 cilium_net [fe80::e4ee:9eff:fe32:c1c0%4]:123 Jan 13 20:08:40.100130 ntpd[1914]: Listen normally on 10 cilium_host [fe80::f0a2:49ff:fe76:3068%5]:123 Jan 13 20:08:40.100203 ntpd[1914]: Listen normally on 11 cilium_vxlan [fe80::a824:5aff:fe7e:ee61%6]:123 Jan 13 20:08:40.100276 ntpd[1914]: Listen normally on 12 lxc_health [fe80::8488:caff:fe89:70b1%8]:123 Jan 13 20:08:40.100347 ntpd[1914]: Listen normally on 13 lxc2112b22c58f4 [fe80::cc41:36ff:fed4:740f%10]:123 Jan 13 20:08:40.100428 ntpd[1914]: Listen normally on 14 lxc9c494444bdb3 [fe80::38a6:1bff:fe79:27d7%12]:123 Jan 13 20:08:44.128464 containerd[1945]: time="2025-01-13T20:08:44.128301440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:44.129043 containerd[1945]: time="2025-01-13T20:08:44.128570828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:44.129043 containerd[1945]: time="2025-01-13T20:08:44.128768384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:44.133898 containerd[1945]: time="2025-01-13T20:08:44.131674940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:44.212056 systemd[1]: Started cri-containerd-16b5221916441354aa6a0794c18b3909af459756ab49db0bb08516e27a20ac70.scope - libcontainer container 16b5221916441354aa6a0794c18b3909af459756ab49db0bb08516e27a20ac70. Jan 13 20:08:44.227130 containerd[1945]: time="2025-01-13T20:08:44.226196889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:08:44.227130 containerd[1945]: time="2025-01-13T20:08:44.226296549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:08:44.227130 containerd[1945]: time="2025-01-13T20:08:44.226332465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:44.227130 containerd[1945]: time="2025-01-13T20:08:44.226487445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:08:44.290061 systemd[1]: Started cri-containerd-423dbcb8ea54f2e796c0e46825c661f0dac66b7becb604a75cdf9b5f3ebc358f.scope - libcontainer container 423dbcb8ea54f2e796c0e46825c661f0dac66b7becb604a75cdf9b5f3ebc358f. Jan 13 20:08:44.395955 containerd[1945]: time="2025-01-13T20:08:44.395465601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vgb2j,Uid:6c9aa2d8-5b3f-4ed3-b1ef-796879859877,Namespace:kube-system,Attempt:0,} returns sandbox id \"16b5221916441354aa6a0794c18b3909af459756ab49db0bb08516e27a20ac70\"" Jan 13 20:08:44.405015 containerd[1945]: time="2025-01-13T20:08:44.404840193Z" level=info msg="CreateContainer within sandbox \"16b5221916441354aa6a0794c18b3909af459756ab49db0bb08516e27a20ac70\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:08:44.421382 containerd[1945]: time="2025-01-13T20:08:44.421300929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q5j2d,Uid:b6223b56-44f3-4fdb-b899-8f4201f0b09c,Namespace:kube-system,Attempt:0,} returns sandbox id \"423dbcb8ea54f2e796c0e46825c661f0dac66b7becb604a75cdf9b5f3ebc358f\"" Jan 13 20:08:44.431187 containerd[1945]: time="2025-01-13T20:08:44.430787758Z" level=info msg="CreateContainer within sandbox \"423dbcb8ea54f2e796c0e46825c661f0dac66b7becb604a75cdf9b5f3ebc358f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:08:44.442495 containerd[1945]: time="2025-01-13T20:08:44.442121446Z" level=info msg="CreateContainer within sandbox \"16b5221916441354aa6a0794c18b3909af459756ab49db0bb08516e27a20ac70\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91bacc149f4cd2b7a4b0c317683982d01af854db91f8fb1f83f1c11ef6c4dd00\"" Jan 13 20:08:44.447261 containerd[1945]: time="2025-01-13T20:08:44.447058714Z" level=info msg="StartContainer for \"91bacc149f4cd2b7a4b0c317683982d01af854db91f8fb1f83f1c11ef6c4dd00\"" Jan 13 20:08:44.464766 containerd[1945]: time="2025-01-13T20:08:44.464663086Z" level=info msg="CreateContainer within sandbox \"423dbcb8ea54f2e796c0e46825c661f0dac66b7becb604a75cdf9b5f3ebc358f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70794a9b41984f5d50cc9410ae8cba7ca7a9a17ffb6fe77c0d2e9eb3a87f871c\"" Jan 13 20:08:44.468598 containerd[1945]: time="2025-01-13T20:08:44.468464122Z" level=info msg="StartContainer for \"70794a9b41984f5d50cc9410ae8cba7ca7a9a17ffb6fe77c0d2e9eb3a87f871c\"" Jan 13 20:08:44.541214 systemd[1]: Started cri-containerd-91bacc149f4cd2b7a4b0c317683982d01af854db91f8fb1f83f1c11ef6c4dd00.scope - libcontainer container 91bacc149f4cd2b7a4b0c317683982d01af854db91f8fb1f83f1c11ef6c4dd00. Jan 13 20:08:44.566228 systemd[1]: Started sshd@9-172.31.17.103:22-139.178.68.195:37162.service - OpenSSH per-connection server daemon (139.178.68.195:37162). Jan 13 20:08:44.582418 systemd[1]: Started cri-containerd-70794a9b41984f5d50cc9410ae8cba7ca7a9a17ffb6fe77c0d2e9eb3a87f871c.scope - libcontainer container 70794a9b41984f5d50cc9410ae8cba7ca7a9a17ffb6fe77c0d2e9eb3a87f871c. 
Jan 13 20:08:44.675642 containerd[1945]: time="2025-01-13T20:08:44.674506439Z" level=info msg="StartContainer for \"91bacc149f4cd2b7a4b0c317683982d01af854db91f8fb1f83f1c11ef6c4dd00\" returns successfully" Jan 13 20:08:44.719056 containerd[1945]: time="2025-01-13T20:08:44.718984259Z" level=info msg="StartContainer for \"70794a9b41984f5d50cc9410ae8cba7ca7a9a17ffb6fe77c0d2e9eb3a87f871c\" returns successfully" Jan 13 20:08:44.792038 sshd[4805]: Accepted publickey for core from 139.178.68.195 port 37162 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:44.796492 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:44.806784 systemd-logind[1920]: New session 10 of user core. Jan 13 20:08:44.812020 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:08:45.069924 kubelet[3183]: I0113 20:08:45.069742 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-q5j2d" podStartSLOduration=43.069692385 podStartE2EDuration="43.069692385s" podCreationTimestamp="2025-01-13 20:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:45.065529081 +0000 UTC m=+46.622563673" watchObservedRunningTime="2025-01-13 20:08:45.069692385 +0000 UTC m=+46.626726977" Jan 13 20:08:45.070653 kubelet[3183]: I0113 20:08:45.069946 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vgb2j" podStartSLOduration=43.069934257 podStartE2EDuration="43.069934257s" podCreationTimestamp="2025-01-13 20:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:08:45.041688633 +0000 UTC m=+46.598723213" watchObservedRunningTime="2025-01-13 20:08:45.069934257 +0000 UTC m=+46.626968873" Jan 13 20:08:45.102480 sshd[4844]: Connection closed by 139.178.68.195 port 37162 Jan 13 20:08:45.104261 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:45.115312 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:08:45.118680 systemd[1]: sshd@9-172.31.17.103:22-139.178.68.195:37162.service: Deactivated successfully. Jan 13 20:08:45.132636 systemd-logind[1920]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:08:45.136050 systemd-logind[1920]: Removed session 10. Jan 13 20:08:50.144259 systemd[1]: Started sshd@10-172.31.17.103:22-139.178.68.195:50408.service - OpenSSH per-connection server daemon (139.178.68.195:50408). Jan 13 20:08:50.337264 sshd[4864]: Accepted publickey for core from 139.178.68.195 port 50408 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:50.339831 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:50.348903 systemd-logind[1920]: New session 11 of user core. Jan 13 20:08:50.357984 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:08:50.605918 sshd[4866]: Connection closed by 139.178.68.195 port 50408 Jan 13 20:08:50.606755 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:50.614579 systemd[1]: sshd@10-172.31.17.103:22-139.178.68.195:50408.service: Deactivated successfully. Jan 13 20:08:50.618430 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:08:50.620341 systemd-logind[1920]: Session 11 logged out. 
Waiting for processes to exit. Jan 13 20:08:50.622519 systemd-logind[1920]: Removed session 11. Jan 13 20:08:55.647304 systemd[1]: Started sshd@11-172.31.17.103:22-139.178.68.195:53514.service - OpenSSH per-connection server daemon (139.178.68.195:53514). Jan 13 20:08:55.829142 sshd[4878]: Accepted publickey for core from 139.178.68.195 port 53514 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:08:55.831681 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:08:55.839337 systemd-logind[1920]: New session 12 of user core. Jan 13 20:08:55.847984 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:08:56.100284 sshd[4880]: Connection closed by 139.178.68.195 port 53514 Jan 13 20:08:56.101180 sshd-session[4878]: pam_unix(sshd:session): session closed for user core Jan 13 20:08:56.108003 systemd[1]: sshd@11-172.31.17.103:22-139.178.68.195:53514.service: Deactivated successfully. Jan 13 20:08:56.112688 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:08:56.114568 systemd-logind[1920]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:08:56.116595 systemd-logind[1920]: Removed session 12. Jan 13 20:09:01.143292 systemd[1]: Started sshd@12-172.31.17.103:22-139.178.68.195:53524.service - OpenSSH per-connection server daemon (139.178.68.195:53524). Jan 13 20:09:01.346926 sshd[4893]: Accepted publickey for core from 139.178.68.195 port 53524 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:01.349472 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:01.357028 systemd-logind[1920]: New session 13 of user core. Jan 13 20:09:01.366014 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:09:01.614796 sshd[4895]: Connection closed by 139.178.68.195 port 53524 Jan 13 20:09:01.615612 sshd-session[4893]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:01.620801 systemd[1]: sshd@12-172.31.17.103:22-139.178.68.195:53524.service: Deactivated successfully. Jan 13 20:09:01.625482 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:09:01.628552 systemd-logind[1920]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:09:01.631536 systemd-logind[1920]: Removed session 13. Jan 13 20:09:01.656452 systemd[1]: Started sshd@13-172.31.17.103:22-139.178.68.195:53526.service - OpenSSH per-connection server daemon (139.178.68.195:53526). Jan 13 20:09:01.836877 sshd[4907]: Accepted publickey for core from 139.178.68.195 port 53526 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k Jan 13 20:09:01.839620 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:09:01.848052 systemd-logind[1920]: New session 14 of user core. Jan 13 20:09:01.860989 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:09:02.181216 sshd[4910]: Connection closed by 139.178.68.195 port 53526 Jan 13 20:09:02.180839 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Jan 13 20:09:02.189397 systemd[1]: sshd@13-172.31.17.103:22-139.178.68.195:53526.service: Deactivated successfully. Jan 13 20:09:02.198418 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:09:02.206138 systemd-logind[1920]: Session 14 logged out. Waiting for processes to exit. 
Jan 13 20:09:02.233232 systemd[1]: Started sshd@14-172.31.17.103:22-139.178.68.195:53530.service - OpenSSH per-connection server daemon (139.178.68.195:53530).
Jan 13 20:09:02.238006 systemd-logind[1920]: Removed session 14.
Jan 13 20:09:02.431548 sshd[4919]: Accepted publickey for core from 139.178.68.195 port 53530 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:02.434597 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:02.448112 systemd-logind[1920]: New session 15 of user core.
Jan 13 20:09:02.453443 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 20:09:02.698098 sshd[4921]: Connection closed by 139.178.68.195 port 53530
Jan 13 20:09:02.698986 sshd-session[4919]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:02.706880 systemd[1]: sshd@14-172.31.17.103:22-139.178.68.195:53530.service: Deactivated successfully.
Jan 13 20:09:02.710186 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 20:09:02.713042 systemd-logind[1920]: Session 15 logged out. Waiting for processes to exit.
Jan 13 20:09:02.715157 systemd-logind[1920]: Removed session 15.
Jan 13 20:09:07.742230 systemd[1]: Started sshd@15-172.31.17.103:22-139.178.68.195:59052.service - OpenSSH per-connection server daemon (139.178.68.195:59052).
Jan 13 20:09:07.935337 sshd[4936]: Accepted publickey for core from 139.178.68.195 port 59052 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:07.938114 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:07.946536 systemd-logind[1920]: New session 16 of user core.
Jan 13 20:09:07.959008 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 20:09:08.222151 sshd[4938]: Connection closed by 139.178.68.195 port 59052
Jan 13 20:09:08.223075 sshd-session[4936]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:08.229575 systemd[1]: sshd@15-172.31.17.103:22-139.178.68.195:59052.service: Deactivated successfully.
Jan 13 20:09:08.234150 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 20:09:08.235802 systemd-logind[1920]: Session 16 logged out. Waiting for processes to exit.
Jan 13 20:09:08.238653 systemd-logind[1920]: Removed session 16.
Jan 13 20:09:13.268245 systemd[1]: Started sshd@16-172.31.17.103:22-139.178.68.195:59054.service - OpenSSH per-connection server daemon (139.178.68.195:59054).
Jan 13 20:09:13.449644 sshd[4949]: Accepted publickey for core from 139.178.68.195 port 59054 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:13.452210 sshd-session[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:13.459510 systemd-logind[1920]: New session 17 of user core.
Jan 13 20:09:13.467979 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 20:09:13.713258 sshd[4951]: Connection closed by 139.178.68.195 port 59054
Jan 13 20:09:13.714328 sshd-session[4949]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:13.719440 systemd-logind[1920]: Session 17 logged out. Waiting for processes to exit.
Jan 13 20:09:13.720097 systemd[1]: sshd@16-172.31.17.103:22-139.178.68.195:59054.service: Deactivated successfully.
Jan 13 20:09:13.724084 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 20:09:13.728177 systemd-logind[1920]: Removed session 17.
Jan 13 20:09:18.750263 systemd[1]: Started sshd@17-172.31.17.103:22-139.178.68.195:37302.service - OpenSSH per-connection server daemon (139.178.68.195:37302).
Jan 13 20:09:18.941515 sshd[4964]: Accepted publickey for core from 139.178.68.195 port 37302 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:18.944124 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:18.951254 systemd-logind[1920]: New session 18 of user core.
Jan 13 20:09:18.960119 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 20:09:19.214087 sshd[4966]: Connection closed by 139.178.68.195 port 37302
Jan 13 20:09:19.214972 sshd-session[4964]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:19.221312 systemd[1]: sshd@17-172.31.17.103:22-139.178.68.195:37302.service: Deactivated successfully.
Jan 13 20:09:19.225500 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 20:09:19.227390 systemd-logind[1920]: Session 18 logged out. Waiting for processes to exit.
Jan 13 20:09:19.229581 systemd-logind[1920]: Removed session 18.
Jan 13 20:09:19.255306 systemd[1]: Started sshd@18-172.31.17.103:22-139.178.68.195:37306.service - OpenSSH per-connection server daemon (139.178.68.195:37306).
Jan 13 20:09:19.452395 sshd[4976]: Accepted publickey for core from 139.178.68.195 port 37306 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:19.454916 sshd-session[4976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:19.463329 systemd-logind[1920]: New session 19 of user core.
Jan 13 20:09:19.468971 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 20:09:19.757487 sshd[4978]: Connection closed by 139.178.68.195 port 37306
Jan 13 20:09:19.758434 sshd-session[4976]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:19.764091 systemd[1]: sshd@18-172.31.17.103:22-139.178.68.195:37306.service: Deactivated successfully.
Jan 13 20:09:19.768306 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 20:09:19.771795 systemd-logind[1920]: Session 19 logged out. Waiting for processes to exit.
Jan 13 20:09:19.774613 systemd-logind[1920]: Removed session 19.
Jan 13 20:09:19.797259 systemd[1]: Started sshd@19-172.31.17.103:22-139.178.68.195:37308.service - OpenSSH per-connection server daemon (139.178.68.195:37308).
Jan 13 20:09:20.004769 sshd[4987]: Accepted publickey for core from 139.178.68.195 port 37308 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:20.007385 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:20.016052 systemd-logind[1920]: New session 20 of user core.
Jan 13 20:09:20.023006 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 20:09:22.721041 sshd[4989]: Connection closed by 139.178.68.195 port 37308
Jan 13 20:09:22.722061 sshd-session[4987]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:22.734296 systemd[1]: sshd@19-172.31.17.103:22-139.178.68.195:37308.service: Deactivated successfully.
Jan 13 20:09:22.743619 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 20:09:22.748853 systemd-logind[1920]: Session 20 logged out. Waiting for processes to exit.
Jan 13 20:09:22.770231 systemd[1]: Started sshd@20-172.31.17.103:22-139.178.68.195:37312.service - OpenSSH per-connection server daemon (139.178.68.195:37312).
Jan 13 20:09:22.772251 systemd-logind[1920]: Removed session 20.
Jan 13 20:09:22.958559 sshd[5005]: Accepted publickey for core from 139.178.68.195 port 37312 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:22.961091 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:22.968809 systemd-logind[1920]: New session 21 of user core.
Jan 13 20:09:22.976055 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 20:09:23.464033 sshd[5007]: Connection closed by 139.178.68.195 port 37312
Jan 13 20:09:23.465252 sshd-session[5005]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:23.472201 systemd[1]: sshd@20-172.31.17.103:22-139.178.68.195:37312.service: Deactivated successfully.
Jan 13 20:09:23.475916 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 20:09:23.477543 systemd-logind[1920]: Session 21 logged out. Waiting for processes to exit.
Jan 13 20:09:23.479678 systemd-logind[1920]: Removed session 21.
Jan 13 20:09:23.505267 systemd[1]: Started sshd@21-172.31.17.103:22-139.178.68.195:37318.service - OpenSSH per-connection server daemon (139.178.68.195:37318).
Jan 13 20:09:23.698387 sshd[5016]: Accepted publickey for core from 139.178.68.195 port 37318 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:23.701351 sshd-session[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:23.710088 systemd-logind[1920]: New session 22 of user core.
Jan 13 20:09:23.716993 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 20:09:23.961624 sshd[5019]: Connection closed by 139.178.68.195 port 37318
Jan 13 20:09:23.962514 sshd-session[5016]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:23.971162 systemd[1]: sshd@21-172.31.17.103:22-139.178.68.195:37318.service: Deactivated successfully.
Jan 13 20:09:23.975145 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 20:09:23.978100 systemd-logind[1920]: Session 22 logged out. Waiting for processes to exit.
Jan 13 20:09:23.981597 systemd-logind[1920]: Removed session 22.
Jan 13 20:09:29.004249 systemd[1]: Started sshd@22-172.31.17.103:22-139.178.68.195:50990.service - OpenSSH per-connection server daemon (139.178.68.195:50990).
Jan 13 20:09:29.193043 sshd[5030]: Accepted publickey for core from 139.178.68.195 port 50990 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:29.195690 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:29.204210 systemd-logind[1920]: New session 23 of user core.
Jan 13 20:09:29.214005 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 20:09:29.465791 sshd[5032]: Connection closed by 139.178.68.195 port 50990
Jan 13 20:09:29.466690 sshd-session[5030]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:29.472214 systemd[1]: sshd@22-172.31.17.103:22-139.178.68.195:50990.service: Deactivated successfully.
Jan 13 20:09:29.475949 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 20:09:29.481543 systemd-logind[1920]: Session 23 logged out. Waiting for processes to exit.
Jan 13 20:09:29.483660 systemd-logind[1920]: Removed session 23.
Jan 13 20:09:34.515173 systemd[1]: Started sshd@23-172.31.17.103:22-139.178.68.195:50994.service - OpenSSH per-connection server daemon (139.178.68.195:50994).
Jan 13 20:09:34.694149 sshd[5046]: Accepted publickey for core from 139.178.68.195 port 50994 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:34.696831 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:34.706855 systemd-logind[1920]: New session 24 of user core.
Jan 13 20:09:34.715004 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 13 20:09:34.966162 sshd[5048]: Connection closed by 139.178.68.195 port 50994
Jan 13 20:09:34.967037 sshd-session[5046]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:34.972414 systemd[1]: sshd@23-172.31.17.103:22-139.178.68.195:50994.service: Deactivated successfully.
Jan 13 20:09:34.972669 systemd-logind[1920]: Session 24 logged out. Waiting for processes to exit.
Jan 13 20:09:34.978837 systemd[1]: session-24.scope: Deactivated successfully.
Jan 13 20:09:34.983135 systemd-logind[1920]: Removed session 24.
Jan 13 20:09:40.006243 systemd[1]: Started sshd@24-172.31.17.103:22-139.178.68.195:42858.service - OpenSSH per-connection server daemon (139.178.68.195:42858).
Jan 13 20:09:40.198218 sshd[5061]: Accepted publickey for core from 139.178.68.195 port 42858 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:40.200804 sshd-session[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:40.213201 systemd-logind[1920]: New session 25 of user core.
Jan 13 20:09:40.220976 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 13 20:09:40.464147 sshd[5063]: Connection closed by 139.178.68.195 port 42858
Jan 13 20:09:40.465056 sshd-session[5061]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:40.471349 systemd[1]: sshd@24-172.31.17.103:22-139.178.68.195:42858.service: Deactivated successfully.
Jan 13 20:09:40.474664 systemd[1]: session-25.scope: Deactivated successfully.
Jan 13 20:09:40.476374 systemd-logind[1920]: Session 25 logged out. Waiting for processes to exit.
Jan 13 20:09:40.478337 systemd-logind[1920]: Removed session 25.
Jan 13 20:09:45.504236 systemd[1]: Started sshd@25-172.31.17.103:22-139.178.68.195:33338.service - OpenSSH per-connection server daemon (139.178.68.195:33338).
Jan 13 20:09:45.695608 sshd[5073]: Accepted publickey for core from 139.178.68.195 port 33338 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:45.698175 sshd-session[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:45.707903 systemd-logind[1920]: New session 26 of user core.
Jan 13 20:09:45.714991 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 13 20:09:45.956644 sshd[5075]: Connection closed by 139.178.68.195 port 33338
Jan 13 20:09:45.957536 sshd-session[5073]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:45.963489 systemd[1]: sshd@25-172.31.17.103:22-139.178.68.195:33338.service: Deactivated successfully.
Jan 13 20:09:45.968265 systemd[1]: session-26.scope: Deactivated successfully.
Jan 13 20:09:45.970449 systemd-logind[1920]: Session 26 logged out. Waiting for processes to exit.
Jan 13 20:09:45.972867 systemd-logind[1920]: Removed session 26.
Jan 13 20:09:45.995252 systemd[1]: Started sshd@26-172.31.17.103:22-139.178.68.195:33352.service - OpenSSH per-connection server daemon (139.178.68.195:33352).
Jan 13 20:09:46.183944 sshd[5085]: Accepted publickey for core from 139.178.68.195 port 33352 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:46.186502 sshd-session[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:46.194991 systemd-logind[1920]: New session 27 of user core.
Jan 13 20:09:46.205971 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 13 20:09:48.812824 containerd[1945]: time="2025-01-13T20:09:48.812401201Z" level=info msg="StopContainer for \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\" with timeout 30 (s)"
Jan 13 20:09:48.818454 containerd[1945]: time="2025-01-13T20:09:48.818147029Z" level=info msg="Stop container \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\" with signal terminated"
Jan 13 20:09:48.845776 systemd[1]: cri-containerd-5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d.scope: Deactivated successfully.
Jan 13 20:09:48.862993 containerd[1945]: time="2025-01-13T20:09:48.862922426Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:09:48.880704 containerd[1945]: time="2025-01-13T20:09:48.880596926Z" level=info msg="StopContainer for \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\" with timeout 2 (s)"
Jan 13 20:09:48.881390 containerd[1945]: time="2025-01-13T20:09:48.881177498Z" level=info msg="Stop container \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\" with signal terminated"
Jan 13 20:09:48.895553 systemd-networkd[1847]: lxc_health: Link DOWN
Jan 13 20:09:48.895575 systemd-networkd[1847]: lxc_health: Lost carrier
Jan 13 20:09:48.924823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d-rootfs.mount: Deactivated successfully.
Jan 13 20:09:48.937798 systemd[1]: cri-containerd-3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a.scope: Deactivated successfully.
Jan 13 20:09:48.938267 systemd[1]: cri-containerd-3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a.scope: Consumed 14.378s CPU time.
Jan 13 20:09:48.945770 kubelet[3183]: E0113 20:09:48.944840 3183 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:09:48.951609 containerd[1945]: time="2025-01-13T20:09:48.951479378Z" level=info msg="shim disconnected" id=5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d namespace=k8s.io
Jan 13 20:09:48.951609 containerd[1945]: time="2025-01-13T20:09:48.951590858Z" level=warning msg="cleaning up after shim disconnected" id=5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d namespace=k8s.io
Jan 13 20:09:48.951609 containerd[1945]: time="2025-01-13T20:09:48.951612098Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:09:48.990358 containerd[1945]: time="2025-01-13T20:09:48.990303074Z" level=info msg="StopContainer for \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\" returns successfully"
Jan 13 20:09:48.994862 containerd[1945]: time="2025-01-13T20:09:48.993141638Z" level=info msg="StopPodSandbox for \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\""
Jan 13 20:09:48.994862 containerd[1945]: time="2025-01-13T20:09:48.993223538Z" level=info msg="Container to stop \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:09:48.994672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a-rootfs.mount: Deactivated successfully.
Jan 13 20:09:49.002085 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7-shm.mount: Deactivated successfully.
Jan 13 20:09:49.006384 containerd[1945]: time="2025-01-13T20:09:49.006167134Z" level=info msg="shim disconnected" id=3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a namespace=k8s.io
Jan 13 20:09:49.006384 containerd[1945]: time="2025-01-13T20:09:49.006318970Z" level=warning msg="cleaning up after shim disconnected" id=3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a namespace=k8s.io
Jan 13 20:09:49.006384 containerd[1945]: time="2025-01-13T20:09:49.006340342Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:09:49.015291 systemd[1]: cri-containerd-edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7.scope: Deactivated successfully.
Jan 13 20:09:49.039702 containerd[1945]: time="2025-01-13T20:09:49.039650074Z" level=info msg="StopContainer for \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\" returns successfully"
Jan 13 20:09:49.040590 containerd[1945]: time="2025-01-13T20:09:49.040533346Z" level=info msg="StopPodSandbox for \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\""
Jan 13 20:09:49.040699 containerd[1945]: time="2025-01-13T20:09:49.040593874Z" level=info msg="Container to stop \"6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:09:49.040699 containerd[1945]: time="2025-01-13T20:09:49.040619554Z" level=info msg="Container to stop \"acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:09:49.040699 containerd[1945]: time="2025-01-13T20:09:49.040644754Z" level=info msg="Container to stop \"e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:09:49.040699 containerd[1945]: time="2025-01-13T20:09:49.040666558Z" level=info msg="Container to stop \"8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:09:49.040699 containerd[1945]: time="2025-01-13T20:09:49.040686346Z" level=info msg="Container to stop \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:09:49.046046 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e-shm.mount: Deactivated successfully.
Jan 13 20:09:49.054921 systemd[1]: cri-containerd-d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e.scope: Deactivated successfully.
Jan 13 20:09:49.082907 containerd[1945]: time="2025-01-13T20:09:49.082828175Z" level=info msg="shim disconnected" id=edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7 namespace=k8s.io
Jan 13 20:09:49.083319 containerd[1945]: time="2025-01-13T20:09:49.083255039Z" level=warning msg="cleaning up after shim disconnected" id=edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7 namespace=k8s.io
Jan 13 20:09:49.083543 containerd[1945]: time="2025-01-13T20:09:49.083514503Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:09:49.113362 containerd[1945]: time="2025-01-13T20:09:49.113155811Z" level=info msg="shim disconnected" id=d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e namespace=k8s.io
Jan 13 20:09:49.113599 containerd[1945]: time="2025-01-13T20:09:49.113355023Z" level=warning msg="cleaning up after shim disconnected" id=d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e namespace=k8s.io
Jan 13 20:09:49.113599 containerd[1945]: time="2025-01-13T20:09:49.113491727Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:09:49.118891 containerd[1945]: time="2025-01-13T20:09:49.118793759Z" level=info msg="TearDown network for sandbox \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\" successfully"
Jan 13 20:09:49.118891 containerd[1945]: time="2025-01-13T20:09:49.118853507Z" level=info msg="StopPodSandbox for \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\" returns successfully"
Jan 13 20:09:49.160448 containerd[1945]: time="2025-01-13T20:09:49.160383995Z" level=info msg="TearDown network for sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" successfully"
Jan 13 20:09:49.160448 containerd[1945]: time="2025-01-13T20:09:49.160437371Z" level=info msg="StopPodSandbox for \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" returns successfully"
Jan 13 20:09:49.174225 kubelet[3183]: I0113 20:09:49.172856 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nls5\" (UniqueName: \"kubernetes.io/projected/a6305281-7d60-45df-94a8-fae8f16ad031-kube-api-access-6nls5\") pod \"a6305281-7d60-45df-94a8-fae8f16ad031\" (UID: \"a6305281-7d60-45df-94a8-fae8f16ad031\") "
Jan 13 20:09:49.174225 kubelet[3183]: I0113 20:09:49.172935 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6305281-7d60-45df-94a8-fae8f16ad031-cilium-config-path\") pod \"a6305281-7d60-45df-94a8-fae8f16ad031\" (UID: \"a6305281-7d60-45df-94a8-fae8f16ad031\") "
Jan 13 20:09:49.180765 kubelet[3183]: I0113 20:09:49.180664 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6305281-7d60-45df-94a8-fae8f16ad031-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a6305281-7d60-45df-94a8-fae8f16ad031" (UID: "a6305281-7d60-45df-94a8-fae8f16ad031"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:09:49.187605 kubelet[3183]: I0113 20:09:49.187087 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6305281-7d60-45df-94a8-fae8f16ad031-kube-api-access-6nls5" (OuterVolumeSpecName: "kube-api-access-6nls5") pod "a6305281-7d60-45df-94a8-fae8f16ad031" (UID: "a6305281-7d60-45df-94a8-fae8f16ad031"). InnerVolumeSpecName "kube-api-access-6nls5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:09:49.189842 kubelet[3183]: I0113 20:09:49.189481 3183 scope.go:117] "RemoveContainer" containerID="5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d"
Jan 13 20:09:49.194498 containerd[1945]: time="2025-01-13T20:09:49.194432123Z" level=info msg="RemoveContainer for \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\""
Jan 13 20:09:49.208002 systemd[1]: Removed slice kubepods-besteffort-poda6305281_7d60_45df_94a8_fae8f16ad031.slice - libcontainer container kubepods-besteffort-poda6305281_7d60_45df_94a8_fae8f16ad031.slice.
Jan 13 20:09:49.213434 containerd[1945]: time="2025-01-13T20:09:49.210824423Z" level=info msg="RemoveContainer for \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\" returns successfully"
Jan 13 20:09:49.213592 kubelet[3183]: I0113 20:09:49.213025 3183 scope.go:117] "RemoveContainer" containerID="5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d"
Jan 13 20:09:49.213664 containerd[1945]: time="2025-01-13T20:09:49.213485315Z" level=error msg="ContainerStatus for \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\": not found"
Jan 13 20:09:49.214043 kubelet[3183]: E0113 20:09:49.213986 3183 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\": not found" containerID="5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d"
Jan 13 20:09:49.214629 kubelet[3183]: I0113 20:09:49.214371 3183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d"} err="failed to get container status \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f429ada37edf8628a58a5ea8aa429ace7fcb24d6c148438ba39d51e09e5da3d\": not found"
Jan 13 20:09:49.214629 kubelet[3183]: I0113 20:09:49.214585 3183 scope.go:117] "RemoveContainer" containerID="3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a"
Jan 13 20:09:49.217516 containerd[1945]: time="2025-01-13T20:09:49.217461275Z" level=info msg="RemoveContainer for \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\""
Jan 13 20:09:49.227810 containerd[1945]: time="2025-01-13T20:09:49.227599799Z" level=info msg="RemoveContainer for \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\" returns successfully"
Jan 13 20:09:49.228442 kubelet[3183]: I0113 20:09:49.228374 3183 scope.go:117] "RemoveContainer" containerID="6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae"
Jan 13 20:09:49.231114 containerd[1945]: time="2025-01-13T20:09:49.230988119Z" level=info msg="RemoveContainer for \"6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae\""
Jan 13 20:09:49.244198 containerd[1945]: time="2025-01-13T20:09:49.242891687Z" level=info msg="RemoveContainer for \"6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae\" returns successfully"
Jan 13 20:09:49.245326 kubelet[3183]: I0113 20:09:49.243269 3183 scope.go:117] "RemoveContainer" containerID="acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46"
Jan 13 20:09:49.247657 containerd[1945]: time="2025-01-13T20:09:49.247165343Z" level=info msg="RemoveContainer for \"acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46\""
Jan 13 20:09:49.255870 containerd[1945]: time="2025-01-13T20:09:49.255587616Z" level=info msg="RemoveContainer for \"acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46\" returns successfully"
Jan 13 20:09:49.256587 kubelet[3183]: I0113 20:09:49.256155 3183 scope.go:117] "RemoveContainer" containerID="8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093"
Jan 13 20:09:49.260963 containerd[1945]: time="2025-01-13T20:09:49.260900028Z" level=info msg="RemoveContainer for \"8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093\""
Jan 13 20:09:49.267963 containerd[1945]: time="2025-01-13T20:09:49.267888672Z" level=info msg="RemoveContainer for \"8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093\" returns successfully"
Jan 13 20:09:49.268652 kubelet[3183]: I0113 20:09:49.268598 3183 scope.go:117] "RemoveContainer" containerID="e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38"
Jan 13 20:09:49.273599 kubelet[3183]: I0113 20:09:49.273533 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-etc-cni-netd\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.273953 kubelet[3183]: I0113 20:09:49.273909 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-cilium-cgroup\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.274032 kubelet[3183]: I0113 20:09:49.273866 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:49.274106 kubelet[3183]: I0113 20:09:49.274077 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-host-proc-sys-kernel\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.274178 kubelet[3183]: I0113 20:09:49.274123 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-bpf-maps\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.278751 kubelet[3183]: I0113 20:09:49.274230 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-lib-modules\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.278751 kubelet[3183]: I0113 20:09:49.274313 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-xtables-lock\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.278751 kubelet[3183]: I0113 20:09:49.274373 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-cilium-run\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.278751 kubelet[3183]: I0113 20:09:49.274432 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-host-proc-sys-net\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.278751 kubelet[3183]: I0113 20:09:49.274478 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff006b47-b526-43e6-af32-133b4ae313cd-clustermesh-secrets\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.278751 kubelet[3183]: I0113 20:09:49.274536 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-hostproc\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.279173 containerd[1945]: time="2025-01-13T20:09:49.275924508Z" level=info msg="RemoveContainer for \"e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38\""
Jan 13 20:09:49.279237 kubelet[3183]: I0113 20:09:49.274574 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-cni-path\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.279237 kubelet[3183]: I0113 20:09:49.274636 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtv2l\" (UniqueName: \"kubernetes.io/projected/ff006b47-b526-43e6-af32-133b4ae313cd-kube-api-access-qtv2l\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.279237 kubelet[3183]: I0113 20:09:49.274700 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff006b47-b526-43e6-af32-133b4ae313cd-cilium-config-path\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.279237 kubelet[3183]: I0113 20:09:49.274789 3183 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff006b47-b526-43e6-af32-133b4ae313cd-hubble-tls\") pod \"ff006b47-b526-43e6-af32-133b4ae313cd\" (UID: \"ff006b47-b526-43e6-af32-133b4ae313cd\") "
Jan 13 20:09:49.279237 kubelet[3183]: I0113 20:09:49.274880 3183 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6nls5\" (UniqueName: \"kubernetes.io/projected/a6305281-7d60-45df-94a8-fae8f16ad031-kube-api-access-6nls5\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.279237 kubelet[3183]: I0113 20:09:49.274930 3183 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6305281-7d60-45df-94a8-fae8f16ad031-cilium-config-path\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.279237 kubelet[3183]: I0113 20:09:49.274954 3183 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-etc-cni-netd\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.279571 kubelet[3183]: I0113 20:09:49.275682 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:49.279571 kubelet[3183]: I0113 20:09:49.275759 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:49.279571 kubelet[3183]: I0113 20:09:49.275800 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:49.279571 kubelet[3183]: I0113 20:09:49.275841 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:49.279571 kubelet[3183]: I0113 20:09:49.275876 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:49.281653 kubelet[3183]: I0113 20:09:49.275910 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:49.281653 kubelet[3183]: I0113 20:09:49.275947 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-hostproc" (OuterVolumeSpecName: "hostproc") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:49.281653 kubelet[3183]: I0113 20:09:49.275982 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:49.281653 kubelet[3183]: I0113 20:09:49.276736 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-cni-path" (OuterVolumeSpecName: "cni-path") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:09:49.287488 containerd[1945]: time="2025-01-13T20:09:49.286403460Z" level=info msg="RemoveContainer for \"e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38\" returns successfully"
Jan 13 20:09:49.288407 kubelet[3183]: I0113 20:09:49.288296 3183 scope.go:117] "RemoveContainer" containerID="3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a"
Jan 13 20:09:49.288747 containerd[1945]: time="2025-01-13T20:09:49.288663264Z" level=error msg="ContainerStatus for \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\": not found"
Jan 13 20:09:49.289246 kubelet[3183]: E0113 20:09:49.289192 3183 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\": not found" containerID="3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a"
Jan 13 20:09:49.289346 kubelet[3183]: I0113 20:09:49.289251 3183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a"} err="failed to get container status \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3676b59299273262bb0d5615f535f614ce40ef087b7010f99aec5a3f2f0a3a7a\": not found"
Jan 13 20:09:49.289346 kubelet[3183]: I0113 20:09:49.289290 3183 scope.go:117] "RemoveContainer" containerID="6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae"
Jan 13 20:09:49.291758 containerd[1945]: time="2025-01-13T20:09:49.290113068Z" level=error msg="ContainerStatus for \"6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae\": not found"
Jan 13 20:09:49.291909 kubelet[3183]: E0113 20:09:49.291218 3183 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae\": not found" containerID="6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae"
Jan 13 20:09:49.291909 kubelet[3183]: I0113 20:09:49.291481 3183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae"} err="failed to get container status \"6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae\": rpc error: code = NotFound desc = an error occurred when try to find container \"6065b26be43be1e706f4712657444fbdea77bc1c3491aa45b94cc64a29855eae\": not found"
Jan 13 20:09:49.291909 kubelet[3183]: I0113 20:09:49.291799 3183 scope.go:117] "RemoveContainer" containerID="acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46"
Jan 13 20:09:49.293236 containerd[1945]: time="2025-01-13T20:09:49.292181508Z" level=error msg="ContainerStatus for \"acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46\": not found"
Jan 13 20:09:49.294978 kubelet[3183]: E0113 20:09:49.294108 3183 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46\": not found" containerID="acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46"
Jan 13 20:09:49.294978 kubelet[3183]: I0113 20:09:49.294206 3183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46"} err="failed to get container status \"acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46\": rpc error: code = NotFound desc = an error occurred when try to find container \"acaff3236e41db5b60d52b498c0d4125ba4b7aea431fdc242ae032666e90ad46\": not found"
Jan 13 20:09:49.294978 kubelet[3183]: I0113 20:09:49.294278 3183 scope.go:117] "RemoveContainer" containerID="8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093"
Jan 13 20:09:49.295948 containerd[1945]: time="2025-01-13T20:09:49.295380564Z" level=error msg="ContainerStatus for \"8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093\": not found"
Jan 13 20:09:49.296083 kubelet[3183]: E0113 20:09:49.295916 3183 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093\": not found" containerID="8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093"
Jan 13 20:09:49.296083 kubelet[3183]: I0113 20:09:49.296005 3183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093"} err="failed to get container status \"8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093\": rpc error: code = NotFound desc = an error occurred when try to find container \"8fd190d3a8d5a39a38abea2ddfe87769883602caeff3924376f90a3429cab093\": not found"
Jan 13 20:09:49.296083 kubelet[3183]: I0113 20:09:49.296082 3183 scope.go:117] "RemoveContainer" containerID="e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38"
Jan 13 20:09:49.297562 containerd[1945]: time="2025-01-13T20:09:49.296876292Z" level=error msg="ContainerStatus for \"e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38\": not found"
Jan 13 20:09:49.297710 kubelet[3183]: I0113 20:09:49.297120 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff006b47-b526-43e6-af32-133b4ae313cd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 20:09:49.297710 kubelet[3183]: E0113 20:09:49.297700 3183 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38\": not found" containerID="e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38"
Jan 13 20:09:49.297710 kubelet[3183]: I0113 20:09:49.297925 3183 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38"} err="failed to get container status \"e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5d1348f4fa0ce83e93db6291a9bd460eab26215e15531faa8403050765e1b38\": not found"
Jan 13 20:09:49.303899 kubelet[3183]: I0113 20:09:49.303838 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff006b47-b526-43e6-af32-133b4ae313cd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:09:49.304035 kubelet[3183]: I0113 20:09:49.303993 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff006b47-b526-43e6-af32-133b4ae313cd-kube-api-access-qtv2l" (OuterVolumeSpecName: "kube-api-access-qtv2l") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "kube-api-access-qtv2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:09:49.305007 kubelet[3183]: I0113 20:09:49.304954 3183 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff006b47-b526-43e6-af32-133b4ae313cd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ff006b47-b526-43e6-af32-133b4ae313cd" (UID: "ff006b47-b526-43e6-af32-133b4ae313cd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 13 20:09:49.376570 kubelet[3183]: I0113 20:09:49.376096 3183 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-lib-modules\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.376570 kubelet[3183]: I0113 20:09:49.376145 3183 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-bpf-maps\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.376570 kubelet[3183]: I0113 20:09:49.376167 3183 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-cilium-run\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.376570 kubelet[3183]: I0113 20:09:49.376189 3183 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-xtables-lock\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.376570 kubelet[3183]: I0113 20:09:49.376211 3183 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ff006b47-b526-43e6-af32-133b4ae313cd-clustermesh-secrets\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.376570 kubelet[3183]: I0113 20:09:49.376235 3183 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-host-proc-sys-net\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.376570 kubelet[3183]: I0113 20:09:49.376253 3183 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-hostproc\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.376570 kubelet[3183]: I0113 20:09:49.376273 3183 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ff006b47-b526-43e6-af32-133b4ae313cd-hubble-tls\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.377377 kubelet[3183]: I0113 20:09:49.376293 3183 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-cni-path\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.377377 kubelet[3183]: I0113 20:09:49.376311 3183 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qtv2l\" (UniqueName: \"kubernetes.io/projected/ff006b47-b526-43e6-af32-133b4ae313cd-kube-api-access-qtv2l\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.377377 kubelet[3183]: I0113 20:09:49.376331 3183 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ff006b47-b526-43e6-af32-133b4ae313cd-cilium-config-path\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.377377 kubelet[3183]: I0113 20:09:49.376351 3183 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-cilium-cgroup\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.377377 kubelet[3183]: I0113 20:09:49.376370 3183 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ff006b47-b526-43e6-af32-133b4ae313cd-host-proc-sys-kernel\") on node \"ip-172-31-17-103\" DevicePath \"\""
Jan 13 20:09:49.829507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7-rootfs.mount: Deactivated successfully.
Jan 13 20:09:49.829982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e-rootfs.mount: Deactivated successfully.
Jan 13 20:09:49.830128 systemd[1]: var-lib-kubelet-pods-a6305281\x2d7d60\x2d45df\x2d94a8\x2dfae8f16ad031-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6nls5.mount: Deactivated successfully.
Jan 13 20:09:49.830274 systemd[1]: var-lib-kubelet-pods-ff006b47\x2db526\x2d43e6\x2daf32\x2d133b4ae313cd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqtv2l.mount: Deactivated successfully.
Jan 13 20:09:49.830408 systemd[1]: var-lib-kubelet-pods-ff006b47\x2db526\x2d43e6\x2daf32\x2d133b4ae313cd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 20:09:49.830541 systemd[1]: var-lib-kubelet-pods-ff006b47\x2db526\x2d43e6\x2daf32\x2d133b4ae313cd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:09:50.221575 systemd[1]: Removed slice kubepods-burstable-podff006b47_b526_43e6_af32_133b4ae313cd.slice - libcontainer container kubepods-burstable-podff006b47_b526_43e6_af32_133b4ae313cd.slice.
Jan 13 20:09:50.222314 systemd[1]: kubepods-burstable-podff006b47_b526_43e6_af32_133b4ae313cd.slice: Consumed 14.526s CPU time.
Jan 13 20:09:50.732891 kubelet[3183]: I0113 20:09:50.732832 3183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6305281-7d60-45df-94a8-fae8f16ad031" path="/var/lib/kubelet/pods/a6305281-7d60-45df-94a8-fae8f16ad031/volumes"
Jan 13 20:09:50.733858 kubelet[3183]: I0113 20:09:50.733822 3183 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff006b47-b526-43e6-af32-133b4ae313cd" path="/var/lib/kubelet/pods/ff006b47-b526-43e6-af32-133b4ae313cd/volumes"
Jan 13 20:09:50.738635 sshd[5087]: Connection closed by 139.178.68.195 port 33352
Jan 13 20:09:50.739258 sshd-session[5085]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:50.745308 systemd-logind[1920]: Session 27 logged out. Waiting for processes to exit.
Jan 13 20:09:50.747768 systemd[1]: sshd@26-172.31.17.103:22-139.178.68.195:33352.service: Deactivated successfully.
Jan 13 20:09:50.753509 systemd[1]: session-27.scope: Deactivated successfully.
Jan 13 20:09:50.754324 systemd[1]: session-27.scope: Consumed 1.856s CPU time.
Jan 13 20:09:50.756346 systemd-logind[1920]: Removed session 27.
Jan 13 20:09:50.778267 systemd[1]: Started sshd@27-172.31.17.103:22-139.178.68.195:33364.service - OpenSSH per-connection server daemon (139.178.68.195:33364).
Jan 13 20:09:50.973293 sshd[5246]: Accepted publickey for core from 139.178.68.195 port 33364 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:50.975970 sshd-session[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:50.984209 systemd-logind[1920]: New session 28 of user core.
Jan 13 20:09:50.996998 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 13 20:09:51.099978 ntpd[1914]: Deleting interface #12 lxc_health, fe80::8488:caff:fe89:70b1%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs
Jan 13 20:09:51.100939 ntpd[1914]: 13 Jan 20:09:51 ntpd[1914]: Deleting interface #12 lxc_health, fe80::8488:caff:fe89:70b1%8#123, interface stats: received=0, sent=0, dropped=0, active_time=71 secs
Jan 13 20:09:52.062769 kubelet[3183]: I0113 20:09:52.061336 3183 setters.go:600] "Node became not ready" node="ip-172-31-17-103" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:09:52Z","lastTransitionTime":"2025-01-13T20:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:09:52.671777 sshd[5248]: Connection closed by 139.178.68.195 port 33364
Jan 13 20:09:52.672646 sshd-session[5246]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:52.683210 systemd-logind[1920]: Session 28 logged out. Waiting for processes to exit.
Jan 13 20:09:52.685522 systemd[1]: sshd@27-172.31.17.103:22-139.178.68.195:33364.service: Deactivated successfully.
Jan 13 20:09:52.693025 systemd[1]: session-28.scope: Deactivated successfully.
Jan 13 20:09:52.693536 systemd[1]: session-28.scope: Consumed 1.484s CPU time.
Jan 13 20:09:52.713614 systemd-logind[1920]: Removed session 28.
Jan 13 20:09:52.716136 kubelet[3183]: E0113 20:09:52.714952 3183 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff006b47-b526-43e6-af32-133b4ae313cd" containerName="mount-cgroup"
Jan 13 20:09:52.716136 kubelet[3183]: E0113 20:09:52.715007 3183 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff006b47-b526-43e6-af32-133b4ae313cd" containerName="cilium-agent"
Jan 13 20:09:52.716136 kubelet[3183]: E0113 20:09:52.715025 3183 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff006b47-b526-43e6-af32-133b4ae313cd" containerName="apply-sysctl-overwrites"
Jan 13 20:09:52.716136 kubelet[3183]: E0113 20:09:52.715043 3183 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff006b47-b526-43e6-af32-133b4ae313cd" containerName="mount-bpf-fs"
Jan 13 20:09:52.716136 kubelet[3183]: E0113 20:09:52.715058 3183 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff006b47-b526-43e6-af32-133b4ae313cd" containerName="clean-cilium-state"
Jan 13 20:09:52.716136 kubelet[3183]: E0113 20:09:52.715073 3183 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6305281-7d60-45df-94a8-fae8f16ad031" containerName="cilium-operator"
Jan 13 20:09:52.716136 kubelet[3183]: I0113 20:09:52.715115 3183 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff006b47-b526-43e6-af32-133b4ae313cd" containerName="cilium-agent"
Jan 13 20:09:52.716136 kubelet[3183]: I0113 20:09:52.715130 3183 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6305281-7d60-45df-94a8-fae8f16ad031" containerName="cilium-operator"
Jan 13 20:09:52.718697 systemd[1]: Started sshd@28-172.31.17.103:22-139.178.68.195:33378.service - OpenSSH per-connection server daemon (139.178.68.195:33378).
Jan 13 20:09:52.749472 systemd[1]: Created slice kubepods-burstable-pod6f55022a_3316_4c04_ae6b_f776d6ebcebc.slice - libcontainer container kubepods-burstable-pod6f55022a_3316_4c04_ae6b_f776d6ebcebc.slice.
Jan 13 20:09:52.798098 kubelet[3183]: I0113 20:09:52.798050 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f55022a-3316-4c04-ae6b-f776d6ebcebc-lib-modules\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.798915 kubelet[3183]: I0113 20:09:52.798325 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f55022a-3316-4c04-ae6b-f776d6ebcebc-clustermesh-secrets\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.798915 kubelet[3183]: I0113 20:09:52.798396 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f55022a-3316-4c04-ae6b-f776d6ebcebc-etc-cni-netd\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.798915 kubelet[3183]: I0113 20:09:52.798436 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f55022a-3316-4c04-ae6b-f776d6ebcebc-cilium-config-path\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.798915 kubelet[3183]: I0113 20:09:52.798471 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f55022a-3316-4c04-ae6b-f776d6ebcebc-hubble-tls\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.798915 kubelet[3183]: I0113 20:09:52.798510 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f55022a-3316-4c04-ae6b-f776d6ebcebc-host-proc-sys-kernel\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.798915 kubelet[3183]: I0113 20:09:52.798545 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f55022a-3316-4c04-ae6b-f776d6ebcebc-cni-path\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.799300 kubelet[3183]: I0113 20:09:52.798578 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6f55022a-3316-4c04-ae6b-f776d6ebcebc-cilium-ipsec-secrets\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.799300 kubelet[3183]: I0113 20:09:52.798623 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f55022a-3316-4c04-ae6b-f776d6ebcebc-host-proc-sys-net\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.799300 kubelet[3183]: I0113 20:09:52.798667 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f55022a-3316-4c04-ae6b-f776d6ebcebc-bpf-maps\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.799300 kubelet[3183]: I0113 20:09:52.798701 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f55022a-3316-4c04-ae6b-f776d6ebcebc-hostproc\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.799300 kubelet[3183]: I0113 20:09:52.798762 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f55022a-3316-4c04-ae6b-f776d6ebcebc-xtables-lock\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.799300 kubelet[3183]: I0113 20:09:52.798807 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f55022a-3316-4c04-ae6b-f776d6ebcebc-cilium-run\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.799560 kubelet[3183]: I0113 20:09:52.798845 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f55022a-3316-4c04-ae6b-f776d6ebcebc-cilium-cgroup\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.799560 kubelet[3183]: I0113 20:09:52.798880 3183 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7ljq\" (UniqueName: \"kubernetes.io/projected/6f55022a-3316-4c04-ae6b-f776d6ebcebc-kube-api-access-d7ljq\") pod \"cilium-shpx2\" (UID: \"6f55022a-3316-4c04-ae6b-f776d6ebcebc\") " pod="kube-system/cilium-shpx2"
Jan 13 20:09:52.987597 sshd[5259]: Accepted publickey for core from 139.178.68.195 port 33378 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:52.990569 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:53.002196 systemd-logind[1920]: New session 29 of user core.
Jan 13 20:09:53.011064 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 13 20:09:53.060697 containerd[1945]: time="2025-01-13T20:09:53.060618830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-shpx2,Uid:6f55022a-3316-4c04-ae6b-f776d6ebcebc,Namespace:kube-system,Attempt:0,}"
Jan 13 20:09:53.101773 containerd[1945]: time="2025-01-13T20:09:53.100932819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:09:53.101773 containerd[1945]: time="2025-01-13T20:09:53.101052243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:09:53.101773 containerd[1945]: time="2025-01-13T20:09:53.101090379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:53.101773 containerd[1945]: time="2025-01-13T20:09:53.101272887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:09:53.135052 systemd[1]: Started cri-containerd-a94cef374e8f09b5d9d4715d589a7749d9115843feaea577f3c2806e6a4758df.scope - libcontainer container a94cef374e8f09b5d9d4715d589a7749d9115843feaea577f3c2806e6a4758df.
Jan 13 20:09:53.141109 sshd[5265]: Connection closed by 139.178.68.195 port 33378
Jan 13 20:09:53.142391 sshd-session[5259]: pam_unix(sshd:session): session closed for user core
Jan 13 20:09:53.150330 systemd[1]: sshd@28-172.31.17.103:22-139.178.68.195:33378.service: Deactivated successfully.
Jan 13 20:09:53.156591 systemd[1]: session-29.scope: Deactivated successfully.
Jan 13 20:09:53.160211 systemd-logind[1920]: Session 29 logged out. Waiting for processes to exit.
Jan 13 20:09:53.183239 systemd[1]: Started sshd@29-172.31.17.103:22-139.178.68.195:33382.service - OpenSSH per-connection server daemon (139.178.68.195:33382).
Jan 13 20:09:53.186386 systemd-logind[1920]: Removed session 29.
Jan 13 20:09:53.221937 containerd[1945]: time="2025-01-13T20:09:53.221887047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-shpx2,Uid:6f55022a-3316-4c04-ae6b-f776d6ebcebc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a94cef374e8f09b5d9d4715d589a7749d9115843feaea577f3c2806e6a4758df\""
Jan 13 20:09:53.231416 containerd[1945]: time="2025-01-13T20:09:53.231362331Z" level=info msg="CreateContainer within sandbox \"a94cef374e8f09b5d9d4715d589a7749d9115843feaea577f3c2806e6a4758df\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:09:53.257897 containerd[1945]: time="2025-01-13T20:09:53.257761839Z" level=info msg="CreateContainer within sandbox \"a94cef374e8f09b5d9d4715d589a7749d9115843feaea577f3c2806e6a4758df\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"20cc2fae9038c0b0b104918e5feb9fb69e12fe20d92814be6436b57b2f0c03ba\""
Jan 13 20:09:53.259747 containerd[1945]: time="2025-01-13T20:09:53.259592187Z" level=info msg="StartContainer for \"20cc2fae9038c0b0b104918e5feb9fb69e12fe20d92814be6436b57b2f0c03ba\""
Jan 13 20:09:53.306059 systemd[1]: Started cri-containerd-20cc2fae9038c0b0b104918e5feb9fb69e12fe20d92814be6436b57b2f0c03ba.scope - libcontainer container 20cc2fae9038c0b0b104918e5feb9fb69e12fe20d92814be6436b57b2f0c03ba.
Jan 13 20:09:53.365097 containerd[1945]: time="2025-01-13T20:09:53.365024824Z" level=info msg="StartContainer for \"20cc2fae9038c0b0b104918e5feb9fb69e12fe20d92814be6436b57b2f0c03ba\" returns successfully"
Jan 13 20:09:53.378794 systemd[1]: cri-containerd-20cc2fae9038c0b0b104918e5feb9fb69e12fe20d92814be6436b57b2f0c03ba.scope: Deactivated successfully.
Jan 13 20:09:53.393377 sshd[5306]: Accepted publickey for core from 139.178.68.195 port 33382 ssh2: RSA SHA256:dyUqkUNC7j9I+iFvInUHdwtzQ+aO/14TB1/ljGdoY9k
Jan 13 20:09:53.398078 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:09:53.410694 systemd-logind[1920]: New session 30 of user core.
Jan 13 20:09:53.418022 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 13 20:09:53.442662 containerd[1945]: time="2025-01-13T20:09:53.442584532Z" level=info msg="shim disconnected" id=20cc2fae9038c0b0b104918e5feb9fb69e12fe20d92814be6436b57b2f0c03ba namespace=k8s.io
Jan 13 20:09:53.443229 containerd[1945]: time="2025-01-13T20:09:53.442785976Z" level=warning msg="cleaning up after shim disconnected" id=20cc2fae9038c0b0b104918e5feb9fb69e12fe20d92814be6436b57b2f0c03ba namespace=k8s.io
Jan 13 20:09:53.443229 containerd[1945]: time="2025-01-13T20:09:53.442809628Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:09:53.946602 kubelet[3183]: E0113 20:09:53.946459 3183 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:09:54.230264 containerd[1945]: time="2025-01-13T20:09:54.229974160Z" level=info msg="CreateContainer within sandbox \"a94cef374e8f09b5d9d4715d589a7749d9115843feaea577f3c2806e6a4758df\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:09:54.260334 containerd[1945]: time="2025-01-13T20:09:54.260271736Z" level=info msg="CreateContainer within sandbox \"a94cef374e8f09b5d9d4715d589a7749d9115843feaea577f3c2806e6a4758df\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"acf5f3a243c22a166fedea8fabe221bbd09dbc72a4e32f1f1fedafe67a8908cd\""
Jan 13 20:09:54.261864 containerd[1945]: time="2025-01-13T20:09:54.261506332Z" level=info msg="StartContainer for \"acf5f3a243c22a166fedea8fabe221bbd09dbc72a4e32f1f1fedafe67a8908cd\""
Jan 13 20:09:54.317025 systemd[1]: Started cri-containerd-acf5f3a243c22a166fedea8fabe221bbd09dbc72a4e32f1f1fedafe67a8908cd.scope - libcontainer container acf5f3a243c22a166fedea8fabe221bbd09dbc72a4e32f1f1fedafe67a8908cd.
Jan 13 20:09:54.365662 containerd[1945]: time="2025-01-13T20:09:54.365587721Z" level=info msg="StartContainer for \"acf5f3a243c22a166fedea8fabe221bbd09dbc72a4e32f1f1fedafe67a8908cd\" returns successfully"
Jan 13 20:09:54.377331 systemd[1]: cri-containerd-acf5f3a243c22a166fedea8fabe221bbd09dbc72a4e32f1f1fedafe67a8908cd.scope: Deactivated successfully.
Jan 13 20:09:54.429271 containerd[1945]: time="2025-01-13T20:09:54.429193289Z" level=info msg="shim disconnected" id=acf5f3a243c22a166fedea8fabe221bbd09dbc72a4e32f1f1fedafe67a8908cd namespace=k8s.io
Jan 13 20:09:54.431100 containerd[1945]: time="2025-01-13T20:09:54.430775801Z" level=warning msg="cleaning up after shim disconnected" id=acf5f3a243c22a166fedea8fabe221bbd09dbc72a4e32f1f1fedafe67a8908cd namespace=k8s.io
Jan 13 20:09:54.431100 containerd[1945]: time="2025-01-13T20:09:54.430843397Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:09:54.910793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acf5f3a243c22a166fedea8fabe221bbd09dbc72a4e32f1f1fedafe67a8908cd-rootfs.mount: Deactivated successfully.
Jan 13 20:09:55.236660 containerd[1945]: time="2025-01-13T20:09:55.235786157Z" level=info msg="CreateContainer within sandbox \"a94cef374e8f09b5d9d4715d589a7749d9115843feaea577f3c2806e6a4758df\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:09:55.272687 containerd[1945]: time="2025-01-13T20:09:55.272569973Z" level=info msg="CreateContainer within sandbox \"a94cef374e8f09b5d9d4715d589a7749d9115843feaea577f3c2806e6a4758df\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"40333ba22259d8c38363140b369e1950c09e0a2b950860d2bbae6caa463d01b9\"" Jan 13 20:09:55.274795 containerd[1945]: time="2025-01-13T20:09:55.273785045Z" level=info msg="StartContainer for \"40333ba22259d8c38363140b369e1950c09e0a2b950860d2bbae6caa463d01b9\"" Jan 13 20:09:55.328324 systemd[1]: Started cri-containerd-40333ba22259d8c38363140b369e1950c09e0a2b950860d2bbae6caa463d01b9.scope - libcontainer container 40333ba22259d8c38363140b369e1950c09e0a2b950860d2bbae6caa463d01b9. Jan 13 20:09:55.388087 containerd[1945]: time="2025-01-13T20:09:55.388016730Z" level=info msg="StartContainer for \"40333ba22259d8c38363140b369e1950c09e0a2b950860d2bbae6caa463d01b9\" returns successfully" Jan 13 20:09:55.391520 systemd[1]: cri-containerd-40333ba22259d8c38363140b369e1950c09e0a2b950860d2bbae6caa463d01b9.scope: Deactivated successfully. Jan 13 20:09:55.444903 containerd[1945]: time="2025-01-13T20:09:55.444694770Z" level=info msg="shim disconnected" id=40333ba22259d8c38363140b369e1950c09e0a2b950860d2bbae6caa463d01b9 namespace=k8s.io Jan 13 20:09:55.445156 containerd[1945]: time="2025-01-13T20:09:55.444885186Z" level=warning msg="cleaning up after shim disconnected" id=40333ba22259d8c38363140b369e1950c09e0a2b950860d2bbae6caa463d01b9 namespace=k8s.io Jan 13 20:09:55.445156 containerd[1945]: time="2025-01-13T20:09:55.444933210Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:55.910616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40333ba22259d8c38363140b369e1950c09e0a2b950860d2bbae6caa463d01b9-rootfs.mount: Deactivated successfully. Jan 13 20:09:56.242948 containerd[1945]: time="2025-01-13T20:09:56.241540362Z" level=info msg="CreateContainer within sandbox \"a94cef374e8f09b5d9d4715d589a7749d9115843feaea577f3c2806e6a4758df\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:09:56.282086 containerd[1945]: time="2025-01-13T20:09:56.281318046Z" level=info msg="CreateContainer within sandbox \"a94cef374e8f09b5d9d4715d589a7749d9115843feaea577f3c2806e6a4758df\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bef16a1cc5770094bd1d7fdf3c656150b9bdd448ca3e117c614246c16dbf23f0\"" Jan 13 20:09:56.283182 containerd[1945]: time="2025-01-13T20:09:56.282988386Z" level=info msg="StartContainer for \"bef16a1cc5770094bd1d7fdf3c656150b9bdd448ca3e117c614246c16dbf23f0\"" Jan 13 20:09:56.339072 systemd[1]: Started cri-containerd-bef16a1cc5770094bd1d7fdf3c656150b9bdd448ca3e117c614246c16dbf23f0.scope - libcontainer container bef16a1cc5770094bd1d7fdf3c656150b9bdd448ca3e117c614246c16dbf23f0. Jan 13 20:09:56.383648 systemd[1]: cri-containerd-bef16a1cc5770094bd1d7fdf3c656150b9bdd448ca3e117c614246c16dbf23f0.scope: Deactivated successfully. 
Jan 13 20:09:56.388760 containerd[1945]: time="2025-01-13T20:09:56.388240135Z" level=info msg="StartContainer for \"bef16a1cc5770094bd1d7fdf3c656150b9bdd448ca3e117c614246c16dbf23f0\" returns successfully" Jan 13 20:09:56.440933 containerd[1945]: time="2025-01-13T20:09:56.440841103Z" level=info msg="shim disconnected" id=bef16a1cc5770094bd1d7fdf3c656150b9bdd448ca3e117c614246c16dbf23f0 namespace=k8s.io Jan 13 20:09:56.440933 containerd[1945]: time="2025-01-13T20:09:56.440922127Z" level=warning msg="cleaning up after shim disconnected" id=bef16a1cc5770094bd1d7fdf3c656150b9bdd448ca3e117c614246c16dbf23f0 namespace=k8s.io Jan 13 20:09:56.441244 containerd[1945]: time="2025-01-13T20:09:56.440943271Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:09:56.910642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bef16a1cc5770094bd1d7fdf3c656150b9bdd448ca3e117c614246c16dbf23f0-rootfs.mount: Deactivated successfully. Jan 13 20:09:57.249188 containerd[1945]: time="2025-01-13T20:09:57.248888419Z" level=info msg="CreateContainer within sandbox \"a94cef374e8f09b5d9d4715d589a7749d9115843feaea577f3c2806e6a4758df\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:09:57.299082 containerd[1945]: time="2025-01-13T20:09:57.299007775Z" level=info msg="CreateContainer within sandbox \"a94cef374e8f09b5d9d4715d589a7749d9115843feaea577f3c2806e6a4758df\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b29d97c948ffb5caa3c196bf6b8ca8250a1ab421cf8881b219f0180a42277abf\"" Jan 13 20:09:57.300177 containerd[1945]: time="2025-01-13T20:09:57.300072631Z" level=info msg="StartContainer for \"b29d97c948ffb5caa3c196bf6b8ca8250a1ab421cf8881b219f0180a42277abf\"" Jan 13 20:09:57.362046 systemd[1]: Started cri-containerd-b29d97c948ffb5caa3c196bf6b8ca8250a1ab421cf8881b219f0180a42277abf.scope - libcontainer container b29d97c948ffb5caa3c196bf6b8ca8250a1ab421cf8881b219f0180a42277abf. 
Jan 13 20:09:57.423083 containerd[1945]: time="2025-01-13T20:09:57.422874332Z" level=info msg="StartContainer for \"b29d97c948ffb5caa3c196bf6b8ca8250a1ab421cf8881b219f0180a42277abf\" returns successfully" Jan 13 20:09:58.216767 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 13 20:09:58.283049 kubelet[3183]: I0113 20:09:58.282958 3183 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-shpx2" podStartSLOduration=6.282933464 podStartE2EDuration="6.282933464s" podCreationTimestamp="2025-01-13 20:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:09:58.282754952 +0000 UTC m=+119.839789556" watchObservedRunningTime="2025-01-13 20:09:58.282933464 +0000 UTC m=+119.839968044" Jan 13 20:09:58.730201 containerd[1945]: time="2025-01-13T20:09:58.729740087Z" level=info msg="StopPodSandbox for \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\"" Jan 13 20:09:58.730201 containerd[1945]: time="2025-01-13T20:09:58.729884411Z" level=info msg="TearDown network for sandbox \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\" successfully" Jan 13 20:09:58.730201 containerd[1945]: time="2025-01-13T20:09:58.729905807Z" level=info msg="StopPodSandbox for \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\" returns successfully" Jan 13 20:09:58.732019 containerd[1945]: time="2025-01-13T20:09:58.731012075Z" level=info msg="RemovePodSandbox for \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\"" Jan 13 20:09:58.732019 containerd[1945]: time="2025-01-13T20:09:58.731075543Z" level=info msg="Forcibly stopping sandbox \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\"" Jan 13 20:09:58.732019 containerd[1945]: time="2025-01-13T20:09:58.731203667Z" level=info msg="TearDown network for sandbox \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\" successfully" Jan 13 20:09:58.738025 containerd[1945]: time="2025-01-13T20:09:58.737931839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:09:58.738201 containerd[1945]: time="2025-01-13T20:09:58.738069887Z" level=info msg="RemovePodSandbox \"edd516c331e5455227eb0fb0af69d1235773604262f74ab5061f6660ead94ec7\" returns successfully" Jan 13 20:09:58.739110 containerd[1945]: time="2025-01-13T20:09:58.739023683Z" level=info msg="StopPodSandbox for \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\"" Jan 13 20:09:58.739286 containerd[1945]: time="2025-01-13T20:09:58.739170791Z" level=info msg="TearDown network for sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" successfully" Jan 13 20:09:58.739286 containerd[1945]: time="2025-01-13T20:09:58.739197467Z" level=info msg="StopPodSandbox for \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" returns successfully" Jan 13 20:09:58.739836 containerd[1945]: time="2025-01-13T20:09:58.739795283Z" level=info msg="RemovePodSandbox for \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\"" Jan 13 20:09:58.739942 containerd[1945]: time="2025-01-13T20:09:58.739841951Z" level=info msg="Forcibly stopping sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\"" Jan 13 20:09:58.740001 containerd[1945]: time="2025-01-13T20:09:58.739931435Z" level=info msg="TearDown network for sandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" successfully" Jan 13 20:09:58.747357 containerd[1945]: time="2025-01-13T20:09:58.747251915Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:09:58.747357 containerd[1945]: time="2025-01-13T20:09:58.747342107Z" level=info msg="RemovePodSandbox \"d01dacad13705fc4ff19d11ac0fdff33aceeaac2dcfb1134c715f7f68ac1fd2e\" returns successfully" Jan 13 20:10:02.643780 systemd-networkd[1847]: lxc_health: Link UP Jan 13 20:10:02.655329 (udev-worker)[6110]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:10:02.664402 systemd-networkd[1847]: lxc_health: Gained carrier Jan 13 20:10:03.800039 systemd-networkd[1847]: lxc_health: Gained IPv6LL Jan 13 20:10:04.708064 systemd[1]: run-containerd-runc-k8s.io-b29d97c948ffb5caa3c196bf6b8ca8250a1ab421cf8881b219f0180a42277abf-runc.NMekiR.mount: Deactivated successfully. Jan 13 20:10:06.100487 ntpd[1914]: Listen normally on 15 lxc_health [fe80::245b:a7ff:fef9:c1c6%14]:123 Jan 13 20:10:06.101028 ntpd[1914]: 13 Jan 20:10:06 ntpd[1914]: Listen normally on 15 lxc_health [fe80::245b:a7ff:fef9:c1c6%14]:123 Jan 13 20:10:09.489061 systemd[1]: run-containerd-runc-k8s.io-b29d97c948ffb5caa3c196bf6b8ca8250a1ab421cf8881b219f0180a42277abf-runc.xQfa2M.mount: Deactivated successfully. Jan 13 20:10:09.630503 sshd[5364]: Connection closed by 139.178.68.195 port 33382 Jan 13 20:10:09.633055 sshd-session[5306]: pam_unix(sshd:session): session closed for user core Jan 13 20:10:09.640695 systemd[1]: sshd@29-172.31.17.103:22-139.178.68.195:33382.service: Deactivated successfully. Jan 13 20:10:09.647357 systemd[1]: session-30.scope: Deactivated successfully. Jan 13 20:10:09.654548 systemd-logind[1920]: Session 30 logged out. Waiting for processes to exit. Jan 13 20:10:09.657454 systemd-logind[1920]: Removed session 30. Jan 13 20:10:23.416963 systemd[1]: cri-containerd-4c44fc89694fec028d1a5fd046253d697f24ce5c3785d1da96bbc0a92edcc9c1.scope: Deactivated successfully. 
Jan 13 20:10:23.417439 systemd[1]: cri-containerd-4c44fc89694fec028d1a5fd046253d697f24ce5c3785d1da96bbc0a92edcc9c1.scope: Consumed 3.912s CPU time, 19.8M memory peak, 0B memory swap peak. Jan 13 20:10:23.457079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c44fc89694fec028d1a5fd046253d697f24ce5c3785d1da96bbc0a92edcc9c1-rootfs.mount: Deactivated successfully. Jan 13 20:10:23.466370 containerd[1945]: time="2025-01-13T20:10:23.466247337Z" level=info msg="shim disconnected" id=4c44fc89694fec028d1a5fd046253d697f24ce5c3785d1da96bbc0a92edcc9c1 namespace=k8s.io Jan 13 20:10:23.467013 containerd[1945]: time="2025-01-13T20:10:23.466922361Z" level=warning msg="cleaning up after shim disconnected" id=4c44fc89694fec028d1a5fd046253d697f24ce5c3785d1da96bbc0a92edcc9c1 namespace=k8s.io Jan 13 20:10:23.467013 containerd[1945]: time="2025-01-13T20:10:23.466950777Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:24.329556 kubelet[3183]: I0113 20:10:24.329217 3183 scope.go:117] "RemoveContainer" containerID="4c44fc89694fec028d1a5fd046253d697f24ce5c3785d1da96bbc0a92edcc9c1" Jan 13 20:10:24.333039 containerd[1945]: time="2025-01-13T20:10:24.332985682Z" level=info msg="CreateContainer within sandbox \"66644a51cff52db31a99770dc11e278bc3fb342513374d830c5c32d7e3ccfa8c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 13 20:10:24.349105 containerd[1945]: time="2025-01-13T20:10:24.349042366Z" level=info msg="CreateContainer within sandbox \"66644a51cff52db31a99770dc11e278bc3fb342513374d830c5c32d7e3ccfa8c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b43806ab378cd11ce44c5a1bc15d1c76a944f5dbebe2f094cc73dd201da531f7\"" Jan 13 20:10:24.350192 containerd[1945]: time="2025-01-13T20:10:24.349777786Z" level=info msg="StartContainer for \"b43806ab378cd11ce44c5a1bc15d1c76a944f5dbebe2f094cc73dd201da531f7\"" Jan 13 20:10:24.405028 systemd[1]: Started cri-containerd-b43806ab378cd11ce44c5a1bc15d1c76a944f5dbebe2f094cc73dd201da531f7.scope - libcontainer container b43806ab378cd11ce44c5a1bc15d1c76a944f5dbebe2f094cc73dd201da531f7. Jan 13 20:10:24.456902 systemd[1]: run-containerd-runc-k8s.io-b43806ab378cd11ce44c5a1bc15d1c76a944f5dbebe2f094cc73dd201da531f7-runc.gJMxsm.mount: Deactivated successfully. Jan 13 20:10:24.480983 containerd[1945]: time="2025-01-13T20:10:24.480769582Z" level=info msg="StartContainer for \"b43806ab378cd11ce44c5a1bc15d1c76a944f5dbebe2f094cc73dd201da531f7\" returns successfully" Jan 13 20:10:29.122169 systemd[1]: cri-containerd-68489e8e6b0357f97e5b7b64b4118d529a4c3818a3917b097d9372b7acf07f03.scope: Deactivated successfully. Jan 13 20:10:29.122618 systemd[1]: cri-containerd-68489e8e6b0357f97e5b7b64b4118d529a4c3818a3917b097d9372b7acf07f03.scope: Consumed 3.605s CPU time, 15.6M memory peak, 0B memory swap peak. Jan 13 20:10:29.163141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68489e8e6b0357f97e5b7b64b4118d529a4c3818a3917b097d9372b7acf07f03-rootfs.mount: Deactivated successfully. 
Jan 13 20:10:29.167695 containerd[1945]: time="2025-01-13T20:10:29.167580902Z" level=info msg="shim disconnected" id=68489e8e6b0357f97e5b7b64b4118d529a4c3818a3917b097d9372b7acf07f03 namespace=k8s.io Jan 13 20:10:29.167695 containerd[1945]: time="2025-01-13T20:10:29.167675222Z" level=warning msg="cleaning up after shim disconnected" id=68489e8e6b0357f97e5b7b64b4118d529a4c3818a3917b097d9372b7acf07f03 namespace=k8s.io Jan 13 20:10:29.167695 containerd[1945]: time="2025-01-13T20:10:29.167696066Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:10:29.347966 kubelet[3183]: I0113 20:10:29.347912 3183 scope.go:117] "RemoveContainer" containerID="68489e8e6b0357f97e5b7b64b4118d529a4c3818a3917b097d9372b7acf07f03" Jan 13 20:10:29.351098 containerd[1945]: time="2025-01-13T20:10:29.351031815Z" level=info msg="CreateContainer within sandbox \"b373230340c9022df89211c63a68ff9443f1379796c4e716e3edbf9acea6552c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 13 20:10:29.373087 containerd[1945]: time="2025-01-13T20:10:29.372827583Z" level=info msg="CreateContainer within sandbox \"b373230340c9022df89211c63a68ff9443f1379796c4e716e3edbf9acea6552c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"02bf8f74551c699588d7d1c189fc92f6d31d09d7ebe535078c467a17f5a90267\"" Jan 13 20:10:29.374143 containerd[1945]: time="2025-01-13T20:10:29.373805007Z" level=info msg="StartContainer for \"02bf8f74551c699588d7d1c189fc92f6d31d09d7ebe535078c467a17f5a90267\"" Jan 13 20:10:29.428028 systemd[1]: Started cri-containerd-02bf8f74551c699588d7d1c189fc92f6d31d09d7ebe535078c467a17f5a90267.scope - libcontainer container 02bf8f74551c699588d7d1c189fc92f6d31d09d7ebe535078c467a17f5a90267. Jan 13 20:10:29.491364 containerd[1945]: time="2025-01-13T20:10:29.491178675Z" level=info msg="StartContainer for \"02bf8f74551c699588d7d1c189fc92f6d31d09d7ebe535078c467a17f5a90267\" returns successfully" Jan 13 20:10:31.503648 kubelet[3183]: E0113 20:10:31.502707 3183 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-103?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"