Jan 17 12:00:32.244577 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 17 12:00:32.244630 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 17 10:42:25 -00 2025
Jan 17 12:00:32.244657 kernel: KASLR disabled due to lack of seed
Jan 17 12:00:32.244675 kernel: efi: EFI v2.7 by EDK II
Jan 17 12:00:32.244691 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Jan 17 12:00:32.244708 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:00:32.244727 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 17 12:00:32.244743 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 17 12:00:32.244759 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 17 12:00:32.244775 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 17 12:00:32.244798 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 17 12:00:32.244814 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 17 12:00:32.244883 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 17 12:00:32.244904 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 17 12:00:32.244926 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 17 12:00:32.244951 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 17 12:00:32.244970 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 17 12:00:32.244988 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 17 12:00:32.245006 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 17 12:00:32.245022 kernel: printk: bootconsole [uart0] enabled
Jan 17 12:00:32.245039 kernel: NUMA: Failed to initialise from firmware
Jan 17 12:00:32.245058 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 17 12:00:32.245075 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 17 12:00:32.245092 kernel: Zone ranges:
Jan 17 12:00:32.245110 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 17 12:00:32.245127 kernel: DMA32 empty
Jan 17 12:00:32.245151 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 17 12:00:32.245169 kernel: Movable zone start for each node
Jan 17 12:00:32.245186 kernel: Early memory node ranges
Jan 17 12:00:32.245204 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 17 12:00:32.245221 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 17 12:00:32.245238 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 17 12:00:32.245255 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 17 12:00:32.245273 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 17 12:00:32.245290 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 17 12:00:32.245307 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 17 12:00:32.245324 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 17 12:00:32.245341 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 17 12:00:32.245365 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 17 12:00:32.245383 kernel: psci: probing for conduit method from ACPI.
Jan 17 12:00:32.245407 kernel: psci: PSCIv1.0 detected in firmware.
Jan 17 12:00:32.245425 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 12:00:32.245443 kernel: psci: Trusted OS migration not required
Jan 17 12:00:32.245466 kernel: psci: SMC Calling Convention v1.1
Jan 17 12:00:32.245484 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 17 12:00:32.245501 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 17 12:00:32.245521 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 17 12:00:32.245539 kernel: Detected PIPT I-cache on CPU0
Jan 17 12:00:32.245558 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 12:00:32.245576 kernel: CPU features: detected: Spectre-v2
Jan 17 12:00:32.245594 kernel: CPU features: detected: Spectre-v3a
Jan 17 12:00:32.245612 kernel: CPU features: detected: Spectre-BHB
Jan 17 12:00:32.245631 kernel: CPU features: detected: ARM erratum 1742098
Jan 17 12:00:32.245649 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 17 12:00:32.245678 kernel: alternatives: applying boot alternatives
Jan 17 12:00:32.245701 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 12:00:32.245721 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:00:32.245740 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:00:32.245758 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:00:32.245777 kernel: Fallback order for Node 0: 0
Jan 17 12:00:32.245795 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 17 12:00:32.245814 kernel: Policy zone: Normal
Jan 17 12:00:32.247911 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:00:32.247948 kernel: software IO TLB: area num 2.
Jan 17 12:00:32.247968 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 17 12:00:32.248001 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Jan 17 12:00:32.248020 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:00:32.248039 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:00:32.248058 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:00:32.248077 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:00:32.248096 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:00:32.248114 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:00:32.248133 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:00:32.248153 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:00:32.248171 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 12:00:32.248189 kernel: GICv3: 96 SPIs implemented
Jan 17 12:00:32.248216 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 12:00:32.248235 kernel: Root IRQ handler: gic_handle_irq
Jan 17 12:00:32.248253 kernel: GICv3: GICv3 features: 16 PPIs
Jan 17 12:00:32.248271 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 17 12:00:32.248289 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 17 12:00:32.248307 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 17 12:00:32.248327 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 17 12:00:32.248345 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 17 12:00:32.248363 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 17 12:00:32.248382 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 17 12:00:32.248400 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:00:32.248418 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 17 12:00:32.248444 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 17 12:00:32.248463 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 17 12:00:32.248482 kernel: Console: colour dummy device 80x25
Jan 17 12:00:32.248502 kernel: printk: console [tty1] enabled
Jan 17 12:00:32.248522 kernel: ACPI: Core revision 20230628
Jan 17 12:00:32.248541 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 17 12:00:32.248561 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:00:32.248580 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:00:32.248599 kernel: landlock: Up and running.
Jan 17 12:00:32.248625 kernel: SELinux: Initializing.
Jan 17 12:00:32.248645 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:00:32.248664 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:00:32.248682 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:00:32.248701 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:00:32.248720 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:00:32.248740 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:00:32.248759 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 17 12:00:32.248778 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 17 12:00:32.248803 kernel: Remapping and enabling EFI services.
Jan 17 12:00:32.248822 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:00:32.248903 kernel: Detected PIPT I-cache on CPU1
Jan 17 12:00:32.248924 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 17 12:00:32.248943 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 17 12:00:32.248962 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 17 12:00:32.248981 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:00:32.249000 kernel: SMP: Total of 2 processors activated.
Jan 17 12:00:32.249019 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 12:00:32.249047 kernel: CPU features: detected: 32-bit EL1 Support
Jan 17 12:00:32.249067 kernel: CPU features: detected: CRC32 instructions
Jan 17 12:00:32.249086 kernel: CPU: All CPU(s) started at EL1
Jan 17 12:00:32.249121 kernel: alternatives: applying system-wide alternatives
Jan 17 12:00:32.249146 kernel: devtmpfs: initialized
Jan 17 12:00:32.249165 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:00:32.249184 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:00:32.249205 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:00:32.249225 kernel: SMBIOS 3.0.0 present.
Jan 17 12:00:32.249244 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 17 12:00:32.249271 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:00:32.249290 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 12:00:32.249310 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 12:00:32.249329 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 12:00:32.249348 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:00:32.249367 kernel: audit: type=2000 audit(0.327:1): state=initialized audit_enabled=0 res=1
Jan 17 12:00:32.249387 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:00:32.249412 kernel: cpuidle: using governor menu
Jan 17 12:00:32.249431 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 12:00:32.249450 kernel: ASID allocator initialised with 65536 entries
Jan 17 12:00:32.249469 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:00:32.249489 kernel: Serial: AMBA PL011 UART driver
Jan 17 12:00:32.249508 kernel: Modules: 17520 pages in range for non-PLT usage
Jan 17 12:00:32.249527 kernel: Modules: 509040 pages in range for PLT usage
Jan 17 12:00:32.249547 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:00:32.249566 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:00:32.249590 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 12:00:32.249610 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 12:00:32.249630 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:00:32.249649 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:00:32.249669 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 12:00:32.249689 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 12:00:32.249708 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:00:32.249726 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:00:32.249746 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:00:32.249771 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:00:32.249792 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:00:32.249811 kernel: ACPI: Interpreter enabled
Jan 17 12:00:32.251906 kernel: ACPI: Using GIC for interrupt routing
Jan 17 12:00:32.251942 kernel: ACPI: MCFG table detected, 1 entries
Jan 17 12:00:32.251962 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 17 12:00:32.252342 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:00:32.252631 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 12:00:32.252940 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 12:00:32.253191 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 17 12:00:32.253430 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 17 12:00:32.253465 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 17 12:00:32.253486 kernel: acpiphp: Slot [1] registered
Jan 17 12:00:32.253506 kernel: acpiphp: Slot [2] registered
Jan 17 12:00:32.253526 kernel: acpiphp: Slot [3] registered
Jan 17 12:00:32.253546 kernel: acpiphp: Slot [4] registered
Jan 17 12:00:32.253577 kernel: acpiphp: Slot [5] registered
Jan 17 12:00:32.253600 kernel: acpiphp: Slot [6] registered
Jan 17 12:00:32.253620 kernel: acpiphp: Slot [7] registered
Jan 17 12:00:32.253640 kernel: acpiphp: Slot [8] registered
Jan 17 12:00:32.253663 kernel: acpiphp: Slot [9] registered
Jan 17 12:00:32.253683 kernel: acpiphp: Slot [10] registered
Jan 17 12:00:32.253705 kernel: acpiphp: Slot [11] registered
Jan 17 12:00:32.253723 kernel: acpiphp: Slot [12] registered
Jan 17 12:00:32.253742 kernel: acpiphp: Slot [13] registered
Jan 17 12:00:32.253762 kernel: acpiphp: Slot [14] registered
Jan 17 12:00:32.253789 kernel: acpiphp: Slot [15] registered
Jan 17 12:00:32.253808 kernel: acpiphp: Slot [16] registered
Jan 17 12:00:32.255901 kernel: acpiphp: Slot [17] registered
Jan 17 12:00:32.255948 kernel: acpiphp: Slot [18] registered
Jan 17 12:00:32.255968 kernel: acpiphp: Slot [19] registered
Jan 17 12:00:32.255989 kernel: acpiphp: Slot [20] registered
Jan 17 12:00:32.256009 kernel: acpiphp: Slot [21] registered
Jan 17 12:00:32.256030 kernel: acpiphp: Slot [22] registered
Jan 17 12:00:32.256050 kernel: acpiphp: Slot [23] registered
Jan 17 12:00:32.256082 kernel: acpiphp: Slot [24] registered
Jan 17 12:00:32.256103 kernel: acpiphp: Slot [25] registered
Jan 17 12:00:32.256122 kernel: acpiphp: Slot [26] registered
Jan 17 12:00:32.256144 kernel: acpiphp: Slot [27] registered
Jan 17 12:00:32.256164 kernel: acpiphp: Slot [28] registered
Jan 17 12:00:32.256185 kernel: acpiphp: Slot [29] registered
Jan 17 12:00:32.256204 kernel: acpiphp: Slot [30] registered
Jan 17 12:00:32.256224 kernel: acpiphp: Slot [31] registered
Jan 17 12:00:32.256244 kernel: PCI host bridge to bus 0000:00
Jan 17 12:00:32.256599 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 17 12:00:32.256978 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 17 12:00:32.257204 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 17 12:00:32.257414 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 17 12:00:32.257712 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 17 12:00:32.260228 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 17 12:00:32.260500 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 17 12:00:32.260782 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 17 12:00:32.261136 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 17 12:00:32.261422 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 17 12:00:32.261750 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 17 12:00:32.264192 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 17 12:00:32.264449 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 17 12:00:32.264697 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 17 12:00:32.265001 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 17 12:00:32.265236 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 17 12:00:32.265454 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 17 12:00:32.265676 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 17 12:00:32.269963 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 17 12:00:32.270388 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 17 12:00:32.270651 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 17 12:00:32.271071 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 17 12:00:32.271318 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 17 12:00:32.271352 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 17 12:00:32.271373 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 17 12:00:32.271392 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 17 12:00:32.271415 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 17 12:00:32.271435 kernel: iommu: Default domain type: Translated
Jan 17 12:00:32.271454 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 12:00:32.271487 kernel: efivars: Registered efivars operations
Jan 17 12:00:32.271507 kernel: vgaarb: loaded
Jan 17 12:00:32.271527 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 12:00:32.271546 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:00:32.271566 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:00:32.271585 kernel: pnp: PnP ACPI init
Jan 17 12:00:32.271947 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 17 12:00:32.271990 kernel: pnp: PnP ACPI: found 1 devices
Jan 17 12:00:32.272023 kernel: NET: Registered PF_INET protocol family
Jan 17 12:00:32.272043 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:00:32.272063 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 12:00:32.272082 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:00:32.272103 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:00:32.272122 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 12:00:32.272142 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 12:00:32.272163 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:00:32.272183 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:00:32.272210 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:00:32.272231 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:00:32.272252 kernel: kvm [1]: HYP mode not available
Jan 17 12:00:32.272272 kernel: Initialise system trusted keyrings
Jan 17 12:00:32.272292 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 12:00:32.272312 kernel: Key type asymmetric registered
Jan 17 12:00:32.272333 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:00:32.272354 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 17 12:00:32.272374 kernel: io scheduler mq-deadline registered
Jan 17 12:00:32.272403 kernel: io scheduler kyber registered
Jan 17 12:00:32.272422 kernel: io scheduler bfq registered
Jan 17 12:00:32.272757 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 17 12:00:32.272804 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 17 12:00:32.272878 kernel: ACPI: button: Power Button [PWRB]
Jan 17 12:00:32.272906 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 17 12:00:32.272928 kernel: ACPI: button: Sleep Button [SLPB]
Jan 17 12:00:32.272948 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:00:32.272983 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 17 12:00:32.273293 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 17 12:00:32.273336 kernel: printk: console [ttyS0] disabled
Jan 17 12:00:32.273356 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 17 12:00:32.273376 kernel: printk: console [ttyS0] enabled
Jan 17 12:00:32.273400 kernel: printk: bootconsole [uart0] disabled
Jan 17 12:00:32.273422 kernel: thunder_xcv, ver 1.0
Jan 17 12:00:32.273441 kernel: thunder_bgx, ver 1.0
Jan 17 12:00:32.273460 kernel: nicpf, ver 1.0
Jan 17 12:00:32.273491 kernel: nicvf, ver 1.0
Jan 17 12:00:32.273768 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 12:00:32.276163 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T12:00:31 UTC (1737115231)
Jan 17 12:00:32.276216 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 12:00:32.276236 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 17 12:00:32.276256 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 12:00:32.276276 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 12:00:32.276295 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:00:32.276329 kernel: Segment Routing with IPv6
Jan 17 12:00:32.276349 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:00:32.276368 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:00:32.276387 kernel: Key type dns_resolver registered
Jan 17 12:00:32.276406 kernel: registered taskstats version 1
Jan 17 12:00:32.276425 kernel: Loading compiled-in X.509 certificates
Jan 17 12:00:32.276446 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7'
Jan 17 12:00:32.276465 kernel: Key type .fscrypt registered
Jan 17 12:00:32.276483 kernel: Key type fscrypt-provisioning registered
Jan 17 12:00:32.276509 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:00:32.276530 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:00:32.276549 kernel: ima: No architecture policies found
Jan 17 12:00:32.276568 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 12:00:32.276588 kernel: clk: Disabling unused clocks
Jan 17 12:00:32.276607 kernel: Freeing unused kernel memory: 39360K
Jan 17 12:00:32.276625 kernel: Run /init as init process
Jan 17 12:00:32.276644 kernel: with arguments:
Jan 17 12:00:32.276663 kernel: /init
Jan 17 12:00:32.276682 kernel: with environment:
Jan 17 12:00:32.276706 kernel: HOME=/
Jan 17 12:00:32.276725 kernel: TERM=linux
Jan 17 12:00:32.276743 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:00:32.276767 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:00:32.276794 systemd[1]: Detected virtualization amazon.
Jan 17 12:00:32.276816 systemd[1]: Detected architecture arm64.
Jan 17 12:00:32.276872 systemd[1]: Running in initrd.
Jan 17 12:00:32.276905 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:00:32.276927 systemd[1]: Hostname set to .
Jan 17 12:00:32.276949 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:00:32.276970 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:00:32.276991 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:00:32.277012 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:00:32.277034 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:00:32.277056 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:00:32.277084 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:00:32.277106 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:00:32.277131 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:00:32.277153 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:00:32.277174 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:00:32.277195 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:00:32.277216 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:00:32.277245 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:00:32.277266 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:00:32.277286 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:00:32.277307 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:00:32.277327 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:00:32.277348 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:00:32.277368 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:00:32.277389 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:00:32.277410 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:00:32.277436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:00:32.277457 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:00:32.277477 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:00:32.277498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:00:32.277520 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:00:32.277541 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:00:32.277561 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:00:32.277582 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:00:32.277609 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:00:32.277630 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:00:32.277652 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:00:32.277672 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:00:32.277694 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:00:32.277784 systemd-journald[250]: Collecting audit messages is disabled.
Jan 17 12:00:32.286049 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:00:32.286098 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:00:32.286136 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:00:32.286159 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:00:32.286183 systemd-journald[250]: Journal started
Jan 17 12:00:32.286226 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2be875c84e99277a2e8f928b4bf013) is 8.0M, max 75.3M, 67.3M free.
Jan 17 12:00:32.227182 systemd-modules-load[251]: Inserted module 'overlay'
Jan 17 12:00:32.291483 systemd-modules-load[251]: Inserted module 'br_netfilter'
Jan 17 12:00:32.293969 kernel: Bridge firewalling registered
Jan 17 12:00:32.299921 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:00:32.305584 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:00:32.311790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:00:32.334238 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:00:32.349071 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:00:32.357403 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:00:32.367698 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:00:32.381234 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:00:32.394914 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:00:32.406932 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:00:32.422231 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:00:32.444448 dracut-cmdline[284]: dracut-dracut-053
Jan 17 12:00:32.461432 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 12:00:32.527672 systemd-resolved[289]: Positive Trust Anchors:
Jan 17 12:00:32.527718 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:00:32.527785 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:00:32.619868 kernel: SCSI subsystem initialized
Jan 17 12:00:32.626873 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:00:32.639867 kernel: iscsi: registered transport (tcp)
Jan 17 12:00:32.663308 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:00:32.663385 kernel: QLogic iSCSI HBA Driver
Jan 17 12:00:32.762867 kernel: random: crng init done
Jan 17 12:00:32.763206 systemd-resolved[289]: Defaulting to hostname 'linux'.
Jan 17 12:00:32.769955 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:00:32.779935 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:00:32.796712 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:00:32.811401 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:00:32.849186 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:00:32.849284 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:00:32.849314 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:00:32.921898 kernel: raid6: neonx8 gen() 6590 MB/s
Jan 17 12:00:32.938894 kernel: raid6: neonx4 gen() 6373 MB/s
Jan 17 12:00:32.955889 kernel: raid6: neonx2 gen() 5329 MB/s
Jan 17 12:00:32.972878 kernel: raid6: neonx1 gen() 3911 MB/s
Jan 17 12:00:32.989884 kernel: raid6: int64x8 gen() 3791 MB/s
Jan 17 12:00:33.006872 kernel: raid6: int64x4 gen() 3670 MB/s
Jan 17 12:00:33.023884 kernel: raid6: int64x2 gen() 3541 MB/s
Jan 17 12:00:33.041668 kernel: raid6: int64x1 gen() 2750 MB/s
Jan 17 12:00:33.041761 kernel: raid6: using algorithm neonx8 gen() 6590 MB/s
Jan 17 12:00:33.059663 kernel: raid6: .... xor() 4902 MB/s, rmw enabled
Jan 17 12:00:33.059745 kernel: raid6: using neon recovery algorithm
Jan 17 12:00:33.068284 kernel: xor: measuring software checksum speed
Jan 17 12:00:33.068352 kernel: 8regs : 11035 MB/sec
Jan 17 12:00:33.069414 kernel: 32regs : 11987 MB/sec
Jan 17 12:00:33.070603 kernel: arm64_neon : 9610 MB/sec
Jan 17 12:00:33.070636 kernel: xor: using function: 32regs (11987 MB/sec)
Jan 17 12:00:33.157889 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:00:33.180302 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:00:33.204245 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:00:33.243724 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jan 17 12:00:33.252256 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:00:33.269191 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:00:33.304799 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Jan 17 12:00:33.367666 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:00:33.379347 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:00:33.494984 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:00:33.523514 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:00:33.549957 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:00:33.573580 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:00:33.589450 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:00:33.603194 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:00:33.618148 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:00:33.670168 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:00:33.720474 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 17 12:00:33.720575 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 17 12:00:33.747597 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 12:00:33.747949 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 12:00:33.748216 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:f0:d0:c3:a4:cb
Jan 17 12:00:33.748248 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:00:33.748486 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:00:33.754417 (udev-worker)[530]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:00:33.769151 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:00:33.773990 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:00:33.784858 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:00:33.802234 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:00:33.811377 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 17 12:00:33.811417 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 12:00:33.815886 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 12:00:33.828500 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:00:33.828581 kernel: GPT:9289727 != 16777215
Jan 17 12:00:33.828610 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:00:33.828637 kernel: GPT:9289727 != 16777215
Jan 17 12:00:33.828663 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:00:33.828689 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:00:33.821440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:00:33.854297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:00:33.873336 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:00:33.910413 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:00:33.990465 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 12:00:34.004472 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (531)
Jan 17 12:00:34.017421 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (523)
Jan 17 12:00:34.128334 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 12:00:34.166770 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 12:00:34.184958 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 12:00:34.191265 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 12:00:34.210303 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:00:34.219908 disk-uuid[662]: Primary Header is updated.
Jan 17 12:00:34.219908 disk-uuid[662]: Secondary Entries is updated.
Jan 17 12:00:34.219908 disk-uuid[662]: Secondary Header is updated.
Jan 17 12:00:34.229875 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:00:34.242891 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:00:34.253952 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:00:35.249960 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:00:35.251911 disk-uuid[663]: The operation has completed successfully.
Jan 17 12:00:35.429642 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:00:35.429895 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:00:35.484077 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:00:35.491683 sh[1004]: Success
Jan 17 12:00:35.520868 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 17 12:00:35.631382 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:00:35.655166 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:00:35.665035 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:00:35.686427 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f
Jan 17 12:00:35.686512 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:00:35.686542 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:00:35.689495 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:00:35.689574 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:00:35.826867 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 12:00:35.847478 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:00:35.848342 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:00:35.863167 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:00:35.887919 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:00:35.888015 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:00:35.883672 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:00:35.894982 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:00:35.899868 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:00:35.919706 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:00:35.922572 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:00:35.945767 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:00:35.958220 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:00:36.071511 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:00:36.083143 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:00:36.142597 systemd-networkd[1196]: lo: Link UP
Jan 17 12:00:36.142620 systemd-networkd[1196]: lo: Gained carrier
Jan 17 12:00:36.148320 systemd-networkd[1196]: Enumeration completed
Jan 17 12:00:36.148493 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:00:36.151280 systemd[1]: Reached target network.target - Network.
Jan 17 12:00:36.160051 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:00:36.160070 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:00:36.172083 systemd-networkd[1196]: eth0: Link UP
Jan 17 12:00:36.172099 systemd-networkd[1196]: eth0: Gained carrier
Jan 17 12:00:36.172120 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:00:36.193997 systemd-networkd[1196]: eth0: DHCPv4 address 172.31.18.162/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 12:00:36.407268 ignition[1107]: Ignition 2.19.0
Jan 17 12:00:36.407289 ignition[1107]: Stage: fetch-offline
Jan 17 12:00:36.411689 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:00:36.407864 ignition[1107]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:00:36.407895 ignition[1107]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:00:36.408377 ignition[1107]: Ignition finished successfully
Jan 17 12:00:36.431297 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:00:36.458211 ignition[1206]: Ignition 2.19.0
Jan 17 12:00:36.458929 ignition[1206]: Stage: fetch
Jan 17 12:00:36.460015 ignition[1206]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:00:36.460041 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:00:36.460217 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:00:36.471108 ignition[1206]: PUT result: OK
Jan 17 12:00:36.474294 ignition[1206]: parsed url from cmdline: ""
Jan 17 12:00:36.474313 ignition[1206]: no config URL provided
Jan 17 12:00:36.474332 ignition[1206]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:00:36.474677 ignition[1206]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:00:36.474751 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:00:36.480274 ignition[1206]: PUT result: OK
Jan 17 12:00:36.480371 ignition[1206]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 12:00:36.491615 ignition[1206]: GET result: OK
Jan 17 12:00:36.491780 ignition[1206]: parsing config with SHA512: 732759d9d8c58546d4b8e75f5a0091776452c8caa9eec911a5a10781b6fb19a78f1a07cb0f79356fbb87fe98b81ba294a603e5d3ad065964f8e53d5626ffeb3d
Jan 17 12:00:36.506739 unknown[1206]: fetched base config from "system"
Jan 17 12:00:36.507258 unknown[1206]: fetched base config from "system"
Jan 17 12:00:36.509989 ignition[1206]: fetch: fetch complete
Jan 17 12:00:36.507275 unknown[1206]: fetched user config from "aws"
Jan 17 12:00:36.510087 ignition[1206]: fetch: fetch passed
Jan 17 12:00:36.510575 ignition[1206]: Ignition finished successfully
Jan 17 12:00:36.517762 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:00:36.533162 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:00:36.568648 ignition[1212]: Ignition 2.19.0
Jan 17 12:00:36.569265 ignition[1212]: Stage: kargs
Jan 17 12:00:36.569973 ignition[1212]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:00:36.569999 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:00:36.570163 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:00:36.576255 ignition[1212]: PUT result: OK
Jan 17 12:00:36.587314 ignition[1212]: kargs: kargs passed
Jan 17 12:00:36.587723 ignition[1212]: Ignition finished successfully
Jan 17 12:00:36.594914 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:00:36.607286 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:00:36.638920 ignition[1219]: Ignition 2.19.0
Jan 17 12:00:36.638942 ignition[1219]: Stage: disks
Jan 17 12:00:36.639548 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:00:36.639573 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:00:36.639726 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:00:36.644470 ignition[1219]: PUT result: OK
Jan 17 12:00:36.662743 ignition[1219]: disks: disks passed
Jan 17 12:00:36.662914 ignition[1219]: Ignition finished successfully
Jan 17 12:00:36.666022 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:00:36.669975 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:00:36.673126 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:00:36.678279 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:00:36.680882 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:00:36.683854 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:00:36.704323 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:00:36.756611 systemd-fsck[1229]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:00:36.764234 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:00:36.778197 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:00:36.871885 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none.
Jan 17 12:00:36.873811 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:00:36.880018 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:00:36.898249 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:00:36.909165 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:00:36.919170 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:00:36.919280 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:00:36.919338 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:00:36.938067 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:00:36.945115 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:00:36.958160 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1248)
Jan 17 12:00:36.962644 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:00:36.962723 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:00:36.964109 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:00:36.968857 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:00:36.971723 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:00:37.490865 initrd-setup-root[1272]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:00:37.500170 initrd-setup-root[1279]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:00:37.511033 initrd-setup-root[1286]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:00:37.534866 initrd-setup-root[1293]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:00:37.926309 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:00:37.936067 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:00:37.940618 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:00:37.981800 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:00:37.986741 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:00:37.998332 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:00:38.035657 ignition[1363]: INFO : Ignition 2.19.0 Jan 17 12:00:38.035657 ignition[1363]: INFO : Stage: mount Jan 17 12:00:38.040188 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:00:38.040188 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:00:38.040188 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:00:38.040188 ignition[1363]: INFO : PUT result: OK Jan 17 12:00:38.053349 ignition[1363]: INFO : mount: mount passed Jan 17 12:00:38.053349 ignition[1363]: INFO : Ignition finished successfully Jan 17 12:00:38.066529 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:00:38.081085 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:00:38.112274 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:00:38.137497 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1372) Jan 17 12:00:38.137563 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:00:38.139198 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:00:38.139265 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 17 12:00:38.144873 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 17 12:00:38.148810 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:00:38.177231 systemd-networkd[1196]: eth0: Gained IPv6LL Jan 17 12:00:38.191145 ignition[1389]: INFO : Ignition 2.19.0 Jan 17 12:00:38.191145 ignition[1389]: INFO : Stage: files Jan 17 12:00:38.197125 ignition[1389]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:00:38.197125 ignition[1389]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:00:38.197125 ignition[1389]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:00:38.197125 ignition[1389]: INFO : PUT result: OK Jan 17 12:00:38.213197 ignition[1389]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:00:38.217659 ignition[1389]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:00:38.217659 ignition[1389]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:00:38.271848 ignition[1389]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:00:38.276505 ignition[1389]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:00:38.276505 ignition[1389]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:00:38.276505 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:00:38.276505 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:00:38.276505 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:00:38.276505 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 17 12:00:38.272617 unknown[1389]: wrote ssh authorized keys file for user: core Jan 17 12:00:38.394141 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:00:38.898448 
ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:00:38.898448 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:00:38.907259 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 17 12:00:39.253163 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 17 12:00:39.406978 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 17 12:00:39.406978 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:00:39.421656 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:00:39.421656 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:00:39.421656 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:00:39.421656 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:00:39.421656 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:00:39.421656 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:00:39.421656 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:00:39.421656 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:00:39.421656 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:00:39.421656 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 17 12:00:39.421656 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 17 12:00:39.421656 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 17 12:00:39.421656 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jan 17 12:00:39.823364 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 17 12:00:40.166084 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 17 12:00:40.166084 ignition[1389]: INFO : files: op(d): [started] processing unit "containerd.service"
Jan 17 12:00:40.175200 ignition[1389]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:00:40.175200 ignition[1389]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:00:40.175200 ignition[1389]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 17 12:00:40.175200 ignition[1389]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 17 12:00:40.175200 ignition[1389]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:00:40.175200 ignition[1389]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:00:40.175200 ignition[1389]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 17 12:00:40.175200 ignition[1389]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:00:40.175200 ignition[1389]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:00:40.175200 ignition[1389]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:00:40.175200 ignition[1389]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:00:40.175200 ignition[1389]: INFO : files: files passed Jan 17 12:00:40.175200 ignition[1389]: INFO : Ignition finished successfully Jan 17 12:00:40.231090 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:00:40.253340 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:00:40.259344 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:00:40.276251 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:00:40.277977 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:00:40.301208 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:00:40.301208 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:00:40.310861 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:00:40.315897 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:00:40.322386 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:00:40.344222 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:00:40.405705 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:00:40.408146 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:00:40.415209 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:00:40.417440 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:00:40.419644 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
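The Ignition phase that just completed was driven by a declarative config delivered as EC2 userdata; the config itself is not reproduced in this log. For orientation, the following Butane sketch is a minimal, purely illustrative reconstruction, using only the URLs, paths, and unit names visible in the log, with every file and unit body left elided since the log never shows them. It would transpile to an Ignition config producing equivalent file, link, drop-in, and preset operations:

    # Illustrative Butane sketch -- NOT the instance's actual userdata.
    # Unit and drop-in bodies are not visible in the log and stay elided.
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/bin/cilium.tar.gz
          contents:
            source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true           # matches op(11): setting preset to enabled
          contents: "..."
        - name: containerd.service
          dropins:
            - name: 10-use-cgroupfs.conf
              contents: "..."

Every path in the log is prefixed with /sysroot because Ignition runs in the initramfs, writing into the real root before it is switched to.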
Jan 17 12:00:40.434216 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:00:40.464988 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:00:40.476276 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:00:40.502785 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:00:40.508044 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:00:40.511037 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:00:40.513188 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:00:40.513421 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:00:40.516738 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:00:40.519291 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:00:40.521564 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:00:40.524211 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:00:40.527160 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:00:40.530039 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:00:40.532752 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:00:40.535978 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:00:40.538624 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:00:40.541183 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:00:40.543309 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:00:40.543555 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:00:40.546707 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:00:40.549499 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:00:40.552472 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:00:40.598189 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:00:40.601287 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:00:40.601521 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:00:40.611755 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:00:40.612064 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:00:40.615153 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:00:40.615360 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:00:40.639024 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:00:40.657278 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:00:40.664289 ignition[1442]: INFO : Ignition 2.19.0 Jan 17 12:00:40.666145 ignition[1442]: INFO : Stage: umount Jan 17 12:00:40.667657 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:00:40.667657 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:00:40.667657 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:00:40.676198 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 17 12:00:40.676539 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:00:40.688897 ignition[1442]: INFO : PUT result: OK Jan 17 12:00:40.681559 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:00:40.681809 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:00:40.699436 ignition[1442]: INFO : umount: umount passed Jan 17 12:00:40.699436 ignition[1442]: INFO : Ignition finished successfully Jan 17 12:00:40.703536 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:00:40.711107 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:00:40.718997 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:00:40.719197 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:00:40.728213 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:00:40.729130 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:00:40.729235 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:00:40.732288 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:00:40.732396 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:00:40.733140 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:00:40.733227 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:00:40.733753 systemd[1]: Stopped target network.target - Network. Jan 17 12:00:40.734396 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:00:40.734477 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:00:40.734801 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:00:40.735429 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:00:40.750105 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:00:40.751203 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:00:40.754045 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:00:40.754668 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:00:40.754763 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:00:40.755369 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:00:40.755458 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:00:40.756366 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:00:40.756482 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:00:40.757415 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:00:40.757528 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:00:40.758737 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:00:40.759565 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:00:40.773943 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:00:40.774178 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:00:40.774587 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:00:40.774666 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 17 12:00:40.848267 systemd-networkd[1196]: eth0: DHCPv6 lease lost Jan 17 12:00:40.853408 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:00:40.853727 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:00:40.863305 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:00:40.863400 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:00:40.878142 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:00:40.881538 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:00:40.881663 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:00:40.891748 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:00:40.893557 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:00:40.893773 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:00:40.918422 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:00:40.920937 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:00:40.923729 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:00:40.923840 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:00:40.926704 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:00:40.926784 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:00:40.945355 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:00:40.946324 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:00:40.960527 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:00:40.960678 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:00:40.965175 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:00:40.965262 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:00:40.968344 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:00:40.968470 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:00:40.985659 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:00:40.985818 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:00:40.990053 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:00:40.990206 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:00:41.021284 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:00:41.030444 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:00:41.030593 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:00:41.033700 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:00:41.033814 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:00:41.037148 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:00:41.037260 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 17 12:00:41.040349 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:00:41.040460 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:00:41.044138 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:00:41.045559 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:00:41.086755 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:00:41.087252 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:00:41.094976 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:00:41.116316 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:00:41.159401 systemd[1]: Switching root. Jan 17 12:00:41.196798 systemd-journald[250]: Journal stopped Jan 17 12:00:44.488527 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Jan 17 12:00:44.488669 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:00:44.488727 kernel: SELinux: policy capability open_perms=1 Jan 17 12:00:44.488774 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:00:44.488810 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:00:44.493055 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:00:44.493122 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:00:44.493157 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:00:44.493189 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:00:44.493221 kernel: audit: type=1403 audit(1737115242.415:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:00:44.493267 systemd[1]: Successfully loaded SELinux policy in 51.984ms. Jan 17 12:00:44.493330 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.173ms. Jan 17 12:00:44.493378 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:00:44.493411 systemd[1]: Detected virtualization amazon. Jan 17 12:00:44.493446 systemd[1]: Detected architecture arm64. Jan 17 12:00:44.493477 systemd[1]: Detected first boot. Jan 17 12:00:44.493512 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:00:44.493547 zram_generator::config[1501]: No configuration found. Jan 17 12:00:44.493583 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:00:44.493622 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:00:44.493657 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 17 12:00:44.493695 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:00:44.493731 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:00:44.493768 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:00:44.493799 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:00:44.500971 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:00:44.501041 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jan 17 12:00:44.501077 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:00:44.501125 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:00:44.501161 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:00:44.501197 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:00:44.501229 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:00:44.501267 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:00:44.501303 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:00:44.501338 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:00:44.501373 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:00:44.501410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:00:44.501450 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:00:44.501483 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:00:44.501518 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:00:44.501552 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:00:44.501586 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:00:44.501620 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:00:44.501651 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:00:44.501685 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:00:44.501725 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:00:44.501760 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:00:44.501792 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:00:44.504579 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:00:44.504659 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:00:44.504692 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:00:44.504725 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:00:44.504756 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:00:44.504795 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:00:44.504860 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:00:44.504928 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:00:44.504960 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:00:44.505005 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:00:44.505038 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:00:44.505068 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:00:44.505100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 17 12:00:44.505131 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:00:44.505161 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:00:44.505197 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:00:44.505227 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:00:44.505258 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:00:44.505289 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 12:00:44.505324 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 12:00:44.505355 kernel: fuse: init (API version 7.39) Jan 17 12:00:44.505389 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:00:44.505420 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:00:44.505456 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:00:44.505491 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:00:44.505522 kernel: ACPI: bus type drm_connector registered Jan 17 12:00:44.505550 kernel: loop: module loaded Jan 17 12:00:44.505579 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:00:44.505611 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:00:44.505644 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:00:44.505677 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:00:44.505708 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:00:44.505746 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:00:44.505776 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:00:44.505808 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:00:44.511886 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:00:44.511960 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:00:44.512062 systemd-journald[1605]: Collecting audit messages is disabled. Jan 17 12:00:44.512136 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:00:44.512178 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:00:44.512210 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:00:44.512245 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:00:44.512276 systemd-journald[1605]: Journal started Jan 17 12:00:44.512332 systemd-journald[1605]: Runtime Journal (/run/log/journal/ec2be875c84e99277a2e8f928b4bf013) is 8.0M, max 75.3M, 67.3M free. Jan 17 12:00:44.514993 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:00:44.532255 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:00:44.540134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:00:44.540547 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:00:44.544711 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 17 12:00:44.545167 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:00:44.548169 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:00:44.548662 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:00:44.552007 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:00:44.555396 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:00:44.559061 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:00:44.597638 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:00:44.610102 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:00:44.624141 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:00:44.629687 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:00:44.642390 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:00:44.654305 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:00:44.662394 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:00:44.667205 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:00:44.669767 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:00:44.674259 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:00:44.690163 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:00:44.693741 systemd-journald[1605]: Time spent on flushing to /var/log/journal/ec2be875c84e99277a2e8f928b4bf013 is 45.510ms for 898 entries. Jan 17 12:00:44.693741 systemd-journald[1605]: System Journal (/var/log/journal/ec2be875c84e99277a2e8f928b4bf013) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:00:44.750999 systemd-journald[1605]: Received client request to flush runtime journal. Jan 17 12:00:44.704989 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:00:44.723409 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:00:44.726207 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:00:44.769355 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:00:44.776096 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:00:44.800544 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:00:44.820553 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:00:44.846449 udevadm[1660]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:00:44.886350 systemd-tmpfiles[1654]: ACLs are not supported, ignoring. Jan 17 12:00:44.886403 systemd-tmpfiles[1654]: ACLs are not supported, ignoring. 
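The two journal size lines above show journald's split storage model: a volatile Runtime Journal under /run/log/journal is used until the root filesystem is writable, then systemd-journal-flush.service migrates entries into the persistent System Journal under /var/log/journal. The 75.3M and 195.6M caps are journald's computed defaults, derived from the size of the backing filesystem; they can be pinned explicitly, as in this illustrative journald.conf override (option names are real, the values arbitrary):

    # /etc/systemd/journald.conf -- illustrative override, not this host's config
    [Journal]
    Storage=persistent   # always keep /var/log/journal
    RuntimeMaxUse=64M    # cap for /run/log/journal before the flush
    SystemMaxUse=200M    # cap for /var/log/journal after the flush

On a booted system, journalctl --disk-usage reports how much of the cap is currently consumed.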
Jan 17 12:00:44.902663 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:00:44.923611 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:00:44.927712 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:00:45.027392 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:00:45.041380 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:00:45.098996 systemd-tmpfiles[1675]: ACLs are not supported, ignoring. Jan 17 12:00:45.099661 systemd-tmpfiles[1675]: ACLs are not supported, ignoring. Jan 17 12:00:45.113600 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:00:45.821797 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:00:45.837176 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:00:45.906709 systemd-udevd[1681]: Using default interface naming scheme 'v255'. Jan 17 12:00:45.996137 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:00:46.010254 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:00:46.052305 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:00:46.164275 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 12:00:46.186671 (udev-worker)[1694]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:00:46.220543 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:00:46.427944 systemd-networkd[1685]: lo: Link UP Jan 17 12:00:46.427967 systemd-networkd[1685]: lo: Gained carrier Jan 17 12:00:46.432604 systemd-networkd[1685]: Enumeration completed Jan 17 12:00:46.432946 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:00:46.434783 systemd-networkd[1685]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:00:46.434888 systemd-networkd[1685]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:00:46.439606 systemd-networkd[1685]: eth0: Link UP Jan 17 12:00:46.441319 systemd-networkd[1685]: eth0: Gained carrier Jan 17 12:00:46.441365 systemd-networkd[1685]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:00:46.443448 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:00:46.453151 systemd-networkd[1685]: eth0: DHCPv4 address 172.31.18.162/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 12:00:46.512866 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1698) Jan 17 12:00:46.584228 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:00:46.757931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:00:46.794047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 12:00:46.798152 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:00:46.810443 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
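eth0 is configured by a catch-all unit, /usr/lib/systemd/network/zz-default.network, which is why networkd warns that the match is based on a "potentially unpredictable interface name" (net.ifnames=0 on the kernel command line keeps the classic eth0 naming). A catch-all DHCP unit of this shape looks like the following sketch (illustrative, not the verbatim contents of Flatcar's file):

    # zz-default.network -- illustrative sketch of a catch-all DHCP unit
    [Match]
    Name=*               # matches any interface not claimed by an earlier unit

    [Network]
    DHCP=yes             # yields the DHCPv4 172.31.18.162/20 lease seen above

Network units are applied in lexical order, so the zz- prefix makes this the fallback of last resort.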
Jan 17 12:00:46.843336 lvm[1810]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:00:46.884022 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:00:46.890638 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:00:46.902220 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:00:46.915266 lvm[1813]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:00:46.954804 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:00:46.959504 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:00:46.963194 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:00:46.963453 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:00:46.966135 systemd[1]: Reached target machines.target - Containers. Jan 17 12:00:46.971036 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:00:46.982253 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:00:46.995198 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:00:46.999248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:00:47.009398 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:00:47.019779 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:00:47.037407 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:00:47.045173 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:00:47.075049 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:00:47.090889 kernel: loop0: detected capacity change from 0 to 114328 Jan 17 12:00:47.119314 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:00:47.120877 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:00:47.199900 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:00:47.245914 kernel: loop1: detected capacity change from 0 to 114432 Jan 17 12:00:47.351888 kernel: loop2: detected capacity change from 0 to 52536 Jan 17 12:00:47.437187 kernel: loop3: detected capacity change from 0 to 194512 Jan 17 12:00:47.477883 kernel: loop4: detected capacity change from 0 to 114328 Jan 17 12:00:47.492902 kernel: loop5: detected capacity change from 0 to 114432 Jan 17 12:00:47.505884 kernel: loop6: detected capacity change from 0 to 52536 Jan 17 12:00:47.520106 kernel: loop7: detected capacity change from 0 to 194512 Jan 17 12:00:47.520496 systemd-networkd[1685]: eth0: Gained IPv6LL Jan 17 12:00:47.525687 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:00:47.549506 (sd-merge)[1835]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 17 12:00:47.550595 (sd-merge)[1835]: Merged extensions into '/usr'. 
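The (sd-merge) lines are systemd-sysext at work: each extension image (here containerd-flatcar, docker-flatcar, kubernetes, and oem-ami) carries a /usr subtree plus an extension-release file, and all of them are combined with the base /usr through an overlay mount. The kubernetes image is discovered via the /etc/extensions/kubernetes.raw symlink that Ignition wrote earlier. On a booted system the merge can be inspected and redone with systemd-sysext's own verbs:

    systemd-sysext status    # which hierarchies are merged, and from which images
    systemd-sysext refresh   # unmerge and re-merge after images are added/removed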
Jan 17 12:00:47.560392 systemd[1]: Reloading requested from client PID 1821 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:00:47.560436 systemd[1]: Reloading... Jan 17 12:00:47.688871 zram_generator::config[1864]: No configuration found. Jan 17 12:00:47.979755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:00:48.134280 systemd[1]: Reloading finished in 572 ms. Jan 17 12:00:48.163798 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:00:48.179371 systemd[1]: Starting ensure-sysext.service... Jan 17 12:00:48.188179 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:00:48.203529 systemd[1]: Reloading requested from client PID 1921 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:00:48.203567 systemd[1]: Reloading... Jan 17 12:00:48.253169 systemd-tmpfiles[1922]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:00:48.256587 systemd-tmpfiles[1922]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:00:48.259163 systemd-tmpfiles[1922]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:00:48.259783 systemd-tmpfiles[1922]: ACLs are not supported, ignoring. Jan 17 12:00:48.260011 systemd-tmpfiles[1922]: ACLs are not supported, ignoring. Jan 17 12:00:48.267759 systemd-tmpfiles[1922]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:00:48.267792 systemd-tmpfiles[1922]: Skipping /boot Jan 17 12:00:48.304504 systemd-tmpfiles[1922]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:00:48.304539 systemd-tmpfiles[1922]: Skipping /boot Jan 17 12:00:48.382924 zram_generator::config[1948]: No configuration found. Jan 17 12:00:48.432550 ldconfig[1817]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:00:48.689172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:00:48.835141 systemd[1]: Reloading finished in 630 ms. Jan 17 12:00:48.865160 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:00:48.876930 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:00:48.895263 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:00:48.911142 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:00:48.928567 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:00:48.948303 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:00:48.955282 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:00:48.995967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:00:49.007065 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
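The "Duplicate line for path ..." warnings above are benign: two tmpfiles.d fragments declare the same path, and systemd-tmpfiles keeps the first definition it parses while ignoring the rest. tmpfiles.d lines follow a fixed column layout; the following is a sketch of the kind of entry that would trigger the /root warning (illustrative, not the verbatim contents of provision.conf):

    # Type  Path    Mode  User  Group  Age  Argument
    d       /root   0700  root  root   -    -

"Skipping /boot" is likewise expected here: /boot is an autofs automount point on this system, and tmpfiles avoids touching it during canonicalization.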
Jan 17 12:00:49.027983 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:00:49.038371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:00:49.045483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:00:49.053925 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:00:49.061440 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:00:49.064468 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:00:49.092106 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:00:49.092587 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:00:49.108909 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:00:49.117712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:00:49.122232 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:00:49.147436 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:00:49.161603 augenrules[2047]: No rules Jan 17 12:00:49.166820 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:00:49.182556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:00:49.195625 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:00:49.206201 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:00:49.216034 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:00:49.229157 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:00:49.232329 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:00:49.235917 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:00:49.249713 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:00:49.253395 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:00:49.259439 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:00:49.260638 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:00:49.267857 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:00:49.268235 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:00:49.275726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:00:49.277182 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:00:49.281545 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:00:49.283268 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:00:49.299601 systemd[1]: Finished ensure-sysext.service. Jan 17 12:00:49.320424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 17 12:00:49.320773 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:00:49.334678 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:00:49.355595 systemd-resolved[2016]: Positive Trust Anchors: Jan 17 12:00:49.355632 systemd-resolved[2016]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:00:49.355697 systemd-resolved[2016]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:00:49.363867 systemd-resolved[2016]: Defaulting to hostname 'linux'. Jan 17 12:00:49.367237 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:00:49.370101 systemd[1]: Reached target network.target - Network. Jan 17 12:00:49.372309 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:00:49.374919 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:00:49.377747 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:00:49.380406 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:00:49.383425 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:00:49.386932 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:00:49.389623 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:00:49.392618 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:00:49.395891 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:00:49.395950 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:00:49.397960 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:00:49.401264 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:00:49.406920 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:00:49.411635 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:00:49.416953 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:00:49.419673 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:00:49.422083 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:00:49.424597 systemd[1]: System is tainted: cgroupsv1 Jan 17 12:00:49.424672 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:00:49.424716 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:00:49.428034 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:00:49.442239 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
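The positive trust anchor listed by systemd-resolved above is the DS record of the DNS root zone's KSK-2017 signing key; the negative anchors are the standard private and special-use domains for which DNSSEC validation is skipped. Additional anchors can be supplied via dnssec-trust-anchors.d(5), e.g. this illustrative file that simply restates the built-in root anchor:

    ; /etc/dnssec-trust-anchors.d/root.positive -- illustrative, duplicates the built-in anchor
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d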
Jan 17 12:00:49.452547 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:00:49.459047 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:00:49.472272 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:00:49.475330 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:00:49.490101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:00:49.495012 jq[2079]: false Jan 17 12:00:49.517853 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:00:49.543109 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 12:00:49.556149 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:00:49.568048 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:00:49.583074 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 12:00:49.586609 dbus-daemon[2078]: [system] SELinux support is enabled Jan 17 12:00:49.595330 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:00:49.608238 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:00:49.616172 dbus-daemon[2078]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1685 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 12:00:49.620009 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:00:49.626818 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:00:49.635519 extend-filesystems[2081]: Found loop4 Jan 17 12:00:49.643287 extend-filesystems[2081]: Found loop5 Jan 17 12:00:49.643287 extend-filesystems[2081]: Found loop6 Jan 17 12:00:49.643287 extend-filesystems[2081]: Found loop7 Jan 17 12:00:49.643287 extend-filesystems[2081]: Found nvme0n1 Jan 17 12:00:49.643287 extend-filesystems[2081]: Found nvme0n1p1 Jan 17 12:00:49.643287 extend-filesystems[2081]: Found nvme0n1p2 Jan 17 12:00:49.643287 extend-filesystems[2081]: Found nvme0n1p3 Jan 17 12:00:49.643287 extend-filesystems[2081]: Found usr Jan 17 12:00:49.643287 extend-filesystems[2081]: Found nvme0n1p4 Jan 17 12:00:49.664136 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:00:49.666066 extend-filesystems[2081]: Found nvme0n1p6 Jan 17 12:00:49.666066 extend-filesystems[2081]: Found nvme0n1p7 Jan 17 12:00:49.666066 extend-filesystems[2081]: Found nvme0n1p9 Jan 17 12:00:49.666066 extend-filesystems[2081]: Checking size of /dev/nvme0n1p9 Jan 17 12:00:49.721058 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:00:49.727486 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 17 12:00:49.738569 ntpd[2088]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:43 UTC 2025 (1): Starting Jan 17 12:00:49.743179 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:43 UTC 2025 (1): Starting Jan 17 12:00:49.743179 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:00:49.743179 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: ---------------------------------------------------- Jan 17 12:00:49.743179 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:00:49.743179 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:00:49.743179 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: corporation. Support and training for ntp-4 are Jan 17 12:00:49.743179 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: available at https://www.nwtime.org/support Jan 17 12:00:49.743179 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: ---------------------------------------------------- Jan 17 12:00:49.738636 ntpd[2088]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:00:49.738658 ntpd[2088]: ---------------------------------------------------- Jan 17 12:00:49.738678 ntpd[2088]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:00:49.738698 ntpd[2088]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:00:49.738717 ntpd[2088]: corporation. Support and training for ntp-4 are Jan 17 12:00:49.738735 ntpd[2088]: available at https://www.nwtime.org/support Jan 17 12:00:49.738754 ntpd[2088]: ---------------------------------------------------- Jan 17 12:00:49.757727 ntpd[2088]: proto: precision = 0.108 usec (-23) Jan 17 12:00:49.761520 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:00:49.763140 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: proto: precision = 0.108 usec (-23) Jan 17 12:00:49.763140 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: basedate set to 2025-01-05 Jan 17 12:00:49.763140 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: gps base set to 2025-01-05 (week 2348) Jan 17 12:00:49.758647 ntpd[2088]: basedate set to 2025-01-05 Jan 17 12:00:49.762112 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:00:49.758681 ntpd[2088]: gps base set to 2025-01-05 (week 2348) Jan 17 12:00:49.777268 ntpd[2088]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:00:49.790692 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 17 12:00:49.792012 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:00:49.792012 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:00:49.792012 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:00:49.792012 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: Listen normally on 3 eth0 172.31.18.162:123 Jan 17 12:00:49.792012 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: Listen normally on 4 lo [::1]:123 Jan 17 12:00:49.792012 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: Listen normally on 5 eth0 [fe80::4f0:d0ff:fec3:a4cb%2]:123 Jan 17 12:00:49.792012 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: Listening on routing socket on fd #22 for interface updates Jan 17 12:00:49.787997 ntpd[2088]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:00:49.788272 ntpd[2088]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:00:49.788334 ntpd[2088]: Listen normally on 3 eth0 172.31.18.162:123 Jan 17 12:00:49.788401 ntpd[2088]: Listen normally on 4 lo [::1]:123 Jan 17 12:00:49.788479 ntpd[2088]: Listen normally on 5 eth0 [fe80::4f0:d0ff:fec3:a4cb%2]:123 Jan 17 12:00:49.788542 ntpd[2088]: Listening on routing socket on fd #22 for interface updates Jan 17 12:00:49.806520 jq[2111]: true Jan 17 12:00:49.856295 update_engine[2099]: I20250117 12:00:49.839329 2099 main.cc:92] Flatcar Update Engine starting Jan 17 12:00:49.856295 update_engine[2099]: I20250117 12:00:49.847182 2099 update_check_scheduler.cc:74] Next update check in 8m13s Jan 17 12:00:49.815000 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:00:49.857260 extend-filesystems[2081]: Resized partition /dev/nvme0n1p9 Jan 17 12:00:49.884311 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:00:49.884311 ntpd[2088]: 17 Jan 12:00:49 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:00:49.856378 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:00:49.815053 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:00:49.885728 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 17 12:00:49.885994 extend-filesystems[2124]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:00:49.882190 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:00:49.882751 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:00:49.921051 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:00:49.970974 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:00:49.975497 dbus-daemon[2078]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 12:00:49.971079 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:00:49.975388 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:00:49.975440 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
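The logged ntpd command line, /usr/sbin/ntpd -g -n -u ntp:ntp, decodes as: -g permits the first clock correction to exceed the panic threshold, which matters on first boot when the clock may be far off (note the kernel's TIME_ERROR "Clock Unsynchronized" reports above); -n keeps ntpd in the foreground so systemd can supervise it; -u ntp:ntp drops privileges once the port-123 sockets are bound. In unit-file terms that corresponds to an exec line like this sketch (illustrative excerpt, not Flatcar's verbatim unit):

    # ntpd.service -- illustrative excerpt
    [Service]
    ExecStart=/usr/sbin/ntpd -g -n -u ntp:ntp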
Jan 17 12:00:49.984721 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 17 12:00:49.979415 (ntainerd)[2133]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:00:49.979968 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:00:50.017446 coreos-metadata[2077]: Jan 17 12:00:50.007 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:00:50.017446 coreos-metadata[2077]: Jan 17 12:00:50.009 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 12:00:50.029162 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 12:00:50.037158 jq[2131]: true Jan 17 12:00:50.037522 extend-filesystems[2124]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 12:00:50.037522 extend-filesystems[2124]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:00:50.037522 extend-filesystems[2124]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 17 12:00:50.050208 coreos-metadata[2077]: Jan 17 12:00:50.019 INFO Fetch successful Jan 17 12:00:50.050208 coreos-metadata[2077]: Jan 17 12:00:50.019 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 12:00:50.050208 coreos-metadata[2077]: Jan 17 12:00:50.019 INFO Fetch successful Jan 17 12:00:50.050208 coreos-metadata[2077]: Jan 17 12:00:50.019 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 12:00:50.050208 coreos-metadata[2077]: Jan 17 12:00:50.026 INFO Fetch successful Jan 17 12:00:50.050208 coreos-metadata[2077]: Jan 17 12:00:50.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 12:00:50.050208 coreos-metadata[2077]: Jan 17 12:00:50.034 INFO Fetch successful Jan 17 12:00:50.050208 coreos-metadata[2077]: Jan 17 12:00:50.034 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 12:00:50.054326 extend-filesystems[2081]: Resized filesystem in /dev/nvme0n1p9 Jan 17 12:00:50.039353 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:00:50.064067 coreos-metadata[2077]: Jan 17 12:00:50.063 INFO Fetch failed with 404: resource not found Jan 17 12:00:50.064067 coreos-metadata[2077]: Jan 17 12:00:50.063 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 12:00:50.064067 coreos-metadata[2077]: Jan 17 12:00:50.063 INFO Fetch successful Jan 17 12:00:50.064067 coreos-metadata[2077]: Jan 17 12:00:50.064 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 12:00:50.064067 coreos-metadata[2077]: Jan 17 12:00:50.064 INFO Fetch successful Jan 17 12:00:50.064429 coreos-metadata[2077]: Jan 17 12:00:50.064 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 17 12:00:50.077141 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
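The resize sequence above grows the root filesystem on line from 553472 to 1489915 4k blocks, i.e. from roughly 2.1 GiB to roughly 5.7 GiB, taking up the space the EBS volume provides beyond the stock image size. Flatcar's extend-filesystems service automates what would otherwise be done by hand, sketched here assuming the same device names and that a partition-growing tool such as growpart (from cloud-utils) is available; in this log the partition was already large enough, so only the filesystem step appears:

    growpart /dev/nvme0n1 9      # grow partition 9 to the end of the disk, if needed
    resize2fs /dev/nvme0n1p9     # grow the mounted ext4 on line, as resize2fs 1.47.1 does above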
Jan 17 12:00:50.081455 coreos-metadata[2077]: Jan 17 12:00:50.081 INFO Fetch successful Jan 17 12:00:50.081455 coreos-metadata[2077]: Jan 17 12:00:50.081 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 12:00:50.081455 coreos-metadata[2077]: Jan 17 12:00:50.081 INFO Fetch successful Jan 17 12:00:50.081455 coreos-metadata[2077]: Jan 17 12:00:50.081 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 12:00:50.081455 coreos-metadata[2077]: Jan 17 12:00:50.081 INFO Fetch successful Jan 17 12:00:50.083259 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:00:50.091757 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:00:50.128215 systemd-logind[2098]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 12:00:50.128250 systemd-logind[2098]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 17 12:00:50.130902 systemd-logind[2098]: New seat seat0. Jan 17 12:00:50.159153 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:00:50.315275 tar[2129]: linux-arm64/helm Jan 17 12:00:50.363638 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:00:50.368169 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:00:50.378865 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 12:00:50.389501 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 12:00:50.449107 bash[2200]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:00:50.451692 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:00:50.487514 systemd[1]: Starting sshkeys.service... Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: Initializing new seelog logger Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: New Seelog Logger Creation Complete Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: 2025/01/17 12:00:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: 2025/01/17 12:00:50 processing appconfig overrides Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: 2025/01/17 12:00:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: 2025/01/17 12:00:50 processing appconfig overrides Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: 2025/01/17 12:00:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: 2025/01/17 12:00:50 processing appconfig overrides Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO Proxy environment variables: Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: 2025/01/17 12:00:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 17 12:00:50.523909 amazon-ssm-agent[2205]: 2025/01/17 12:00:50 processing appconfig overrides Jan 17 12:00:50.525708 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:00:50.569551 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:00:50.616669 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2177) Jan 17 12:00:50.620450 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO https_proxy: Jan 17 12:00:50.720386 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO http_proxy: Jan 17 12:00:50.761634 locksmithd[2152]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:00:50.818887 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO no_proxy: Jan 17 12:00:50.921675 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO Checking if agent identity type OnPrem can be assumed Jan 17 12:00:50.937361 containerd[2133]: time="2025-01-17T12:00:50.936773668Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:00:50.970446 coreos-metadata[2215]: Jan 17 12:00:50.969 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:00:50.976495 coreos-metadata[2215]: Jan 17 12:00:50.973 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 12:00:50.976629 coreos-metadata[2215]: Jan 17 12:00:50.976 INFO Fetch successful Jan 17 12:00:50.976870 coreos-metadata[2215]: Jan 17 12:00:50.976 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 12:00:50.984180 coreos-metadata[2215]: Jan 17 12:00:50.983 INFO Fetch successful Jan 17 12:00:50.990006 unknown[2215]: wrote ssh authorized keys file for user: core Jan 17 12:00:51.025944 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO Checking if agent identity type EC2 can be assumed Jan 17 12:00:51.050863 update-ssh-keys[2276]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:00:51.058351 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:00:51.067894 systemd[1]: Finished sshkeys.service. Jan 17 12:00:51.084674 dbus-daemon[2078]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 12:00:51.084997 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 12:00:51.098392 dbus-daemon[2078]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2148 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 12:00:51.109358 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 12:00:51.126599 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO Agent will take identity from EC2 Jan 17 12:00:51.209646 polkitd[2289]: Started polkitd version 121 Jan 17 12:00:51.232857 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:00:51.239958 containerd[2133]: time="2025-01-17T12:00:51.237898838Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:51.251766 polkitd[2289]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 12:00:51.254991 containerd[2133]: time="2025-01-17T12:00:51.254919614Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:00:51.255163 containerd[2133]: time="2025-01-17T12:00:51.255132266Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:00:51.255285 containerd[2133]: time="2025-01-17T12:00:51.255252698Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:00:51.264348 containerd[2133]: time="2025-01-17T12:00:51.262377506Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:00:51.264348 containerd[2133]: time="2025-01-17T12:00:51.262459298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:51.264348 containerd[2133]: time="2025-01-17T12:00:51.262669214Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:00:51.264348 containerd[2133]: time="2025-01-17T12:00:51.262703210Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:51.264933 polkitd[2289]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 12:00:51.265133 containerd[2133]: time="2025-01-17T12:00:51.265063526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:00:51.267128 containerd[2133]: time="2025-01-17T12:00:51.266904542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:51.267128 containerd[2133]: time="2025-01-17T12:00:51.266999642Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:00:51.267128 containerd[2133]: time="2025-01-17T12:00:51.267057818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:51.278176 containerd[2133]: time="2025-01-17T12:00:51.270187082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:51.277914 polkitd[2289]: Finished loading, compiling and executing 2 rules Jan 17 12:00:51.278787 containerd[2133]: time="2025-01-17T12:00:51.278570114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:00:51.281181 containerd[2133]: time="2025-01-17T12:00:51.281102474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:00:51.281364 containerd[2133]: time="2025-01-17T12:00:51.281336210Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:00:51.287623 containerd[2133]: time="2025-01-17T12:00:51.284307806Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 17 12:00:51.291426 containerd[2133]: time="2025-01-17T12:00:51.290045738Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:00:51.292680 dbus-daemon[2078]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 12:00:51.293224 polkitd[2289]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 12:00:51.293515 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.322369922Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.322617578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.322694522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.322868906Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.322911326Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.323199698Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.323801798Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.324107462Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.324145490Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.324190838Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.324225338Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.324256010Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.324287606Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:00:51.324861 containerd[2133]: time="2025-01-17T12:00:51.324319238Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:00:51.325542 containerd[2133]: time="2025-01-17T12:00:51.324353114Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:00:51.325542 containerd[2133]: time="2025-01-17T12:00:51.324390722Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 17 12:00:51.325542 containerd[2133]: time="2025-01-17T12:00:51.324422486Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:00:51.325542 containerd[2133]: time="2025-01-17T12:00:51.324492698Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:00:51.325542 containerd[2133]: time="2025-01-17T12:00:51.324541298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.325542 containerd[2133]: time="2025-01-17T12:00:51.324577094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.325542 containerd[2133]: time="2025-01-17T12:00:51.324608078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.325542 containerd[2133]: time="2025-01-17T12:00:51.324638618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.325542 containerd[2133]: time="2025-01-17T12:00:51.324668066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.325542 containerd[2133]: time="2025-01-17T12:00:51.324700094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.325542 containerd[2133]: time="2025-01-17T12:00:51.324728606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.325542 containerd[2133]: time="2025-01-17T12:00:51.324762266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.325542 containerd[2133]: time="2025-01-17T12:00:51.324793514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.338272 containerd[2133]: time="2025-01-17T12:00:51.333722006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.338272 containerd[2133]: time="2025-01-17T12:00:51.333909098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.338272 containerd[2133]: time="2025-01-17T12:00:51.334033430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.338272 containerd[2133]: time="2025-01-17T12:00:51.334106930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.338272 containerd[2133]: time="2025-01-17T12:00:51.334151222Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:00:51.338272 containerd[2133]: time="2025-01-17T12:00:51.334225922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.338272 containerd[2133]: time="2025-01-17T12:00:51.334285406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.338272 containerd[2133]: time="2025-01-17T12:00:51.334319006Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jan 17 12:00:51.338272 containerd[2133]: time="2025-01-17T12:00:51.336993686Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:00:51.338272 containerd[2133]: time="2025-01-17T12:00:51.337085786Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:00:51.338272 containerd[2133]: time="2025-01-17T12:00:51.337139954Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:00:51.338272 containerd[2133]: time="2025-01-17T12:00:51.337175078Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:00:51.338272 containerd[2133]: time="2025-01-17T12:00:51.337223054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.339109 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:00:51.339172 containerd[2133]: time="2025-01-17T12:00:51.337258022Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:00:51.339172 containerd[2133]: time="2025-01-17T12:00:51.337533926Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:00:51.339172 containerd[2133]: time="2025-01-17T12:00:51.337620326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:00:51.350592 containerd[2133]: time="2025-01-17T12:00:51.344327810Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 
DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:00:51.350592 containerd[2133]: time="2025-01-17T12:00:51.345058466Z" level=info msg="Connect containerd service" Jan 17 12:00:51.350592 containerd[2133]: time="2025-01-17T12:00:51.345170942Z" level=info msg="using legacy CRI server" Jan 17 12:00:51.350592 containerd[2133]: time="2025-01-17T12:00:51.345193430Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:00:51.350592 containerd[2133]: time="2025-01-17T12:00:51.349736162Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:00:51.358466 containerd[2133]: time="2025-01-17T12:00:51.357320690Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:00:51.358466 containerd[2133]: time="2025-01-17T12:00:51.357535514Z" level=info msg="Start subscribing containerd event" Jan 17 12:00:51.358466 containerd[2133]: time="2025-01-17T12:00:51.357640502Z" level=info msg="Start recovering state" Jan 17 12:00:51.358466 containerd[2133]: time="2025-01-17T12:00:51.357807494Z" level=info msg="Start event monitor" Jan 17 12:00:51.358466 containerd[2133]: time="2025-01-17T12:00:51.357864278Z" level=info msg="Start snapshots syncer" Jan 17 12:00:51.358466 containerd[2133]: time="2025-01-17T12:00:51.357891458Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:00:51.358466 containerd[2133]: time="2025-01-17T12:00:51.357912926Z" level=info msg="Start streaming server" Jan 17 12:00:51.362101 containerd[2133]: time="2025-01-17T12:00:51.361334138Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:00:51.362876 containerd[2133]: time="2025-01-17T12:00:51.362501222Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:00:51.377416 containerd[2133]: time="2025-01-17T12:00:51.372105854Z" level=info msg="containerd successfully booted in 0.441392s" Jan 17 12:00:51.372289 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:00:51.383515 systemd-hostnamed[2148]: Hostname set to (transient) Jan 17 12:00:51.383698 systemd-resolved[2016]: System hostname changed to 'ip-172-31-18-162'. 
Jan 17 12:00:51.435011 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:00:51.537927 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 12:00:51.646612 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 17 12:00:51.745050 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 12:00:51.845472 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 17 12:00:51.945670 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO [Registrar] Starting registrar module Jan 17 12:00:52.051860 amazon-ssm-agent[2205]: 2025-01-17 12:00:50 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 12:00:52.210365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:00:52.226562 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:00:52.617948 tar[2129]: linux-arm64/LICENSE Jan 17 12:00:52.617948 tar[2129]: linux-arm64/README.md Jan 17 12:00:52.660246 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:00:52.813772 amazon-ssm-agent[2205]: 2025-01-17 12:00:52 INFO [EC2Identity] EC2 registration was successful. Jan 17 12:00:52.845423 amazon-ssm-agent[2205]: 2025-01-17 12:00:52 INFO [CredentialRefresher] credentialRefresher has started Jan 17 12:00:52.845423 amazon-ssm-agent[2205]: 2025-01-17 12:00:52 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 12:00:52.845423 amazon-ssm-agent[2205]: 2025-01-17 12:00:52 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 12:00:52.864712 sshd_keygen[2125]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:00:52.911176 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:00:52.914166 amazon-ssm-agent[2205]: 2025-01-17 12:00:52 INFO [CredentialRefresher] Next credential rotation will be in 31.2999817432 minutes Jan 17 12:00:52.930426 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:00:52.949210 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:00:52.949764 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:00:52.964496 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:00:52.989123 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:00:53.005617 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:00:53.017479 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:00:53.023526 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:00:53.026616 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:00:53.030294 systemd[1]: Startup finished in 11.837s (kernel) + 10.664s (userspace) = 22.501s. 
Jan 17 12:00:53.366061 kubelet[2344]: E0117 12:00:53.365876 2344 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:00:53.371333 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:00:53.373084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:00:53.870926 amazon-ssm-agent[2205]: 2025-01-17 12:00:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 12:00:53.971270 amazon-ssm-agent[2205]: 2025-01-17 12:00:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2384) started Jan 17 12:00:54.072597 amazon-ssm-agent[2205]: 2025-01-17 12:00:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 12:00:56.242693 systemd-resolved[2016]: Clock change detected. Flushing caches. Jan 17 12:00:56.741057 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:00:56.747077 systemd[1]: Started sshd@0-172.31.18.162:22-139.178.68.195:42640.service - OpenSSH per-connection server daemon (139.178.68.195:42640). Jan 17 12:00:56.957145 sshd[2395]: Accepted publickey for core from 139.178.68.195 port 42640 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:56.960948 sshd[2395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:56.976263 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:00:56.987994 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:00:56.992353 systemd-logind[2098]: New session 1 of user core. Jan 17 12:00:57.012314 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:00:57.023810 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:00:57.038961 (systemd)[2401]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:00:57.254162 systemd[2401]: Queued start job for default target default.target. Jan 17 12:00:57.255379 systemd[2401]: Created slice app.slice - User Application Slice. Jan 17 12:00:57.255631 systemd[2401]: Reached target paths.target - Paths. Jan 17 12:00:57.255767 systemd[2401]: Reached target timers.target - Timers. Jan 17 12:00:57.265709 systemd[2401]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:00:57.278006 systemd[2401]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:00:57.278123 systemd[2401]: Reached target sockets.target - Sockets. Jan 17 12:00:57.278156 systemd[2401]: Reached target basic.target - Basic System. Jan 17 12:00:57.278236 systemd[2401]: Reached target default.target - Main User Target. Jan 17 12:00:57.278296 systemd[2401]: Startup finished in 228ms. Jan 17 12:00:57.278461 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:00:57.289145 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:00:57.437049 systemd[1]: Started sshd@1-172.31.18.162:22-139.178.68.195:42648.service - OpenSSH per-connection server daemon (139.178.68.195:42648). 
Jan 17 12:00:57.618555 sshd[2413]: Accepted publickey for core from 139.178.68.195 port 42648 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:57.621133 sshd[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:57.629743 systemd-logind[2098]: New session 2 of user core. Jan 17 12:00:57.637152 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:00:57.763871 sshd[2413]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:57.769549 systemd[1]: sshd@1-172.31.18.162:22-139.178.68.195:42648.service: Deactivated successfully. Jan 17 12:00:57.776173 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:00:57.777558 systemd-logind[2098]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:00:57.779332 systemd-logind[2098]: Removed session 2. Jan 17 12:00:57.794066 systemd[1]: Started sshd@2-172.31.18.162:22-139.178.68.195:42650.service - OpenSSH per-connection server daemon (139.178.68.195:42650). Jan 17 12:00:57.972413 sshd[2421]: Accepted publickey for core from 139.178.68.195 port 42650 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:57.975965 sshd[2421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:57.985720 systemd-logind[2098]: New session 3 of user core. Jan 17 12:00:57.993254 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:00:58.114796 sshd[2421]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:58.121797 systemd-logind[2098]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:00:58.123027 systemd[1]: sshd@2-172.31.18.162:22-139.178.68.195:42650.service: Deactivated successfully. Jan 17 12:00:58.128412 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:00:58.130429 systemd-logind[2098]: Removed session 3. Jan 17 12:00:58.143030 systemd[1]: Started sshd@3-172.31.18.162:22-139.178.68.195:42664.service - OpenSSH per-connection server daemon (139.178.68.195:42664). Jan 17 12:00:58.316718 sshd[2429]: Accepted publickey for core from 139.178.68.195 port 42664 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:58.319449 sshd[2429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:58.326473 systemd-logind[2098]: New session 4 of user core. Jan 17 12:00:58.338133 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:00:58.467899 sshd[2429]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:58.473062 systemd[1]: sshd@3-172.31.18.162:22-139.178.68.195:42664.service: Deactivated successfully. Jan 17 12:00:58.480134 systemd-logind[2098]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:00:58.480180 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:00:58.483228 systemd-logind[2098]: Removed session 4. Jan 17 12:00:58.497100 systemd[1]: Started sshd@4-172.31.18.162:22-139.178.68.195:42666.service - OpenSSH per-connection server daemon (139.178.68.195:42666). Jan 17 12:00:58.675122 sshd[2437]: Accepted publickey for core from 139.178.68.195 port 42666 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:58.677569 sshd[2437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:58.685348 systemd-logind[2098]: New session 5 of user core. Jan 17 12:00:58.695154 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 17 12:00:58.830816 sudo[2441]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:00:58.831489 sudo[2441]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:00:58.851461 sudo[2441]: pam_unix(sudo:session): session closed for user root Jan 17 12:00:58.875027 sshd[2437]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:58.883425 systemd[1]: sshd@4-172.31.18.162:22-139.178.68.195:42666.service: Deactivated successfully. Jan 17 12:00:58.889172 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:00:58.889461 systemd-logind[2098]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:00:58.893234 systemd-logind[2098]: Removed session 5. Jan 17 12:00:58.910046 systemd[1]: Started sshd@5-172.31.18.162:22-139.178.68.195:42668.service - OpenSSH per-connection server daemon (139.178.68.195:42668). Jan 17 12:00:59.083152 sshd[2446]: Accepted publickey for core from 139.178.68.195 port 42668 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:59.086553 sshd[2446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:59.096291 systemd-logind[2098]: New session 6 of user core. Jan 17 12:00:59.102305 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:00:59.210502 sudo[2451]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:00:59.211259 sudo[2451]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:00:59.218142 sudo[2451]: pam_unix(sudo:session): session closed for user root Jan 17 12:00:59.229202 sudo[2450]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:00:59.230505 sudo[2450]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:00:59.257105 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:00:59.262606 auditctl[2454]: No rules Jan 17 12:00:59.263501 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:00:59.264105 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:00:59.286304 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:00:59.332455 augenrules[2473]: No rules Jan 17 12:00:59.336310 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:00:59.340214 sudo[2450]: pam_unix(sudo:session): session closed for user root Jan 17 12:00:59.366896 sshd[2446]: pam_unix(sshd:session): session closed for user core Jan 17 12:00:59.375006 systemd[1]: sshd@5-172.31.18.162:22-139.178.68.195:42668.service: Deactivated successfully. Jan 17 12:00:59.380737 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:00:59.381857 systemd-logind[2098]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:00:59.384156 systemd-logind[2098]: Removed session 6. Jan 17 12:00:59.403134 systemd[1]: Started sshd@6-172.31.18.162:22-139.178.68.195:42672.service - OpenSSH per-connection server daemon (139.178.68.195:42672). Jan 17 12:00:59.572653 sshd[2482]: Accepted publickey for core from 139.178.68.195 port 42672 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:00:59.576122 sshd[2482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:00:59.588043 systemd-logind[2098]: New session 7 of user core. 
Jan 17 12:00:59.597731 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:00:59.710198 sudo[2486]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:00:59.711007 sudo[2486]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:01:00.258056 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:01:00.275461 (dockerd)[2501]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:01:00.740337 dockerd[2501]: time="2025-01-17T12:01:00.739836534Z" level=info msg="Starting up" Jan 17 12:01:01.463619 dockerd[2501]: time="2025-01-17T12:01:01.463515809Z" level=info msg="Loading containers: start." Jan 17 12:01:01.650623 kernel: Initializing XFRM netlink socket Jan 17 12:01:01.705879 (udev-worker)[2523]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:01:01.786471 systemd-networkd[1685]: docker0: Link UP Jan 17 12:01:01.811167 dockerd[2501]: time="2025-01-17T12:01:01.811095751Z" level=info msg="Loading containers: done." Jan 17 12:01:01.833652 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3588675632-merged.mount: Deactivated successfully. Jan 17 12:01:01.846123 dockerd[2501]: time="2025-01-17T12:01:01.846062131Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:01:01.846357 dockerd[2501]: time="2025-01-17T12:01:01.846289783Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:01:01.846537 dockerd[2501]: time="2025-01-17T12:01:01.846499495Z" level=info msg="Daemon has completed initialization" Jan 17 12:01:01.900385 dockerd[2501]: time="2025-01-17T12:01:01.899526428Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:01:01.900172 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:01:03.122904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:01:03.134998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:03.229436 containerd[2133]: time="2025-01-17T12:01:03.229371846Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:01:03.735954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:03.751176 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:01:03.848973 kubelet[2659]: E0117 12:01:03.848842 2659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:01:03.859967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:01:03.860427 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:01:04.001237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070427269.mount: Deactivated successfully. 
Jan 17 12:01:05.845634 containerd[2133]: time="2025-01-17T12:01:05.845410739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:05.847628 containerd[2133]: time="2025-01-17T12:01:05.847555487Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=32202457" Jan 17 12:01:05.849010 containerd[2133]: time="2025-01-17T12:01:05.848924975Z" level=info msg="ImageCreate event name:\"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:05.854727 containerd[2133]: time="2025-01-17T12:01:05.854547611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:05.857761 containerd[2133]: time="2025-01-17T12:01:05.857030507Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"32199257\" in 2.627584753s" Jan 17 12:01:05.857761 containerd[2133]: time="2025-01-17T12:01:05.857100683Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:5c8d3b261565d9e15723d572fb33e6ec92ceb342312c9418457857eb57d1ae9a\"" Jan 17 12:01:05.896822 containerd[2133]: time="2025-01-17T12:01:05.896748707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:01:07.683778 containerd[2133]: time="2025-01-17T12:01:07.682782468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:07.685117 containerd[2133]: time="2025-01-17T12:01:07.685030524Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=29381102" Jan 17 12:01:07.686902 containerd[2133]: time="2025-01-17T12:01:07.686827020Z" level=info msg="ImageCreate event name:\"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:07.693487 containerd[2133]: time="2025-01-17T12:01:07.693374856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:07.695919 containerd[2133]: time="2025-01-17T12:01:07.695841564Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"30784892\" in 1.799022285s" Jan 17 12:01:07.696429 containerd[2133]: time="2025-01-17T12:01:07.696121488Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:bcc4e3c2095eb1aad9487d6679a8871f05390959aaf5091f391510033742cf7c\"" Jan 17 
12:01:07.735822 containerd[2133]: time="2025-01-17T12:01:07.735729205Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:01:08.856808 containerd[2133]: time="2025-01-17T12:01:08.856409414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:08.858691 containerd[2133]: time="2025-01-17T12:01:08.858627482Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=15765672" Jan 17 12:01:08.862605 containerd[2133]: time="2025-01-17T12:01:08.861256970Z" level=info msg="ImageCreate event name:\"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:08.868706 containerd[2133]: time="2025-01-17T12:01:08.868644266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:08.870528 containerd[2133]: time="2025-01-17T12:01:08.870461582Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"17169480\" in 1.134674333s" Jan 17 12:01:08.870692 containerd[2133]: time="2025-01-17T12:01:08.870523082Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:09e2786faf24867b706964cc8c35c296f197dc7a57806a75388efa13868bf50c\"" Jan 17 12:01:08.909184 containerd[2133]: time="2025-01-17T12:01:08.909126218Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:01:10.394158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968145234.mount: Deactivated successfully. 
Jan 17 12:01:10.943610 containerd[2133]: time="2025-01-17T12:01:10.943522865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:10.945066 containerd[2133]: time="2025-01-17T12:01:10.944985473Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=25274682" Jan 17 12:01:10.946776 containerd[2133]: time="2025-01-17T12:01:10.946689077Z" level=info msg="ImageCreate event name:\"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:10.951103 containerd[2133]: time="2025-01-17T12:01:10.951000269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:10.952549 containerd[2133]: time="2025-01-17T12:01:10.952344533Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"25273701\" in 2.043156923s" Jan 17 12:01:10.952549 containerd[2133]: time="2025-01-17T12:01:10.952403321Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:e3bc26919d7c787204f912c4bc2584bac5686761ae4da96585475c68dcc57181\"" Jan 17 12:01:10.992661 containerd[2133]: time="2025-01-17T12:01:10.992474501Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:01:11.557782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355595822.mount: Deactivated successfully. 
Jan 17 12:01:12.876737 containerd[2133]: time="2025-01-17T12:01:12.876647118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:12.891563 containerd[2133]: time="2025-01-17T12:01:12.891486138Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 17 12:01:12.913334 containerd[2133]: time="2025-01-17T12:01:12.913228878Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:12.945524 containerd[2133]: time="2025-01-17T12:01:12.945418974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:12.948206 containerd[2133]: time="2025-01-17T12:01:12.947980410Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.955446065s" Jan 17 12:01:12.948206 containerd[2133]: time="2025-01-17T12:01:12.948040806Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 17 12:01:12.986807 containerd[2133]: time="2025-01-17T12:01:12.986747239Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:01:13.995208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:01:14.004904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:14.014215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2332900270.mount: Deactivated successfully. 
Jan 17 12:01:14.023440 containerd[2133]: time="2025-01-17T12:01:14.023379676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:14.028753 containerd[2133]: time="2025-01-17T12:01:14.028512784Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 17 12:01:14.032091 containerd[2133]: time="2025-01-17T12:01:14.031990252Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:14.037907 containerd[2133]: time="2025-01-17T12:01:14.037800688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:14.041421 containerd[2133]: time="2025-01-17T12:01:14.040948012Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 1.054133285s" Jan 17 12:01:14.041421 containerd[2133]: time="2025-01-17T12:01:14.041008276Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 17 12:01:14.087537 containerd[2133]: time="2025-01-17T12:01:14.087485752Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:01:14.469874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:14.480174 (kubelet)[2825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:01:14.565917 kubelet[2825]: E0117 12:01:14.565825 2825 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:01:14.571180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:01:14.572822 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:01:14.786079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333847483.mount: Deactivated successfully. 
Jan 17 12:01:16.828541 containerd[2133]: time="2025-01-17T12:01:16.828019534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:16.830343 containerd[2133]: time="2025-01-17T12:01:16.830257510Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jan 17 12:01:16.831755 containerd[2133]: time="2025-01-17T12:01:16.831698134Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:16.837991 containerd[2133]: time="2025-01-17T12:01:16.837908818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:01:16.840510 containerd[2133]: time="2025-01-17T12:01:16.840458674Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.752912142s" Jan 17 12:01:16.840831 containerd[2133]: time="2025-01-17T12:01:16.840693022Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 17 12:01:20.923243 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 12:01:23.769250 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:23.786046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:23.834408 systemd[1]: Reloading requested from client PID 2950 ('systemctl') (unit session-7.scope)... Jan 17 12:01:23.834627 systemd[1]: Reloading... Jan 17 12:01:24.033638 zram_generator::config[2990]: No configuration found. Jan 17 12:01:24.308059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:01:24.475376 systemd[1]: Reloading finished in 640 ms. Jan 17 12:01:24.580691 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:01:24.581132 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:01:24.582092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:24.592229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:25.032903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:25.055315 (kubelet)[3065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:01:25.139120 kubelet[3065]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:01:25.139120 kubelet[3065]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 17 12:01:25.139120 kubelet[3065]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:01:25.139797 kubelet[3065]: I0117 12:01:25.139256 3065 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:01:26.814200 kubelet[3065]: I0117 12:01:26.814146 3065 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:01:26.814200 kubelet[3065]: I0117 12:01:26.814201 3065 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:01:26.815272 kubelet[3065]: I0117 12:01:26.814674 3065 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:01:26.855496 kubelet[3065]: E0117 12:01:26.855456 3065 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:26.860215 kubelet[3065]: I0117 12:01:26.860127 3065 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:01:27.000633 kubelet[3065]: I0117 12:01:27.000569 3065 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:01:27.001346 kubelet[3065]: I0117 12:01:27.001313 3065 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:01:27.001693 kubelet[3065]: I0117 12:01:27.001661 3065 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:01:27.001875 kubelet[3065]: I0117 12:01:27.001697 3065 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:01:27.001875 kubelet[3065]: I0117 12:01:27.001719 3065 container_manager_linux.go:301] "Creating device plugin manager" Jan 
17 12:01:27.003300 kubelet[3065]: I0117 12:01:27.003237 3065 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:01:27.007880 kubelet[3065]: I0117 12:01:27.007814 3065 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:01:27.007880 kubelet[3065]: I0117 12:01:27.007871 3065 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:01:27.008800 kubelet[3065]: I0117 12:01:27.007915 3065 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:01:27.008800 kubelet[3065]: I0117 12:01:27.007949 3065 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:01:27.012248 kubelet[3065]: I0117 12:01:27.011507 3065 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:01:27.012248 kubelet[3065]: I0117 12:01:27.012037 3065 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:01:27.013195 kubelet[3065]: W0117 12:01:27.013153 3065 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:01:27.014458 kubelet[3065]: I0117 12:01:27.014416 3065 server.go:1256] "Started kubelet" Jan 17 12:01:27.014871 kubelet[3065]: W0117 12:01:27.014815 3065 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.18.162:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:27.015027 kubelet[3065]: E0117 12:01:27.015005 3065 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.162:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:27.043462 kubelet[3065]: W0117 12:01:27.042810 3065 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.18.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-162&limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:27.043462 kubelet[3065]: E0117 12:01:27.042897 3065 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-162&limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:27.052470 kubelet[3065]: E0117 12:01:27.052381 3065 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.162:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.162:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-162.181b792660649030 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-162,UID:ip-172-31-18-162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-162,},FirstTimestamp:2025-01-17 12:01:27.014379568 +0000 UTC m=+1.951317166,LastTimestamp:2025-01-17 12:01:27.014379568 +0000 UTC m=+1.951317166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-162,}" Jan 17 12:01:27.054022 kubelet[3065]: I0117 
12:01:27.052784 3065 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:01:27.055538 kubelet[3065]: I0117 12:01:27.055488 3065 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:01:27.056628 kubelet[3065]: I0117 12:01:27.056338 3065 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:01:27.057563 kubelet[3065]: I0117 12:01:27.053700 3065 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:01:27.058636 kubelet[3065]: I0117 12:01:27.058046 3065 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:01:27.065542 kubelet[3065]: E0117 12:01:27.065395 3065 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-18-162\" not found" Jan 17 12:01:27.065542 kubelet[3065]: I0117 12:01:27.065463 3065 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:01:27.067689 kubelet[3065]: I0117 12:01:27.067083 3065 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:01:27.067689 kubelet[3065]: I0117 12:01:27.067244 3065 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:01:27.067897 kubelet[3065]: W0117 12:01:27.067825 3065 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.18.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:27.067970 kubelet[3065]: E0117 12:01:27.067907 3065 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:27.069446 kubelet[3065]: E0117 12:01:27.069220 3065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-162?timeout=10s\": dial tcp 172.31.18.162:6443: connect: connection refused" interval="200ms" Jan 17 12:01:27.070474 kubelet[3065]: I0117 12:01:27.070433 3065 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:01:27.070647 kubelet[3065]: I0117 12:01:27.070602 3065 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:01:27.072772 kubelet[3065]: E0117 12:01:27.072172 3065 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:01:27.073357 kubelet[3065]: I0117 12:01:27.073309 3065 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:01:27.109538 kubelet[3065]: I0117 12:01:27.109473 3065 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:01:27.115097 kubelet[3065]: I0117 12:01:27.115040 3065 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:01:27.115303 kubelet[3065]: I0117 12:01:27.115280 3065 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:01:27.115489 kubelet[3065]: I0117 12:01:27.115467 3065 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:01:27.115758 kubelet[3065]: E0117 12:01:27.115734 3065 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:01:27.117395 kubelet[3065]: W0117 12:01:27.117261 3065 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.18.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:27.117395 kubelet[3065]: E0117 12:01:27.117365 3065 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:27.118975 kubelet[3065]: I0117 12:01:27.118939 3065 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:01:27.119513 kubelet[3065]: I0117 12:01:27.119366 3065 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:01:27.120148 kubelet[3065]: I0117 12:01:27.119747 3065 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:01:27.124111 kubelet[3065]: I0117 12:01:27.124052 3065 policy_none.go:49] "None policy: Start" Jan 17 12:01:27.126415 kubelet[3065]: I0117 12:01:27.126378 3065 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:01:27.127051 kubelet[3065]: I0117 12:01:27.127023 3065 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:01:27.137615 kubelet[3065]: I0117 12:01:27.136619 3065 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:01:27.137615 kubelet[3065]: I0117 12:01:27.137051 3065 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:01:27.143220 kubelet[3065]: E0117 12:01:27.143186 3065 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-162\" not found" Jan 17 12:01:27.170361 kubelet[3065]: I0117 12:01:27.170316 3065 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-162" Jan 17 12:01:27.170882 kubelet[3065]: E0117 12:01:27.170849 3065 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.162:6443/api/v1/nodes\": dial tcp 172.31.18.162:6443: connect: connection refused" node="ip-172-31-18-162" Jan 17 12:01:27.216113 kubelet[3065]: I0117 12:01:27.215985 3065 topology_manager.go:215] "Topology Admit Handler" podUID="bd193353477d0c42fb849a50e0ffb957" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-162" Jan 17 12:01:27.218219 kubelet[3065]: I0117 12:01:27.218157 3065 topology_manager.go:215] "Topology Admit Handler" podUID="b17140aeb15d79c8398208be21226f46" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-162" Jan 17 12:01:27.220711 kubelet[3065]: I0117 12:01:27.220362 3065 topology_manager.go:215] "Topology Admit Handler" podUID="704b321a4e58557f8dd1c7db04ec054c" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-162" Jan 17 12:01:27.268567 kubelet[3065]: I0117 
12:01:27.268459 3065 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd193353477d0c42fb849a50e0ffb957-ca-certs\") pod \"kube-apiserver-ip-172-31-18-162\" (UID: \"bd193353477d0c42fb849a50e0ffb957\") " pod="kube-system/kube-apiserver-ip-172-31-18-162" Jan 17 12:01:27.268567 kubelet[3065]: I0117 12:01:27.268596 3065 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd193353477d0c42fb849a50e0ffb957-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-162\" (UID: \"bd193353477d0c42fb849a50e0ffb957\") " pod="kube-system/kube-apiserver-ip-172-31-18-162" Jan 17 12:01:27.269055 kubelet[3065]: I0117 12:01:27.268667 3065 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd193353477d0c42fb849a50e0ffb957-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-162\" (UID: \"bd193353477d0c42fb849a50e0ffb957\") " pod="kube-system/kube-apiserver-ip-172-31-18-162" Jan 17 12:01:27.269055 kubelet[3065]: I0117 12:01:27.268720 3065 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b17140aeb15d79c8398208be21226f46-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-162\" (UID: \"b17140aeb15d79c8398208be21226f46\") " pod="kube-system/kube-controller-manager-ip-172-31-18-162" Jan 17 12:01:27.269055 kubelet[3065]: I0117 12:01:27.268776 3065 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/704b321a4e58557f8dd1c7db04ec054c-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-162\" (UID: \"704b321a4e58557f8dd1c7db04ec054c\") " pod="kube-system/kube-scheduler-ip-172-31-18-162" Jan 17 12:01:27.269055 kubelet[3065]: I0117 12:01:27.268821 3065 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b17140aeb15d79c8398208be21226f46-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-162\" (UID: \"b17140aeb15d79c8398208be21226f46\") " pod="kube-system/kube-controller-manager-ip-172-31-18-162" Jan 17 12:01:27.269055 kubelet[3065]: I0117 12:01:27.268880 3065 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b17140aeb15d79c8398208be21226f46-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-162\" (UID: \"b17140aeb15d79c8398208be21226f46\") " pod="kube-system/kube-controller-manager-ip-172-31-18-162" Jan 17 12:01:27.269309 kubelet[3065]: I0117 12:01:27.268948 3065 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b17140aeb15d79c8398208be21226f46-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-162\" (UID: \"b17140aeb15d79c8398208be21226f46\") " pod="kube-system/kube-controller-manager-ip-172-31-18-162" Jan 17 12:01:27.269309 kubelet[3065]: I0117 12:01:27.269012 3065 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b17140aeb15d79c8398208be21226f46-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-18-162\" (UID: \"b17140aeb15d79c8398208be21226f46\") " pod="kube-system/kube-controller-manager-ip-172-31-18-162" Jan 17 12:01:27.269950 kubelet[3065]: E0117 12:01:27.269890 3065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-162?timeout=10s\": dial tcp 172.31.18.162:6443: connect: connection refused" interval="400ms" Jan 17 12:01:27.374738 kubelet[3065]: I0117 12:01:27.374341 3065 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-162" Jan 17 12:01:27.375881 kubelet[3065]: E0117 12:01:27.375831 3065 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.162:6443/api/v1/nodes\": dial tcp 172.31.18.162:6443: connect: connection refused" node="ip-172-31-18-162" Jan 17 12:01:27.532304 containerd[2133]: time="2025-01-17T12:01:27.531903763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-162,Uid:b17140aeb15d79c8398208be21226f46,Namespace:kube-system,Attempt:0,}" Jan 17 12:01:27.532984 containerd[2133]: time="2025-01-17T12:01:27.532362355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-162,Uid:bd193353477d0c42fb849a50e0ffb957,Namespace:kube-system,Attempt:0,}" Jan 17 12:01:27.538837 containerd[2133]: time="2025-01-17T12:01:27.538746799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-162,Uid:704b321a4e58557f8dd1c7db04ec054c,Namespace:kube-system,Attempt:0,}" Jan 17 12:01:27.671136 kubelet[3065]: E0117 12:01:27.670994 3065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-162?timeout=10s\": dial tcp 172.31.18.162:6443: connect: connection refused" interval="800ms" Jan 17 12:01:27.779096 kubelet[3065]: I0117 12:01:27.779036 3065 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-162" Jan 17 12:01:27.779632 kubelet[3065]: E0117 12:01:27.779564 3065 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.162:6443/api/v1/nodes\": dial tcp 172.31.18.162:6443: connect: connection refused" node="ip-172-31-18-162" Jan 17 12:01:27.943221 kubelet[3065]: W0117 12:01:27.943026 3065 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.18.162:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:27.943221 kubelet[3065]: E0117 12:01:27.943129 3065 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.162:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:27.980364 kubelet[3065]: W0117 12:01:27.980286 3065 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.18.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-162&limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:27.980364 kubelet[3065]: E0117 12:01:27.980362 3065 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://172.31.18.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-162&limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:28.038016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1142004341.mount: Deactivated successfully. Jan 17 12:01:28.047822 containerd[2133]: time="2025-01-17T12:01:28.047725673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:01:28.053231 containerd[2133]: time="2025-01-17T12:01:28.053141094Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 17 12:01:28.054648 containerd[2133]: time="2025-01-17T12:01:28.054497958Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:01:28.056446 containerd[2133]: time="2025-01-17T12:01:28.056373462Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:01:28.060020 containerd[2133]: time="2025-01-17T12:01:28.059857146Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:01:28.060020 containerd[2133]: time="2025-01-17T12:01:28.060005622Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:01:28.061215 containerd[2133]: time="2025-01-17T12:01:28.061110210Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:01:28.068203 containerd[2133]: time="2025-01-17T12:01:28.068027778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:01:28.071802 containerd[2133]: time="2025-01-17T12:01:28.071375094Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 532.513263ms" Jan 17 12:01:28.075099 containerd[2133]: time="2025-01-17T12:01:28.075032334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.998599ms" Jan 17 12:01:28.076483 containerd[2133]: time="2025-01-17T12:01:28.076405434Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.953919ms" Jan 17 12:01:28.310610 
containerd[2133]: time="2025-01-17T12:01:28.309532687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:01:28.310843 containerd[2133]: time="2025-01-17T12:01:28.310560811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:01:28.310843 containerd[2133]: time="2025-01-17T12:01:28.310641571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:28.311322 containerd[2133]: time="2025-01-17T12:01:28.310969003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:28.321197 containerd[2133]: time="2025-01-17T12:01:28.320559799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:01:28.321197 containerd[2133]: time="2025-01-17T12:01:28.320718799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:01:28.321197 containerd[2133]: time="2025-01-17T12:01:28.320771395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:28.321197 containerd[2133]: time="2025-01-17T12:01:28.320943547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:28.326630 containerd[2133]: time="2025-01-17T12:01:28.326133703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:01:28.329659 containerd[2133]: time="2025-01-17T12:01:28.326834491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:01:28.329659 containerd[2133]: time="2025-01-17T12:01:28.326870647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:28.329659 containerd[2133]: time="2025-01-17T12:01:28.327065143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:28.361505 kubelet[3065]: W0117 12:01:28.361229 3065 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.18.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:28.367558 kubelet[3065]: E0117 12:01:28.367366 3065 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:28.436850 kubelet[3065]: W0117 12:01:28.436335 3065 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.18.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:28.436850 kubelet[3065]: E0117 12:01:28.436438 3065 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.162:6443: connect: connection refused Jan 17 12:01:28.449532 containerd[2133]: time="2025-01-17T12:01:28.449342455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-162,Uid:bd193353477d0c42fb849a50e0ffb957,Namespace:kube-system,Attempt:0,} returns sandbox id \"393e9d3bef36bb72f8d10bda69f06b37f9ceac248e2be7c3c96b380650997665\"" Jan 17 12:01:28.463012 containerd[2133]: time="2025-01-17T12:01:28.462942236Z" level=info msg="CreateContainer within sandbox \"393e9d3bef36bb72f8d10bda69f06b37f9ceac248e2be7c3c96b380650997665\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:01:28.473949 kubelet[3065]: E0117 12:01:28.473569 3065 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-162?timeout=10s\": dial tcp 172.31.18.162:6443: connect: connection refused" interval="1.6s" Jan 17 12:01:28.500828 containerd[2133]: time="2025-01-17T12:01:28.500340704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-162,Uid:704b321a4e58557f8dd1c7db04ec054c,Namespace:kube-system,Attempt:0,} returns sandbox id \"b89cb0b08426d914cde5b26defc548d47050e3620d958e5fcc310e6bbb87dd24\"" Jan 17 12:01:28.506563 containerd[2133]: time="2025-01-17T12:01:28.506335112Z" level=info msg="CreateContainer within sandbox \"393e9d3bef36bb72f8d10bda69f06b37f9ceac248e2be7c3c96b380650997665\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"78436601d6f8b6b96afed518cca001c0c151afb78a2a420432f4ec6f2b049f5d\"" Jan 17 12:01:28.507953 containerd[2133]: time="2025-01-17T12:01:28.507904628Z" level=info msg="StartContainer for \"78436601d6f8b6b96afed518cca001c0c151afb78a2a420432f4ec6f2b049f5d\"" Jan 17 12:01:28.509720 containerd[2133]: time="2025-01-17T12:01:28.509079512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-162,Uid:b17140aeb15d79c8398208be21226f46,Namespace:kube-system,Attempt:0,} returns sandbox id \"118b6b453ae826f0a08ba25ec8ecb15b8250ddb45d059ae16756dc77997381fc\"" Jan 17 12:01:28.510971 containerd[2133]: 
time="2025-01-17T12:01:28.510905708Z" level=info msg="CreateContainer within sandbox \"b89cb0b08426d914cde5b26defc548d47050e3620d958e5fcc310e6bbb87dd24\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:01:28.541296 containerd[2133]: time="2025-01-17T12:01:28.541225964Z" level=info msg="CreateContainer within sandbox \"118b6b453ae826f0a08ba25ec8ecb15b8250ddb45d059ae16756dc77997381fc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:01:28.563980 containerd[2133]: time="2025-01-17T12:01:28.563760824Z" level=info msg="CreateContainer within sandbox \"b89cb0b08426d914cde5b26defc548d47050e3620d958e5fcc310e6bbb87dd24\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a0384011aa11269e80410d7d705e67b52179bbba470ba435b4ee588fa6645941\"" Jan 17 12:01:28.568297 containerd[2133]: time="2025-01-17T12:01:28.568235588Z" level=info msg="StartContainer for \"a0384011aa11269e80410d7d705e67b52179bbba470ba435b4ee588fa6645941\"" Jan 17 12:01:28.589027 kubelet[3065]: I0117 12:01:28.588976 3065 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-162" Jan 17 12:01:28.589744 kubelet[3065]: E0117 12:01:28.589482 3065 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.162:6443/api/v1/nodes\": dial tcp 172.31.18.162:6443: connect: connection refused" node="ip-172-31-18-162" Jan 17 12:01:28.610886 containerd[2133]: time="2025-01-17T12:01:28.610801916Z" level=info msg="CreateContainer within sandbox \"118b6b453ae826f0a08ba25ec8ecb15b8250ddb45d059ae16756dc77997381fc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7117c4ca9388ffd702766b3cd662971e7ac0ae92168556ccb78c8662afaa6ee3\"" Jan 17 12:01:28.613530 containerd[2133]: time="2025-01-17T12:01:28.613350668Z" level=info msg="StartContainer for \"7117c4ca9388ffd702766b3cd662971e7ac0ae92168556ccb78c8662afaa6ee3\"" Jan 17 12:01:28.683723 containerd[2133]: time="2025-01-17T12:01:28.682795929Z" level=info msg="StartContainer for \"78436601d6f8b6b96afed518cca001c0c151afb78a2a420432f4ec6f2b049f5d\" returns successfully" Jan 17 12:01:28.816387 containerd[2133]: time="2025-01-17T12:01:28.815384049Z" level=info msg="StartContainer for \"a0384011aa11269e80410d7d705e67b52179bbba470ba435b4ee588fa6645941\" returns successfully" Jan 17 12:01:28.856605 containerd[2133]: time="2025-01-17T12:01:28.856525365Z" level=info msg="StartContainer for \"7117c4ca9388ffd702766b3cd662971e7ac0ae92168556ccb78c8662afaa6ee3\" returns successfully" Jan 17 12:01:30.196769 kubelet[3065]: I0117 12:01:30.196285 3065 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-162" Jan 17 12:01:32.468307 kubelet[3065]: E0117 12:01:32.468232 3065 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-162\" not found" node="ip-172-31-18-162" Jan 17 12:01:32.511734 kubelet[3065]: I0117 12:01:32.511658 3065 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-162" Jan 17 12:01:33.015136 kubelet[3065]: I0117 12:01:33.014983 3065 apiserver.go:52] "Watching apiserver" Jan 17 12:01:33.067424 kubelet[3065]: I0117 12:01:33.067320 3065 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:01:35.038633 update_engine[2099]: I20250117 12:01:35.038324 2099 update_attempter.cc:509] Updating boot flags... 
Jan 17 12:01:35.131638 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3352) Jan 17 12:01:35.381655 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3354) Jan 17 12:01:35.730295 systemd[1]: Reloading requested from client PID 3521 ('systemctl') (unit session-7.scope)... Jan 17 12:01:35.730331 systemd[1]: Reloading... Jan 17 12:01:35.972626 zram_generator::config[3570]: No configuration found. Jan 17 12:01:36.261838 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:01:36.445207 systemd[1]: Reloading finished in 714 ms. Jan 17 12:01:36.512940 kubelet[3065]: I0117 12:01:36.512406 3065 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:01:36.512563 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:36.530810 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:01:36.532611 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:36.541342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:36.969922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:36.986683 (kubelet)[3631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:01:37.095611 kubelet[3631]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:01:37.095611 kubelet[3631]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:01:37.095611 kubelet[3631]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:01:37.095611 kubelet[3631]: I0117 12:01:37.094637 3631 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:01:37.108871 kubelet[3631]: I0117 12:01:37.108826 3631 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:01:37.109053 kubelet[3631]: I0117 12:01:37.109033 3631 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:01:37.109549 kubelet[3631]: I0117 12:01:37.109521 3631 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:01:37.113037 kubelet[3631]: I0117 12:01:37.112993 3631 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:01:37.117168 kubelet[3631]: I0117 12:01:37.117114 3631 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:01:37.127328 kubelet[3631]: I0117 12:01:37.127281 3631 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:01:37.128704 kubelet[3631]: I0117 12:01:37.128669 3631 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:01:37.130679 kubelet[3631]: I0117 12:01:37.129112 3631 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:01:37.130679 kubelet[3631]: I0117 12:01:37.129161 3631 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:01:37.130679 kubelet[3631]: I0117 12:01:37.129182 3631 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:01:37.130679 kubelet[3631]: I0117 12:01:37.129239 3631 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:01:37.130679 kubelet[3631]: I0117 12:01:37.129420 3631 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:01:37.130679 kubelet[3631]: I0117 12:01:37.129445 3631 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:01:37.130679 kubelet[3631]: I0117 12:01:37.129482 3631 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:01:37.129291 sudo[3644]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 17 12:01:37.131749 kubelet[3631]: I0117 12:01:37.129504 3631 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:01:37.130073 sudo[3644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 17 12:01:37.137621 kubelet[3631]: I0117 12:01:37.136316 3631 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:01:37.137621 kubelet[3631]: I0117 12:01:37.136967 3631 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:01:37.141011 kubelet[3631]: I0117 12:01:37.140960 3631 server.go:1256] "Started kubelet" Jan 17 12:01:37.148430 kubelet[3631]: I0117 12:01:37.148368 3631 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:01:37.166740 kubelet[3631]: I0117 12:01:37.164919 3631 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 
12:01:37.166889 kubelet[3631]: I0117 12:01:37.166841 3631 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:01:37.171505 kubelet[3631]: I0117 12:01:37.170924 3631 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:01:37.171505 kubelet[3631]: I0117 12:01:37.171317 3631 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:01:37.177655 kubelet[3631]: I0117 12:01:37.177604 3631 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:01:37.187892 kubelet[3631]: I0117 12:01:37.187428 3631 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:01:37.189681 kubelet[3631]: I0117 12:01:37.189647 3631 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:01:37.223665 kubelet[3631]: I0117 12:01:37.221317 3631 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:01:37.223665 kubelet[3631]: I0117 12:01:37.222713 3631 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:01:37.231479 kubelet[3631]: I0117 12:01:37.229646 3631 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:01:37.233175 kubelet[3631]: I0117 12:01:37.232396 3631 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:01:37.233175 kubelet[3631]: I0117 12:01:37.232442 3631 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:01:37.233175 kubelet[3631]: I0117 12:01:37.232471 3631 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:01:37.233175 kubelet[3631]: E0117 12:01:37.232554 3631 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:01:37.243236 kubelet[3631]: I0117 12:01:37.243202 3631 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:01:37.248444 kubelet[3631]: E0117 12:01:37.248158 3631 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:01:37.294017 kubelet[3631]: I0117 12:01:37.293926 3631 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-162" Jan 17 12:01:37.314968 kubelet[3631]: I0117 12:01:37.314638 3631 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-18-162" Jan 17 12:01:37.314968 kubelet[3631]: I0117 12:01:37.314754 3631 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-162" Jan 17 12:01:37.338494 kubelet[3631]: E0117 12:01:37.338355 3631 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:01:37.443105 kubelet[3631]: I0117 12:01:37.442895 3631 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:01:37.443105 kubelet[3631]: I0117 12:01:37.442968 3631 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:01:37.443804 kubelet[3631]: I0117 12:01:37.443004 3631 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:01:37.444162 kubelet[3631]: I0117 12:01:37.444084 3631 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:01:37.444607 kubelet[3631]: I0117 12:01:37.444309 3631 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:01:37.444607 kubelet[3631]: I0117 12:01:37.444342 3631 policy_none.go:49] "None policy: Start" Jan 17 12:01:37.447337 kubelet[3631]: I0117 12:01:37.446770 3631 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:01:37.447337 kubelet[3631]: I0117 12:01:37.446823 3631 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:01:37.447337 kubelet[3631]: I0117 12:01:37.447112 3631 state_mem.go:75] "Updated machine memory state" Jan 17 12:01:37.452261 kubelet[3631]: I0117 12:01:37.452210 3631 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:01:37.460062 kubelet[3631]: I0117 12:01:37.457994 3631 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:01:37.539591 kubelet[3631]: I0117 12:01:37.539424 3631 topology_manager.go:215] "Topology Admit Handler" podUID="bd193353477d0c42fb849a50e0ffb957" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-162" Jan 17 12:01:37.539949 kubelet[3631]: I0117 12:01:37.539907 3631 topology_manager.go:215] "Topology Admit Handler" podUID="b17140aeb15d79c8398208be21226f46" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-162" Jan 17 12:01:37.541834 kubelet[3631]: I0117 12:01:37.541201 3631 topology_manager.go:215] "Topology Admit Handler" podUID="704b321a4e58557f8dd1c7db04ec054c" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-162" Jan 17 12:01:37.550653 kubelet[3631]: E0117 12:01:37.550559 3631 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-18-162\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-162" Jan 17 12:01:37.555265 kubelet[3631]: E0117 12:01:37.554910 3631 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-18-162\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-162" Jan 17 12:01:37.597204 kubelet[3631]: I0117 12:01:37.597127 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b17140aeb15d79c8398208be21226f46-ca-certs\") pod 
\"kube-controller-manager-ip-172-31-18-162\" (UID: \"b17140aeb15d79c8398208be21226f46\") " pod="kube-system/kube-controller-manager-ip-172-31-18-162" Jan 17 12:01:37.597342 kubelet[3631]: I0117 12:01:37.597233 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b17140aeb15d79c8398208be21226f46-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-162\" (UID: \"b17140aeb15d79c8398208be21226f46\") " pod="kube-system/kube-controller-manager-ip-172-31-18-162" Jan 17 12:01:37.597342 kubelet[3631]: I0117 12:01:37.597287 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/704b321a4e58557f8dd1c7db04ec054c-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-162\" (UID: \"704b321a4e58557f8dd1c7db04ec054c\") " pod="kube-system/kube-scheduler-ip-172-31-18-162" Jan 17 12:01:37.597342 kubelet[3631]: I0117 12:01:37.597334 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd193353477d0c42fb849a50e0ffb957-ca-certs\") pod \"kube-apiserver-ip-172-31-18-162\" (UID: \"bd193353477d0c42fb849a50e0ffb957\") " pod="kube-system/kube-apiserver-ip-172-31-18-162" Jan 17 12:01:37.597537 kubelet[3631]: I0117 12:01:37.597380 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd193353477d0c42fb849a50e0ffb957-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-162\" (UID: \"bd193353477d0c42fb849a50e0ffb957\") " pod="kube-system/kube-apiserver-ip-172-31-18-162" Jan 17 12:01:37.597537 kubelet[3631]: I0117 12:01:37.597428 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd193353477d0c42fb849a50e0ffb957-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-162\" (UID: \"bd193353477d0c42fb849a50e0ffb957\") " pod="kube-system/kube-apiserver-ip-172-31-18-162" Jan 17 12:01:37.597537 kubelet[3631]: I0117 12:01:37.597473 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b17140aeb15d79c8398208be21226f46-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-162\" (UID: \"b17140aeb15d79c8398208be21226f46\") " pod="kube-system/kube-controller-manager-ip-172-31-18-162" Jan 17 12:01:37.597537 kubelet[3631]: I0117 12:01:37.597530 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b17140aeb15d79c8398208be21226f46-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-162\" (UID: \"b17140aeb15d79c8398208be21226f46\") " pod="kube-system/kube-controller-manager-ip-172-31-18-162" Jan 17 12:01:37.597802 kubelet[3631]: I0117 12:01:37.597612 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b17140aeb15d79c8398208be21226f46-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-162\" (UID: \"b17140aeb15d79c8398208be21226f46\") " pod="kube-system/kube-controller-manager-ip-172-31-18-162" Jan 17 12:01:38.051397 sudo[3644]: pam_unix(sudo:session): session closed for user root Jan 17 
12:01:38.142940 kubelet[3631]: I0117 12:01:38.142861 3631 apiserver.go:52] "Watching apiserver" Jan 17 12:01:38.189547 kubelet[3631]: I0117 12:01:38.189424 3631 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:01:38.307628 kubelet[3631]: E0117 12:01:38.307553 3631 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-18-162\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-162" Jan 17 12:01:38.309336 kubelet[3631]: E0117 12:01:38.309299 3631 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-18-162\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-162" Jan 17 12:01:38.334120 kubelet[3631]: I0117 12:01:38.333026 3631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-162" podStartSLOduration=3.332938421 podStartE2EDuration="3.332938421s" podCreationTimestamp="2025-01-17 12:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:01:38.331916453 +0000 UTC m=+1.337070128" watchObservedRunningTime="2025-01-17 12:01:38.332938421 +0000 UTC m=+1.338092072" Jan 17 12:01:38.360950 kubelet[3631]: I0117 12:01:38.360602 3631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-162" podStartSLOduration=2.360529865 podStartE2EDuration="2.360529865s" podCreationTimestamp="2025-01-17 12:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:01:38.360439973 +0000 UTC m=+1.365593624" watchObservedRunningTime="2025-01-17 12:01:38.360529865 +0000 UTC m=+1.365683516" Jan 17 12:01:38.360950 kubelet[3631]: I0117 12:01:38.360765 3631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-162" podStartSLOduration=1.360727313 podStartE2EDuration="1.360727313s" podCreationTimestamp="2025-01-17 12:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:01:38.346509341 +0000 UTC m=+1.351663004" watchObservedRunningTime="2025-01-17 12:01:38.360727313 +0000 UTC m=+1.365880988" Jan 17 12:01:41.439238 sudo[2486]: pam_unix(sudo:session): session closed for user root Jan 17 12:01:41.463110 sshd[2482]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:41.469258 systemd[1]: sshd@6-172.31.18.162:22-139.178.68.195:42672.service: Deactivated successfully. Jan 17 12:01:41.475075 systemd-logind[2098]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:01:41.477083 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:01:41.481078 systemd-logind[2098]: Removed session 7. Jan 17 12:01:50.962251 kubelet[3631]: I0117 12:01:50.962175 3631 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:01:50.965505 kubelet[3631]: I0117 12:01:50.964463 3631 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:01:50.967379 containerd[2133]: time="2025-01-17T12:01:50.963134443Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
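The pod_startup_latency_tracker lines above report podStartSLOduration as the gap between podCreationTimestamp and watchObservedRunningTime; for kube-apiserver-ip-172-31-18-162 that is 12:01:38.332938421 minus 12:01:35, i.e. 3.332938421s. A small self-contained Go check of that arithmetic, parsing the timestamps exactly as they appear in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching Go's default time.Time formatting used in these entries.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-01-17 12:01:35 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-01-17 12:01:38.332938421 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(observed.Sub(created)) // prints "3.332938421s"
    }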
Jan 17 12:01:51.049132 kubelet[3631]: I0117 12:01:51.049051 3631 topology_manager.go:215] "Topology Admit Handler" podUID="815fcb73-65e6-423d-922b-0122b94029a5" podNamespace="kube-system" podName="kube-proxy-nk76b" Jan 17 12:01:51.085735 kubelet[3631]: I0117 12:01:51.085129 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwhv6\" (UniqueName: \"kubernetes.io/projected/815fcb73-65e6-423d-922b-0122b94029a5-kube-api-access-qwhv6\") pod \"kube-proxy-nk76b\" (UID: \"815fcb73-65e6-423d-922b-0122b94029a5\") " pod="kube-system/kube-proxy-nk76b" Jan 17 12:01:51.090638 kubelet[3631]: I0117 12:01:51.088189 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/815fcb73-65e6-423d-922b-0122b94029a5-kube-proxy\") pod \"kube-proxy-nk76b\" (UID: \"815fcb73-65e6-423d-922b-0122b94029a5\") " pod="kube-system/kube-proxy-nk76b" Jan 17 12:01:51.090638 kubelet[3631]: I0117 12:01:51.088831 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/815fcb73-65e6-423d-922b-0122b94029a5-xtables-lock\") pod \"kube-proxy-nk76b\" (UID: \"815fcb73-65e6-423d-922b-0122b94029a5\") " pod="kube-system/kube-proxy-nk76b" Jan 17 12:01:51.090638 kubelet[3631]: I0117 12:01:51.089683 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/815fcb73-65e6-423d-922b-0122b94029a5-lib-modules\") pod \"kube-proxy-nk76b\" (UID: \"815fcb73-65e6-423d-922b-0122b94029a5\") " pod="kube-system/kube-proxy-nk76b" Jan 17 12:01:51.120038 kubelet[3631]: I0117 12:01:51.115695 3631 topology_manager.go:215] "Topology Admit Handler" podUID="282dfb05-c014-4ec4-85d0-786f0e08acc4" podNamespace="kube-system" podName="cilium-kxmck" Jan 17 12:01:51.192068 kubelet[3631]: I0117 12:01:51.191999 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-hostproc\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.192248 kubelet[3631]: I0117 12:01:51.192103 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-run\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.192248 kubelet[3631]: I0117 12:01:51.192158 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-bpf-maps\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.192248 kubelet[3631]: I0117 12:01:51.192208 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-host-proc-sys-kernel\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.192426 kubelet[3631]: I0117 12:01:51.192301 3631 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-etc-cni-netd\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.192426 kubelet[3631]: I0117 12:01:51.192349 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-cni-path\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.192426 kubelet[3631]: I0117 12:01:51.192394 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-host-proc-sys-net\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.192645 kubelet[3631]: I0117 12:01:51.192437 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/282dfb05-c014-4ec4-85d0-786f0e08acc4-hubble-tls\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.192645 kubelet[3631]: I0117 12:01:51.192484 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-xtables-lock\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.192645 kubelet[3631]: I0117 12:01:51.192530 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-config-path\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.192645 kubelet[3631]: I0117 12:01:51.192606 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxlzd\" (UniqueName: \"kubernetes.io/projected/282dfb05-c014-4ec4-85d0-786f0e08acc4-kube-api-access-zxlzd\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.192903 kubelet[3631]: I0117 12:01:51.192661 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-cgroup\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.192903 kubelet[3631]: I0117 12:01:51.192707 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-lib-modules\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.192903 kubelet[3631]: I0117 12:01:51.192781 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/282dfb05-c014-4ec4-85d0-786f0e08acc4-clustermesh-secrets\") pod \"cilium-kxmck\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " pod="kube-system/cilium-kxmck" Jan 17 12:01:51.204893 kubelet[3631]: E0117 12:01:51.204787 3631 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 12:01:51.204893 kubelet[3631]: E0117 12:01:51.204860 3631 projected.go:200] Error preparing data for projected volume kube-api-access-qwhv6 for pod kube-system/kube-proxy-nk76b: configmap "kube-root-ca.crt" not found Jan 17 12:01:51.205364 kubelet[3631]: E0117 12:01:51.204988 3631 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/815fcb73-65e6-423d-922b-0122b94029a5-kube-api-access-qwhv6 podName:815fcb73-65e6-423d-922b-0122b94029a5 nodeName:}" failed. No retries permitted until 2025-01-17 12:01:51.704949877 +0000 UTC m=+14.710103504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qwhv6" (UniqueName: "kubernetes.io/projected/815fcb73-65e6-423d-922b-0122b94029a5-kube-api-access-qwhv6") pod "kube-proxy-nk76b" (UID: "815fcb73-65e6-423d-922b-0122b94029a5") : configmap "kube-root-ca.crt" not found Jan 17 12:01:51.314557 kubelet[3631]: E0117 12:01:51.313994 3631 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 12:01:51.314557 kubelet[3631]: E0117 12:01:51.314101 3631 projected.go:200] Error preparing data for projected volume kube-api-access-zxlzd for pod kube-system/cilium-kxmck: configmap "kube-root-ca.crt" not found Jan 17 12:01:51.314557 kubelet[3631]: E0117 12:01:51.314199 3631 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/282dfb05-c014-4ec4-85d0-786f0e08acc4-kube-api-access-zxlzd podName:282dfb05-c014-4ec4-85d0-786f0e08acc4 nodeName:}" failed. No retries permitted until 2025-01-17 12:01:51.814170049 +0000 UTC m=+14.819323688 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zxlzd" (UniqueName: "kubernetes.io/projected/282dfb05-c014-4ec4-85d0-786f0e08acc4-kube-api-access-zxlzd") pod "cilium-kxmck" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4") : configmap "kube-root-ca.crt" not found Jan 17 12:01:51.797813 kubelet[3631]: E0117 12:01:51.797673 3631 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 12:01:51.798639 kubelet[3631]: E0117 12:01:51.798499 3631 projected.go:200] Error preparing data for projected volume kube-api-access-qwhv6 for pod kube-system/kube-proxy-nk76b: configmap "kube-root-ca.crt" not found Jan 17 12:01:51.801316 kubelet[3631]: E0117 12:01:51.798848 3631 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/815fcb73-65e6-423d-922b-0122b94029a5-kube-api-access-qwhv6 podName:815fcb73-65e6-423d-922b-0122b94029a5 nodeName:}" failed. No retries permitted until 2025-01-17 12:01:52.798813619 +0000 UTC m=+15.803967270 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qwhv6" (UniqueName: "kubernetes.io/projected/815fcb73-65e6-423d-922b-0122b94029a5-kube-api-access-qwhv6") pod "kube-proxy-nk76b" (UID: "815fcb73-65e6-423d-922b-0122b94029a5") : configmap "kube-root-ca.crt" not found Jan 17 12:01:52.046633 kubelet[3631]: I0117 12:01:52.040978 3631 topology_manager.go:215] "Topology Admit Handler" podUID="b20fbcd7-c41f-4767-aba0-b40b37cfd576" podNamespace="kube-system" podName="cilium-operator-5cc964979-n9swn" Jan 17 12:01:52.054017 containerd[2133]: time="2025-01-17T12:01:52.053939261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kxmck,Uid:282dfb05-c014-4ec4-85d0-786f0e08acc4,Namespace:kube-system,Attempt:0,}" Jan 17 12:01:52.106644 kubelet[3631]: I0117 12:01:52.103504 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg77h\" (UniqueName: \"kubernetes.io/projected/b20fbcd7-c41f-4767-aba0-b40b37cfd576-kube-api-access-mg77h\") pod \"cilium-operator-5cc964979-n9swn\" (UID: \"b20fbcd7-c41f-4767-aba0-b40b37cfd576\") " pod="kube-system/cilium-operator-5cc964979-n9swn" Jan 17 12:01:52.106644 kubelet[3631]: I0117 12:01:52.103621 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b20fbcd7-c41f-4767-aba0-b40b37cfd576-cilium-config-path\") pod \"cilium-operator-5cc964979-n9swn\" (UID: \"b20fbcd7-c41f-4767-aba0-b40b37cfd576\") " pod="kube-system/cilium-operator-5cc964979-n9swn" Jan 17 12:01:52.187213 containerd[2133]: time="2025-01-17T12:01:52.173002073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:01:52.187213 containerd[2133]: time="2025-01-17T12:01:52.186728393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:01:52.187213 containerd[2133]: time="2025-01-17T12:01:52.186780317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:52.187213 containerd[2133]: time="2025-01-17T12:01:52.186993365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:52.318437 containerd[2133]: time="2025-01-17T12:01:52.318268026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kxmck,Uid:282dfb05-c014-4ec4-85d0-786f0e08acc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\"" Jan 17 12:01:52.323120 containerd[2133]: time="2025-01-17T12:01:52.322986522Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 17 12:01:52.374530 containerd[2133]: time="2025-01-17T12:01:52.374480010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-n9swn,Uid:b20fbcd7-c41f-4767-aba0-b40b37cfd576,Namespace:kube-system,Attempt:0,}" Jan 17 12:01:52.422609 containerd[2133]: time="2025-01-17T12:01:52.422358103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:01:52.422849 containerd[2133]: time="2025-01-17T12:01:52.422537467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:01:52.422849 containerd[2133]: time="2025-01-17T12:01:52.422627155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:52.423164 containerd[2133]: time="2025-01-17T12:01:52.422946283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:52.514857 containerd[2133]: time="2025-01-17T12:01:52.514726603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-n9swn,Uid:b20fbcd7-c41f-4767-aba0-b40b37cfd576,Namespace:kube-system,Attempt:0,} returns sandbox id \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\"" Jan 17 12:01:52.868609 containerd[2133]: time="2025-01-17T12:01:52.868089849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nk76b,Uid:815fcb73-65e6-423d-922b-0122b94029a5,Namespace:kube-system,Attempt:0,}" Jan 17 12:01:52.915641 containerd[2133]: time="2025-01-17T12:01:52.914429253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:01:52.915641 containerd[2133]: time="2025-01-17T12:01:52.915507801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:01:52.916001 containerd[2133]: time="2025-01-17T12:01:52.915660741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:52.916404 containerd[2133]: time="2025-01-17T12:01:52.916035453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:01:52.978870 containerd[2133]: time="2025-01-17T12:01:52.978804705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nk76b,Uid:815fcb73-65e6-423d-922b-0122b94029a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b404e9840b73f4ce4db0bea13782f91cfb072b30e33e6501eec110b051b6ce97\"" Jan 17 12:01:52.984796 containerd[2133]: time="2025-01-17T12:01:52.984720729Z" level=info msg="CreateContainer within sandbox \"b404e9840b73f4ce4db0bea13782f91cfb072b30e33e6501eec110b051b6ce97\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:01:53.011867 containerd[2133]: time="2025-01-17T12:01:53.011748989Z" level=info msg="CreateContainer within sandbox \"b404e9840b73f4ce4db0bea13782f91cfb072b30e33e6501eec110b051b6ce97\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f2b7426a7b32f9136c8ecd2db57ce129d01a2646612494e8061e8e40e08f54b3\"" Jan 17 12:01:53.017084 containerd[2133]: time="2025-01-17T12:01:53.015127277Z" level=info msg="StartContainer for \"f2b7426a7b32f9136c8ecd2db57ce129d01a2646612494e8061e8e40e08f54b3\"" Jan 17 12:01:53.129524 containerd[2133]: time="2025-01-17T12:01:53.129340914Z" level=info msg="StartContainer for \"f2b7426a7b32f9136c8ecd2db57ce129d01a2646612494e8061e8e40e08f54b3\" returns successfully" Jan 17 12:01:53.364247 kubelet[3631]: I0117 12:01:53.363949 3631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nk76b" podStartSLOduration=2.363883135 podStartE2EDuration="2.363883135s" podCreationTimestamp="2025-01-17 12:01:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:01:53.361716643 +0000 UTC m=+16.366870318" watchObservedRunningTime="2025-01-17 12:01:53.363883135 +0000 UTC m=+16.369036786" Jan 17 12:01:59.231307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554675723.mount: Deactivated successfully. 
Jan 17 12:02:01.920439 containerd[2133]: time="2025-01-17T12:02:01.920329710Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:01.922307 containerd[2133]: time="2025-01-17T12:02:01.922234782Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651514" Jan 17 12:02:01.924209 containerd[2133]: time="2025-01-17T12:02:01.924126210Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:01.928223 containerd[2133]: time="2025-01-17T12:02:01.927898146Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.60482196s" Jan 17 12:02:01.928223 containerd[2133]: time="2025-01-17T12:02:01.927991026Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 17 12:02:01.931566 containerd[2133]: time="2025-01-17T12:02:01.930158418Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 17 12:02:01.932306 containerd[2133]: time="2025-01-17T12:02:01.932240922Z" level=info msg="CreateContainer within sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:02:01.958495 containerd[2133]: time="2025-01-17T12:02:01.958442874Z" level=info msg="CreateContainer within sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187\"" Jan 17 12:02:01.961483 containerd[2133]: time="2025-01-17T12:02:01.961412514Z" level=info msg="StartContainer for \"180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187\"" Jan 17 12:02:02.076624 containerd[2133]: time="2025-01-17T12:02:02.074296058Z" level=info msg="StartContainer for \"180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187\" returns successfully" Jan 17 12:02:02.952825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187-rootfs.mount: Deactivated successfully. 
Jan 17 12:02:02.958714 containerd[2133]: time="2025-01-17T12:02:02.958555447Z" level=info msg="shim disconnected" id=180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187 namespace=k8s.io Jan 17 12:02:02.958714 containerd[2133]: time="2025-01-17T12:02:02.958661359Z" level=warning msg="cleaning up after shim disconnected" id=180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187 namespace=k8s.io Jan 17 12:02:02.958714 containerd[2133]: time="2025-01-17T12:02:02.958682455Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:02:03.402139 containerd[2133]: time="2025-01-17T12:02:03.402021425Z" level=info msg="CreateContainer within sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:02:03.431017 containerd[2133]: time="2025-01-17T12:02:03.430804085Z" level=info msg="CreateContainer within sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4\"" Jan 17 12:02:03.434617 containerd[2133]: time="2025-01-17T12:02:03.433254005Z" level=info msg="StartContainer for \"5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4\"" Jan 17 12:02:03.534617 containerd[2133]: time="2025-01-17T12:02:03.534530886Z" level=info msg="StartContainer for \"5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4\" returns successfully" Jan 17 12:02:03.558653 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:02:03.559295 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:02:03.559476 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:02:03.572184 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:02:03.619986 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:02:03.626902 containerd[2133]: time="2025-01-17T12:02:03.626801730Z" level=info msg="shim disconnected" id=5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4 namespace=k8s.io Jan 17 12:02:03.626902 containerd[2133]: time="2025-01-17T12:02:03.626895810Z" level=warning msg="cleaning up after shim disconnected" id=5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4 namespace=k8s.io Jan 17 12:02:03.627854 containerd[2133]: time="2025-01-17T12:02:03.626918862Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:02:03.952507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4-rootfs.mount: Deactivated successfully. 
Jan 17 12:02:04.406943 containerd[2133]: time="2025-01-17T12:02:04.406885518Z" level=info msg="CreateContainer within sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 17 12:02:04.446648 containerd[2133]: time="2025-01-17T12:02:04.446530578Z" level=info msg="CreateContainer within sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51\"" Jan 17 12:02:04.447738 containerd[2133]: time="2025-01-17T12:02:04.447673626Z" level=info msg="StartContainer for \"d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51\"" Jan 17 12:02:04.559010 containerd[2133]: time="2025-01-17T12:02:04.558947167Z" level=info msg="StartContainer for \"d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51\" returns successfully" Jan 17 12:02:04.604250 containerd[2133]: time="2025-01-17T12:02:04.604092415Z" level=info msg="shim disconnected" id=d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51 namespace=k8s.io Jan 17 12:02:04.604250 containerd[2133]: time="2025-01-17T12:02:04.604168243Z" level=warning msg="cleaning up after shim disconnected" id=d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51 namespace=k8s.io Jan 17 12:02:04.604250 containerd[2133]: time="2025-01-17T12:02:04.604188403Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:02:04.952086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51-rootfs.mount: Deactivated successfully. Jan 17 12:02:05.412249 containerd[2133]: time="2025-01-17T12:02:05.412141975Z" level=info msg="CreateContainer within sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 17 12:02:05.446394 containerd[2133]: time="2025-01-17T12:02:05.446220499Z" level=info msg="CreateContainer within sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976\"" Jan 17 12:02:05.447758 containerd[2133]: time="2025-01-17T12:02:05.447636667Z" level=info msg="StartContainer for \"706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976\"" Jan 17 12:02:05.557892 containerd[2133]: time="2025-01-17T12:02:05.555769280Z" level=info msg="StartContainer for \"706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976\" returns successfully" Jan 17 12:02:05.593145 containerd[2133]: time="2025-01-17T12:02:05.593054768Z" level=info msg="shim disconnected" id=706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976 namespace=k8s.io Jan 17 12:02:05.593145 containerd[2133]: time="2025-01-17T12:02:05.593134436Z" level=warning msg="cleaning up after shim disconnected" id=706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976 namespace=k8s.io Jan 17 12:02:05.593777 containerd[2133]: time="2025-01-17T12:02:05.593158484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:02:05.952478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976-rootfs.mount: Deactivated successfully. 
Jan 17 12:02:06.424396 containerd[2133]: time="2025-01-17T12:02:06.424297844Z" level=info msg="CreateContainer within sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 17 12:02:06.460971 containerd[2133]: time="2025-01-17T12:02:06.460883228Z" level=info msg="CreateContainer within sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\"" Jan 17 12:02:06.462610 containerd[2133]: time="2025-01-17T12:02:06.462430400Z" level=info msg="StartContainer for \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\"" Jan 17 12:02:06.464502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088199168.mount: Deactivated successfully. Jan 17 12:02:06.583058 containerd[2133]: time="2025-01-17T12:02:06.582940149Z" level=info msg="StartContainer for \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\" returns successfully" Jan 17 12:02:06.791446 kubelet[3631]: I0117 12:02:06.791160 3631 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:02:06.847611 kubelet[3631]: I0117 12:02:06.842357 3631 topology_manager.go:215] "Topology Admit Handler" podUID="12e74d01-c322-4434-a3d2-b18e7891d5df" podNamespace="kube-system" podName="coredns-76f75df574-nl7gc" Jan 17 12:02:06.853755 kubelet[3631]: I0117 12:02:06.853711 3631 topology_manager.go:215] "Topology Admit Handler" podUID="d719f70c-10f9-4b72-ad56-55d7dcb47d42" podNamespace="kube-system" podName="coredns-76f75df574-l4fxn" Jan 17 12:02:06.912776 kubelet[3631]: I0117 12:02:06.912733 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12e74d01-c322-4434-a3d2-b18e7891d5df-config-volume\") pod \"coredns-76f75df574-nl7gc\" (UID: \"12e74d01-c322-4434-a3d2-b18e7891d5df\") " pod="kube-system/coredns-76f75df574-nl7gc" Jan 17 12:02:06.913235 kubelet[3631]: I0117 12:02:06.913182 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d719f70c-10f9-4b72-ad56-55d7dcb47d42-config-volume\") pod \"coredns-76f75df574-l4fxn\" (UID: \"d719f70c-10f9-4b72-ad56-55d7dcb47d42\") " pod="kube-system/coredns-76f75df574-l4fxn" Jan 17 12:02:06.913628 kubelet[3631]: I0117 12:02:06.913565 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6nhg\" (UniqueName: \"kubernetes.io/projected/d719f70c-10f9-4b72-ad56-55d7dcb47d42-kube-api-access-x6nhg\") pod \"coredns-76f75df574-l4fxn\" (UID: \"d719f70c-10f9-4b72-ad56-55d7dcb47d42\") " pod="kube-system/coredns-76f75df574-l4fxn" Jan 17 12:02:06.913979 kubelet[3631]: I0117 12:02:06.913896 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2tqz\" (UniqueName: \"kubernetes.io/projected/12e74d01-c322-4434-a3d2-b18e7891d5df-kube-api-access-q2tqz\") pod \"coredns-76f75df574-nl7gc\" (UID: \"12e74d01-c322-4434-a3d2-b18e7891d5df\") " pod="kube-system/coredns-76f75df574-nl7gc" Jan 17 12:02:07.167715 containerd[2133]: time="2025-01-17T12:02:07.167161316Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-nl7gc,Uid:12e74d01-c322-4434-a3d2-b18e7891d5df,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:07.184681 containerd[2133]: time="2025-01-17T12:02:07.184321976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l4fxn,Uid:d719f70c-10f9-4b72-ad56-55d7dcb47d42,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:08.693072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1476779234.mount: Deactivated successfully. Jan 17 12:02:14.252733 containerd[2133]: time="2025-01-17T12:02:14.252420495Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:14.254739 containerd[2133]: time="2025-01-17T12:02:14.254684007Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138318" Jan 17 12:02:14.256389 containerd[2133]: time="2025-01-17T12:02:14.256312815Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:14.262101 containerd[2133]: time="2025-01-17T12:02:14.262029207Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 12.331780897s" Jan 17 12:02:14.262530 containerd[2133]: time="2025-01-17T12:02:14.262301115Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 17 12:02:14.267481 containerd[2133]: time="2025-01-17T12:02:14.267253443Z" level=info msg="CreateContainer within sandbox \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 17 12:02:14.292092 containerd[2133]: time="2025-01-17T12:02:14.292021839Z" level=info msg="CreateContainer within sandbox \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\"" Jan 17 12:02:14.296610 containerd[2133]: time="2025-01-17T12:02:14.296528007Z" level=info msg="StartContainer for \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\"" Jan 17 12:02:14.401455 containerd[2133]: time="2025-01-17T12:02:14.401267680Z" level=info msg="StartContainer for \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\" returns successfully" Jan 17 12:02:14.484806 kubelet[3631]: I0117 12:02:14.484730 3631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kxmck" podStartSLOduration=13.876964388 podStartE2EDuration="23.484670416s" podCreationTimestamp="2025-01-17 12:01:51 +0000 UTC" firstStartedPulling="2025-01-17 12:01:52.321216942 +0000 UTC m=+15.326370581" lastFinishedPulling="2025-01-17 12:02:01.928922958 +0000 UTC m=+24.934076609" 
observedRunningTime="2025-01-17 12:02:07.483909117 +0000 UTC m=+30.489062768" watchObservedRunningTime="2025-01-17 12:02:14.484670416 +0000 UTC m=+37.489824091" Jan 17 12:02:18.457061 systemd-networkd[1685]: cilium_host: Link UP Jan 17 12:02:18.458672 systemd-networkd[1685]: cilium_net: Link UP Jan 17 12:02:18.458684 systemd-networkd[1685]: cilium_net: Gained carrier Jan 17 12:02:18.459173 systemd-networkd[1685]: cilium_host: Gained carrier Jan 17 12:02:18.461162 systemd-networkd[1685]: cilium_host: Gained IPv6LL Jan 17 12:02:18.472272 (udev-worker)[4454]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:02:18.477561 (udev-worker)[4451]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:02:18.642174 (udev-worker)[4468]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:02:18.651017 systemd-networkd[1685]: cilium_vxlan: Link UP Jan 17 12:02:18.651037 systemd-networkd[1685]: cilium_vxlan: Gained carrier Jan 17 12:02:19.163699 kernel: NET: Registered PF_ALG protocol family Jan 17 12:02:19.437267 systemd-networkd[1685]: cilium_net: Gained IPv6LL Jan 17 12:02:19.693801 systemd-networkd[1685]: cilium_vxlan: Gained IPv6LL Jan 17 12:02:20.374150 systemd[1]: Started sshd@7-172.31.18.162:22-139.178.68.195:40750.service - OpenSSH per-connection server daemon (139.178.68.195:40750). Jan 17 12:02:20.570690 sshd[4677]: Accepted publickey for core from 139.178.68.195 port 40750 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:20.574451 sshd[4677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:20.587845 systemd-logind[2098]: New session 8 of user core. Jan 17 12:02:20.592417 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:02:20.837987 systemd-networkd[1685]: lxc_health: Link UP Jan 17 12:02:20.877603 systemd-networkd[1685]: lxc_health: Gained carrier Jan 17 12:02:21.064760 sshd[4677]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:21.078297 systemd[1]: sshd@7-172.31.18.162:22-139.178.68.195:40750.service: Deactivated successfully. Jan 17 12:02:21.090693 systemd-logind[2098]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:02:21.092613 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:02:21.097190 systemd-logind[2098]: Removed session 8. Jan 17 12:02:21.455680 systemd-networkd[1685]: lxcaa1a77c81db7: Link UP Jan 17 12:02:21.463637 kernel: eth0: renamed from tmp93798 Jan 17 12:02:21.472204 systemd-networkd[1685]: lxcaa1a77c81db7: Gained carrier Jan 17 12:02:21.568293 (udev-worker)[4464]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 12:02:21.573361 systemd-networkd[1685]: lxc706ff95b300d: Link UP Jan 17 12:02:21.584728 kernel: eth0: renamed from tmpde619 Jan 17 12:02:21.595842 systemd-networkd[1685]: lxc706ff95b300d: Gained carrier Jan 17 12:02:22.105597 kubelet[3631]: I0117 12:02:22.105492 3631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-n9swn" podStartSLOduration=8.35916779 podStartE2EDuration="30.105431086s" podCreationTimestamp="2025-01-17 12:01:52 +0000 UTC" firstStartedPulling="2025-01-17 12:01:52.516507907 +0000 UTC m=+15.521661558" lastFinishedPulling="2025-01-17 12:02:14.262771227 +0000 UTC m=+37.267924854" observedRunningTime="2025-01-17 12:02:14.486690892 +0000 UTC m=+37.491844543" watchObservedRunningTime="2025-01-17 12:02:22.105431086 +0000 UTC m=+45.110584749" Jan 17 12:02:22.124803 systemd-networkd[1685]: lxc_health: Gained IPv6LL Jan 17 12:02:22.700767 systemd-networkd[1685]: lxcaa1a77c81db7: Gained IPv6LL Jan 17 12:02:22.892832 systemd-networkd[1685]: lxc706ff95b300d: Gained IPv6LL Jan 17 12:02:25.242370 ntpd[2088]: Listen normally on 6 cilium_host 192.168.0.142:123 Jan 17 12:02:25.242505 ntpd[2088]: Listen normally on 7 cilium_net [fe80::900b:18ff:fe55:b428%4]:123 Jan 17 12:02:25.244187 ntpd[2088]: 17 Jan 12:02:25 ntpd[2088]: Listen normally on 6 cilium_host 192.168.0.142:123 Jan 17 12:02:25.244187 ntpd[2088]: 17 Jan 12:02:25 ntpd[2088]: Listen normally on 7 cilium_net [fe80::900b:18ff:fe55:b428%4]:123 Jan 17 12:02:25.244187 ntpd[2088]: 17 Jan 12:02:25 ntpd[2088]: Listen normally on 8 cilium_host [fe80::a839:f3ff:fe95:9bfe%5]:123 Jan 17 12:02:25.244187 ntpd[2088]: 17 Jan 12:02:25 ntpd[2088]: Listen normally on 9 cilium_vxlan [fe80::643f:65ff:fe60:282c%6]:123 Jan 17 12:02:25.244187 ntpd[2088]: 17 Jan 12:02:25 ntpd[2088]: Listen normally on 10 lxc_health [fe80::cc53:60ff:fe4c:3f70%8]:123 Jan 17 12:02:25.244187 ntpd[2088]: 17 Jan 12:02:25 ntpd[2088]: Listen normally on 11 lxcaa1a77c81db7 [fe80::b8bc:5aff:fe0d:2f68%10]:123 Jan 17 12:02:25.244187 ntpd[2088]: 17 Jan 12:02:25 ntpd[2088]: Listen normally on 12 lxc706ff95b300d [fe80::5058:a4ff:fe11:5472%12]:123 Jan 17 12:02:25.242618 ntpd[2088]: Listen normally on 8 cilium_host [fe80::a839:f3ff:fe95:9bfe%5]:123 Jan 17 12:02:25.242691 ntpd[2088]: Listen normally on 9 cilium_vxlan [fe80::643f:65ff:fe60:282c%6]:123 Jan 17 12:02:25.242760 ntpd[2088]: Listen normally on 10 lxc_health [fe80::cc53:60ff:fe4c:3f70%8]:123 Jan 17 12:02:25.242826 ntpd[2088]: Listen normally on 11 lxcaa1a77c81db7 [fe80::b8bc:5aff:fe0d:2f68%10]:123 Jan 17 12:02:25.242898 ntpd[2088]: Listen normally on 12 lxc706ff95b300d [fe80::5058:a4ff:fe11:5472%12]:123 Jan 17 12:02:26.097505 systemd[1]: Started sshd@8-172.31.18.162:22-139.178.68.195:37648.service - OpenSSH per-connection server daemon (139.178.68.195:37648). Jan 17 12:02:26.282654 sshd[4831]: Accepted publickey for core from 139.178.68.195 port 37648 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:26.285532 sshd[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:26.294954 systemd-logind[2098]: New session 9 of user core. Jan 17 12:02:26.305147 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:02:26.601935 sshd[4831]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:26.610286 systemd[1]: sshd@8-172.31.18.162:22-139.178.68.195:37648.service: Deactivated successfully. Jan 17 12:02:26.626088 systemd-logind[2098]: Session 9 logged out. 
Waiting for processes to exit. Jan 17 12:02:26.629348 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:02:26.635374 systemd-logind[2098]: Removed session 9. Jan 17 12:02:30.253137 containerd[2133]: time="2025-01-17T12:02:30.252960870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:30.253137 containerd[2133]: time="2025-01-17T12:02:30.253062642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:30.255065 containerd[2133]: time="2025-01-17T12:02:30.253129002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:30.255065 containerd[2133]: time="2025-01-17T12:02:30.253326870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:30.389946 containerd[2133]: time="2025-01-17T12:02:30.389000239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:30.390774 containerd[2133]: time="2025-01-17T12:02:30.390602659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:30.391477 containerd[2133]: time="2025-01-17T12:02:30.390737371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:30.394117 containerd[2133]: time="2025-01-17T12:02:30.392659351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:30.463611 containerd[2133]: time="2025-01-17T12:02:30.459989359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l4fxn,Uid:d719f70c-10f9-4b72-ad56-55d7dcb47d42,Namespace:kube-system,Attempt:0,} returns sandbox id \"de6198ea52e5388869761e9204e5ee1aaab11314bab5e5b640a737030c82369d\"" Jan 17 12:02:30.485707 containerd[2133]: time="2025-01-17T12:02:30.484096352Z" level=info msg="CreateContainer within sandbox \"de6198ea52e5388869761e9204e5ee1aaab11314bab5e5b640a737030c82369d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:02:30.534180 containerd[2133]: time="2025-01-17T12:02:30.534002600Z" level=info msg="CreateContainer within sandbox \"de6198ea52e5388869761e9204e5ee1aaab11314bab5e5b640a737030c82369d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ab86a7321370d4bc6561a07d36b443fafd95a13c020a08b1261efc7bf4d1e77\"" Jan 17 12:02:30.539307 containerd[2133]: time="2025-01-17T12:02:30.537553220Z" level=info msg="StartContainer for \"2ab86a7321370d4bc6561a07d36b443fafd95a13c020a08b1261efc7bf4d1e77\"" Jan 17 12:02:30.588109 containerd[2133]: time="2025-01-17T12:02:30.588055064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nl7gc,Uid:12e74d01-c322-4434-a3d2-b18e7891d5df,Namespace:kube-system,Attempt:0,} returns sandbox id \"9379832fd2c67d2c83fd36a9a63ee80e034b6b2e14180386493b6c81b9a81c34\"" Jan 17 12:02:30.595630 containerd[2133]: time="2025-01-17T12:02:30.595541060Z" level=info msg="CreateContainer within sandbox \"9379832fd2c67d2c83fd36a9a63ee80e034b6b2e14180386493b6c81b9a81c34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:02:30.630086 
containerd[2133]: time="2025-01-17T12:02:30.630025508Z" level=info msg="CreateContainer within sandbox \"9379832fd2c67d2c83fd36a9a63ee80e034b6b2e14180386493b6c81b9a81c34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"624fbd274cbc44b5464617c32d008e11258c4b97485df0eefeed3eda8428fee0\"" Jan 17 12:02:30.633925 containerd[2133]: time="2025-01-17T12:02:30.633865736Z" level=info msg="StartContainer for \"624fbd274cbc44b5464617c32d008e11258c4b97485df0eefeed3eda8428fee0\"" Jan 17 12:02:30.733365 containerd[2133]: time="2025-01-17T12:02:30.733296081Z" level=info msg="StartContainer for \"2ab86a7321370d4bc6561a07d36b443fafd95a13c020a08b1261efc7bf4d1e77\" returns successfully" Jan 17 12:02:30.779939 containerd[2133]: time="2025-01-17T12:02:30.779871885Z" level=info msg="StartContainer for \"624fbd274cbc44b5464617c32d008e11258c4b97485df0eefeed3eda8428fee0\" returns successfully" Jan 17 12:02:31.598281 kubelet[3631]: I0117 12:02:31.598221 3631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nl7gc" podStartSLOduration=39.598156881 podStartE2EDuration="39.598156881s" podCreationTimestamp="2025-01-17 12:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:31.595737165 +0000 UTC m=+54.600890816" watchObservedRunningTime="2025-01-17 12:02:31.598156881 +0000 UTC m=+54.603310532" Jan 17 12:02:31.649535 systemd[1]: Started sshd@9-172.31.18.162:22-139.178.68.195:37650.service - OpenSSH per-connection server daemon (139.178.68.195:37650). Jan 17 12:02:31.651438 kubelet[3631]: I0117 12:02:31.650666 3631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-l4fxn" podStartSLOduration=39.650605485 podStartE2EDuration="39.650605485s" podCreationTimestamp="2025-01-17 12:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:31.649050477 +0000 UTC m=+54.654204128" watchObservedRunningTime="2025-01-17 12:02:31.650605485 +0000 UTC m=+54.655759160" Jan 17 12:02:31.853048 sshd[5013]: Accepted publickey for core from 139.178.68.195 port 37650 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:31.855815 sshd[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:31.864194 systemd-logind[2098]: New session 10 of user core. Jan 17 12:02:31.873342 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:02:32.126880 sshd[5013]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:32.135055 systemd[1]: sshd@9-172.31.18.162:22-139.178.68.195:37650.service: Deactivated successfully. Jan 17 12:02:32.143185 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:02:32.143853 systemd-logind[2098]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:02:32.148307 systemd-logind[2098]: Removed session 10. Jan 17 12:02:37.157140 systemd[1]: Started sshd@10-172.31.18.162:22-139.178.68.195:50922.service - OpenSSH per-connection server daemon (139.178.68.195:50922). 
Jan 17 12:02:37.339268 sshd[5034]: Accepted publickey for core from 139.178.68.195 port 50922 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:37.342118 sshd[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:37.351014 systemd-logind[2098]: New session 11 of user core. Jan 17 12:02:37.357083 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:02:37.601988 sshd[5034]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:37.609208 systemd[1]: sshd@10-172.31.18.162:22-139.178.68.195:50922.service: Deactivated successfully. Jan 17 12:02:37.616186 systemd-logind[2098]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:02:37.616946 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:02:37.621502 systemd-logind[2098]: Removed session 11. Jan 17 12:02:42.632318 systemd[1]: Started sshd@11-172.31.18.162:22-139.178.68.195:50930.service - OpenSSH per-connection server daemon (139.178.68.195:50930). Jan 17 12:02:42.814199 sshd[5051]: Accepted publickey for core from 139.178.68.195 port 50930 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:42.816320 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:42.824976 systemd-logind[2098]: New session 12 of user core. Jan 17 12:02:42.835266 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:02:43.100780 sshd[5051]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:43.107207 systemd[1]: sshd@11-172.31.18.162:22-139.178.68.195:50930.service: Deactivated successfully. Jan 17 12:02:43.114931 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:02:43.116739 systemd-logind[2098]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:02:43.118718 systemd-logind[2098]: Removed session 12. Jan 17 12:02:48.132111 systemd[1]: Started sshd@12-172.31.18.162:22-139.178.68.195:58044.service - OpenSSH per-connection server daemon (139.178.68.195:58044). Jan 17 12:02:48.310404 sshd[5069]: Accepted publickey for core from 139.178.68.195 port 58044 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:48.313093 sshd[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:48.325207 systemd-logind[2098]: New session 13 of user core. Jan 17 12:02:48.336299 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:02:48.578082 sshd[5069]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:48.583843 systemd[1]: sshd@12-172.31.18.162:22-139.178.68.195:58044.service: Deactivated successfully. Jan 17 12:02:48.592360 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:02:48.596948 systemd-logind[2098]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:02:48.599346 systemd-logind[2098]: Removed session 13. Jan 17 12:02:48.609074 systemd[1]: Started sshd@13-172.31.18.162:22-139.178.68.195:58054.service - OpenSSH per-connection server daemon (139.178.68.195:58054). Jan 17 12:02:48.792307 sshd[5084]: Accepted publickey for core from 139.178.68.195 port 58054 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:48.795153 sshd[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:48.804907 systemd-logind[2098]: New session 14 of user core. Jan 17 12:02:48.810918 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 17 12:02:49.136838 sshd[5084]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:49.151423 systemd[1]: sshd@13-172.31.18.162:22-139.178.68.195:58054.service: Deactivated successfully. Jan 17 12:02:49.178116 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:02:49.186799 systemd-logind[2098]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:02:49.195208 systemd[1]: Started sshd@14-172.31.18.162:22-139.178.68.195:58062.service - OpenSSH per-connection server daemon (139.178.68.195:58062). Jan 17 12:02:49.197564 systemd-logind[2098]: Removed session 14. Jan 17 12:02:49.369818 sshd[5095]: Accepted publickey for core from 139.178.68.195 port 58062 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:49.372489 sshd[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:49.381290 systemd-logind[2098]: New session 15 of user core. Jan 17 12:02:49.397142 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:02:49.651935 sshd[5095]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:49.659657 systemd-logind[2098]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:02:49.661178 systemd[1]: sshd@14-172.31.18.162:22-139.178.68.195:58062.service: Deactivated successfully. Jan 17 12:02:49.668711 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:02:49.670829 systemd-logind[2098]: Removed session 15. Jan 17 12:02:54.682664 systemd[1]: Started sshd@15-172.31.18.162:22-139.178.68.195:58070.service - OpenSSH per-connection server daemon (139.178.68.195:58070). Jan 17 12:02:54.862445 sshd[5111]: Accepted publickey for core from 139.178.68.195 port 58070 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:02:54.864376 sshd[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:02:54.872011 systemd-logind[2098]: New session 16 of user core. Jan 17 12:02:54.879218 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:02:55.136812 sshd[5111]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:55.143557 systemd-logind[2098]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:02:55.144848 systemd[1]: sshd@15-172.31.18.162:22-139.178.68.195:58070.service: Deactivated successfully. Jan 17 12:02:55.152794 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:02:55.157531 systemd-logind[2098]: Removed session 16. Jan 17 12:03:00.168046 systemd[1]: Started sshd@16-172.31.18.162:22-139.178.68.195:55968.service - OpenSSH per-connection server daemon (139.178.68.195:55968). Jan 17 12:03:00.345421 sshd[5126]: Accepted publickey for core from 139.178.68.195 port 55968 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:00.348141 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:00.357706 systemd-logind[2098]: New session 17 of user core. Jan 17 12:03:00.364603 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:03:00.613264 sshd[5126]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:00.621877 systemd[1]: sshd@16-172.31.18.162:22-139.178.68.195:55968.service: Deactivated successfully. Jan 17 12:03:00.628174 systemd-logind[2098]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:03:00.629217 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:03:00.634696 systemd-logind[2098]: Removed session 17. 
Jan 17 12:03:05.646129 systemd[1]: Started sshd@17-172.31.18.162:22-139.178.68.195:44832.service - OpenSSH per-connection server daemon (139.178.68.195:44832). Jan 17 12:03:05.828818 sshd[5140]: Accepted publickey for core from 139.178.68.195 port 44832 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:05.831801 sshd[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:05.840971 systemd-logind[2098]: New session 18 of user core. Jan 17 12:03:05.853682 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:03:06.107942 sshd[5140]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:06.113307 systemd[1]: sshd@17-172.31.18.162:22-139.178.68.195:44832.service: Deactivated successfully. Jan 17 12:03:06.122034 systemd-logind[2098]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:03:06.123693 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:03:06.126085 systemd-logind[2098]: Removed session 18. Jan 17 12:03:06.138094 systemd[1]: Started sshd@18-172.31.18.162:22-139.178.68.195:44838.service - OpenSSH per-connection server daemon (139.178.68.195:44838). Jan 17 12:03:06.323199 sshd[5154]: Accepted publickey for core from 139.178.68.195 port 44838 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:06.326102 sshd[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:06.337730 systemd-logind[2098]: New session 19 of user core. Jan 17 12:03:06.344202 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:03:06.645706 sshd[5154]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:06.653459 systemd[1]: sshd@18-172.31.18.162:22-139.178.68.195:44838.service: Deactivated successfully. Jan 17 12:03:06.660946 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:03:06.663252 systemd-logind[2098]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:03:06.665383 systemd-logind[2098]: Removed session 19. Jan 17 12:03:06.680091 systemd[1]: Started sshd@19-172.31.18.162:22-139.178.68.195:44844.service - OpenSSH per-connection server daemon (139.178.68.195:44844). Jan 17 12:03:06.867151 sshd[5166]: Accepted publickey for core from 139.178.68.195 port 44844 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:06.869928 sshd[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:06.879791 systemd-logind[2098]: New session 20 of user core. Jan 17 12:03:06.887101 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:03:09.408165 sshd[5166]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:09.419440 systemd[1]: sshd@19-172.31.18.162:22-139.178.68.195:44844.service: Deactivated successfully. Jan 17 12:03:09.438155 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:03:09.444226 systemd-logind[2098]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:03:09.459444 systemd[1]: Started sshd@20-172.31.18.162:22-139.178.68.195:44856.service - OpenSSH per-connection server daemon (139.178.68.195:44856). Jan 17 12:03:09.464865 systemd-logind[2098]: Removed session 20. 
Jan 17 12:03:09.648816 sshd[5184]: Accepted publickey for core from 139.178.68.195 port 44856 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:09.651568 sshd[5184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:09.660919 systemd-logind[2098]: New session 21 of user core. Jan 17 12:03:09.668246 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:03:10.161159 sshd[5184]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:10.169698 systemd[1]: sshd@20-172.31.18.162:22-139.178.68.195:44856.service: Deactivated successfully. Jan 17 12:03:10.175763 systemd-logind[2098]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:03:10.176651 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:03:10.180365 systemd-logind[2098]: Removed session 21. Jan 17 12:03:10.190127 systemd[1]: Started sshd@21-172.31.18.162:22-139.178.68.195:44858.service - OpenSSH per-connection server daemon (139.178.68.195:44858). Jan 17 12:03:10.374269 sshd[5196]: Accepted publickey for core from 139.178.68.195 port 44858 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:10.376876 sshd[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:10.386323 systemd-logind[2098]: New session 22 of user core. Jan 17 12:03:10.394214 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:03:10.635800 sshd[5196]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:10.642313 systemd[1]: sshd@21-172.31.18.162:22-139.178.68.195:44858.service: Deactivated successfully. Jan 17 12:03:10.650745 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:03:10.652476 systemd-logind[2098]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:03:10.654935 systemd-logind[2098]: Removed session 22. Jan 17 12:03:15.665076 systemd[1]: Started sshd@22-172.31.18.162:22-139.178.68.195:34442.service - OpenSSH per-connection server daemon (139.178.68.195:34442). Jan 17 12:03:15.846029 sshd[5210]: Accepted publickey for core from 139.178.68.195 port 34442 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:15.848698 sshd[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:15.857183 systemd-logind[2098]: New session 23 of user core. Jan 17 12:03:15.866227 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:03:16.111970 sshd[5210]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:16.117343 systemd[1]: sshd@22-172.31.18.162:22-139.178.68.195:34442.service: Deactivated successfully. Jan 17 12:03:16.127784 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:03:16.130268 systemd-logind[2098]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:03:16.132196 systemd-logind[2098]: Removed session 23. Jan 17 12:03:21.144622 systemd[1]: Started sshd@23-172.31.18.162:22-139.178.68.195:34454.service - OpenSSH per-connection server daemon (139.178.68.195:34454). Jan 17 12:03:21.328286 sshd[5227]: Accepted publickey for core from 139.178.68.195 port 34454 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:21.331012 sshd[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:21.339175 systemd-logind[2098]: New session 24 of user core. Jan 17 12:03:21.347468 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 17 12:03:21.600122 sshd[5227]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:21.608453 systemd[1]: sshd@23-172.31.18.162:22-139.178.68.195:34454.service: Deactivated successfully. Jan 17 12:03:21.615996 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:03:21.618233 systemd-logind[2098]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:03:21.619981 systemd-logind[2098]: Removed session 24. Jan 17 12:03:26.630132 systemd[1]: Started sshd@24-172.31.18.162:22-139.178.68.195:46416.service - OpenSSH per-connection server daemon (139.178.68.195:46416). Jan 17 12:03:26.809263 sshd[5244]: Accepted publickey for core from 139.178.68.195 port 46416 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:26.811985 sshd[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:26.821168 systemd-logind[2098]: New session 25 of user core. Jan 17 12:03:26.827095 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:03:27.082473 sshd[5244]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:27.090717 systemd[1]: sshd@24-172.31.18.162:22-139.178.68.195:46416.service: Deactivated successfully. Jan 17 12:03:27.096772 systemd-logind[2098]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:03:27.096921 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:03:27.100605 systemd-logind[2098]: Removed session 25. Jan 17 12:03:32.118028 systemd[1]: Started sshd@25-172.31.18.162:22-139.178.68.195:46420.service - OpenSSH per-connection server daemon (139.178.68.195:46420). Jan 17 12:03:32.286798 sshd[5259]: Accepted publickey for core from 139.178.68.195 port 46420 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:32.289653 sshd[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:32.297479 systemd-logind[2098]: New session 26 of user core. Jan 17 12:03:32.303598 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 12:03:32.551692 sshd[5259]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:32.559949 systemd[1]: sshd@25-172.31.18.162:22-139.178.68.195:46420.service: Deactivated successfully. Jan 17 12:03:32.564990 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:03:32.565012 systemd-logind[2098]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:03:32.569685 systemd-logind[2098]: Removed session 26. Jan 17 12:03:32.585070 systemd[1]: Started sshd@26-172.31.18.162:22-139.178.68.195:46436.service - OpenSSH per-connection server daemon (139.178.68.195:46436). Jan 17 12:03:32.760533 sshd[5272]: Accepted publickey for core from 139.178.68.195 port 46436 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:32.763526 sshd[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:32.771477 systemd-logind[2098]: New session 27 of user core. Jan 17 12:03:32.783252 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 17 12:03:35.288792 containerd[2133]: time="2025-01-17T12:03:35.288651705Z" level=info msg="StopContainer for \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\" with timeout 30 (s)" Jan 17 12:03:35.293143 containerd[2133]: time="2025-01-17T12:03:35.289611693Z" level=info msg="Stop container \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\" with signal terminated" Jan 17 12:03:35.315888 kubelet[3631]: E0117 12:03:35.315738 3631 configmap.go:199] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found Jan 17 12:03:35.315888 kubelet[3631]: E0117 12:03:35.315858 3631 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-config-path podName:282dfb05-c014-4ec4-85d0-786f0e08acc4 nodeName:}" failed. No retries permitted until 2025-01-17 12:03:35.815830626 +0000 UTC m=+118.820984265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-config-path") pod "cilium-kxmck" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4") : configmap "cilium-config" not found Jan 17 12:03:35.444562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9-rootfs.mount: Deactivated successfully. Jan 17 12:03:35.470455 containerd[2133]: time="2025-01-17T12:03:35.470351146Z" level=info msg="shim disconnected" id=c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9 namespace=k8s.io Jan 17 12:03:35.470455 containerd[2133]: time="2025-01-17T12:03:35.470506654Z" level=warning msg="cleaning up after shim disconnected" id=c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9 namespace=k8s.io Jan 17 12:03:35.470455 containerd[2133]: time="2025-01-17T12:03:35.470530378Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:35.499052 containerd[2133]: time="2025-01-17T12:03:35.498878723Z" level=info msg="StopContainer for \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\" returns successfully" Jan 17 12:03:35.500296 containerd[2133]: time="2025-01-17T12:03:35.500019707Z" level=info msg="StopPodSandbox for \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\"" Jan 17 12:03:35.500296 containerd[2133]: time="2025-01-17T12:03:35.500084123Z" level=info msg="Container to stop \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:03:35.506947 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af-shm.mount: Deactivated successfully. Jan 17 12:03:35.559746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af-rootfs.mount: Deactivated successfully. 
Jan 17 12:03:35.563962 containerd[2133]: time="2025-01-17T12:03:35.563633111Z" level=info msg="shim disconnected" id=8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af namespace=k8s.io Jan 17 12:03:35.563962 containerd[2133]: time="2025-01-17T12:03:35.563714375Z" level=warning msg="cleaning up after shim disconnected" id=8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af namespace=k8s.io Jan 17 12:03:35.563962 containerd[2133]: time="2025-01-17T12:03:35.563738183Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:35.587302 containerd[2133]: time="2025-01-17T12:03:35.587233619Z" level=info msg="TearDown network for sandbox \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\" successfully" Jan 17 12:03:35.587302 containerd[2133]: time="2025-01-17T12:03:35.587302271Z" level=info msg="StopPodSandbox for \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\" returns successfully" Jan 17 12:03:35.714344 kubelet[3631]: I0117 12:03:35.713860 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg77h\" (UniqueName: \"kubernetes.io/projected/b20fbcd7-c41f-4767-aba0-b40b37cfd576-kube-api-access-mg77h\") pod \"b20fbcd7-c41f-4767-aba0-b40b37cfd576\" (UID: \"b20fbcd7-c41f-4767-aba0-b40b37cfd576\") " Jan 17 12:03:35.714344 kubelet[3631]: I0117 12:03:35.713944 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b20fbcd7-c41f-4767-aba0-b40b37cfd576-cilium-config-path\") pod \"b20fbcd7-c41f-4767-aba0-b40b37cfd576\" (UID: \"b20fbcd7-c41f-4767-aba0-b40b37cfd576\") " Jan 17 12:03:35.719782 kubelet[3631]: I0117 12:03:35.719713 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b20fbcd7-c41f-4767-aba0-b40b37cfd576-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b20fbcd7-c41f-4767-aba0-b40b37cfd576" (UID: "b20fbcd7-c41f-4767-aba0-b40b37cfd576"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:03:35.723679 kubelet[3631]: I0117 12:03:35.723502 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b20fbcd7-c41f-4767-aba0-b40b37cfd576-kube-api-access-mg77h" (OuterVolumeSpecName: "kube-api-access-mg77h") pod "b20fbcd7-c41f-4767-aba0-b40b37cfd576" (UID: "b20fbcd7-c41f-4767-aba0-b40b37cfd576"). InnerVolumeSpecName "kube-api-access-mg77h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:03:35.724236 systemd[1]: var-lib-kubelet-pods-b20fbcd7\x2dc41f\x2d4767\x2daba0\x2db40b37cfd576-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmg77h.mount: Deactivated successfully. 
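Annotation: the mount-unit names in the systemd lines above and below (var-lib-kubelet-pods-b20fbcd7\x2dc41f\x2d...-kube\x2dapi\x2daccess\x2dmg77h.mount) are systemd's path escaping at work: "/" separators become "-", while "-" and "~" inside path components become \x2d and \x7e. A minimal reimplementation that reproduces the unit name from the kubelet volume directory; leading-dot handling and other systemd-escape(1) corner cases are omitted from this sketch:

    def systemd_escape_path(path):
        # Keep ASCII alphanumerics plus ":", "_", "."; join components with "-";
        # everything else becomes a C-style \xNN escape, as systemd-escape --path does.
        def esc(segment):
            return "".join(c if (c.isascii() and c.isalnum()) or c in ":_."
                           else "\\x%02x" % ord(c)
                           for c in segment)
        return "-".join(esc(s) for s in path.strip("/").split("/"))

    unit = systemd_escape_path(
        "/var/lib/kubelet/pods/b20fbcd7-c41f-4767-aba0-b40b37cfd576"
        "/volumes/kubernetes.io~projected/kube-api-access-mg77h") + ".mount"
    print(unit)  # matches the unit systemd deactivates at 12:03:35.724 above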
Jan 17 12:03:35.749209 kubelet[3631]: I0117 12:03:35.749047 3631 scope.go:117] "RemoveContainer" containerID="c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9" Jan 17 12:03:35.757566 containerd[2133]: time="2025-01-17T12:03:35.756888972Z" level=info msg="RemoveContainer for \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\"" Jan 17 12:03:35.771414 containerd[2133]: time="2025-01-17T12:03:35.770802372Z" level=info msg="RemoveContainer for \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\" returns successfully" Jan 17 12:03:35.772131 kubelet[3631]: I0117 12:03:35.771969 3631 scope.go:117] "RemoveContainer" containerID="c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9" Jan 17 12:03:35.772871 containerd[2133]: time="2025-01-17T12:03:35.772796328Z" level=error msg="ContainerStatus for \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\": not found" Jan 17 12:03:35.773745 kubelet[3631]: E0117 12:03:35.773515 3631 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\": not found" containerID="c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9" Jan 17 12:03:35.773745 kubelet[3631]: I0117 12:03:35.773704 3631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9"} err="failed to get container status \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6505b3711fff470ba0f9af7655f4995f47ad9fa26e087fcbfc90bad23481ca9\": not found" Jan 17 12:03:35.815208 kubelet[3631]: I0117 12:03:35.814792 3631 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mg77h\" (UniqueName: \"kubernetes.io/projected/b20fbcd7-c41f-4767-aba0-b40b37cfd576-kube-api-access-mg77h\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:35.815833 kubelet[3631]: I0117 12:03:35.815561 3631 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b20fbcd7-c41f-4767-aba0-b40b37cfd576-cilium-config-path\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:35.825046 containerd[2133]: time="2025-01-17T12:03:35.824935320Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:03:35.833527 containerd[2133]: time="2025-01-17T12:03:35.833477220Z" level=info msg="StopContainer for \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\" with timeout 2 (s)" Jan 17 12:03:35.834209 containerd[2133]: time="2025-01-17T12:03:35.834165432Z" level=info msg="Stop container \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\" with signal terminated" Jan 17 12:03:35.847187 systemd-networkd[1685]: lxc_health: Link DOWN Jan 17 12:03:35.847206 systemd-networkd[1685]: lxc_health: Lost carrier Jan 17 12:03:35.848045 systemd-resolved[2016]: lxc_health: Failed to determine whether the interface is managed, 
ignoring: No such file or directory Jan 17 12:03:35.916203 kubelet[3631]: E0117 12:03:35.916090 3631 configmap.go:199] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found Jan 17 12:03:35.916203 kubelet[3631]: E0117 12:03:35.916195 3631 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-config-path podName:282dfb05-c014-4ec4-85d0-786f0e08acc4 nodeName:}" failed. No retries permitted until 2025-01-17 12:03:36.916166905 +0000 UTC m=+119.921320544 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-config-path") pod "cilium-kxmck" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4") : configmap "cilium-config" not found Jan 17 12:03:35.932650 containerd[2133]: time="2025-01-17T12:03:35.932497453Z" level=info msg="shim disconnected" id=2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f namespace=k8s.io Jan 17 12:03:35.932650 containerd[2133]: time="2025-01-17T12:03:35.932599933Z" level=warning msg="cleaning up after shim disconnected" id=2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f namespace=k8s.io Jan 17 12:03:35.932650 containerd[2133]: time="2025-01-17T12:03:35.932623213Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:35.962159 containerd[2133]: time="2025-01-17T12:03:35.961945957Z" level=info msg="StopContainer for \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\" returns successfully" Jan 17 12:03:35.962985 containerd[2133]: time="2025-01-17T12:03:35.962630845Z" level=info msg="StopPodSandbox for \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\"" Jan 17 12:03:35.962985 containerd[2133]: time="2025-01-17T12:03:35.962707921Z" level=info msg="Container to stop \"5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:03:35.962985 containerd[2133]: time="2025-01-17T12:03:35.962736229Z" level=info msg="Container to stop \"d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:03:35.962985 containerd[2133]: time="2025-01-17T12:03:35.962759101Z" level=info msg="Container to stop \"706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:03:35.962985 containerd[2133]: time="2025-01-17T12:03:35.962784637Z" level=info msg="Container to stop \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:03:35.962985 containerd[2133]: time="2025-01-17T12:03:35.962807353Z" level=info msg="Container to stop \"180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 17 12:03:36.012022 containerd[2133]: time="2025-01-17T12:03:36.011178165Z" level=info msg="shim disconnected" id=4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b namespace=k8s.io Jan 17 12:03:36.012022 containerd[2133]: time="2025-01-17T12:03:36.011335701Z" level=warning msg="cleaning up after shim disconnected" id=4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b namespace=k8s.io Jan 17 12:03:36.012022 containerd[2133]: 
time="2025-01-17T12:03:36.011358297Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:36.036203 containerd[2133]: time="2025-01-17T12:03:36.036026853Z" level=info msg="TearDown network for sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" successfully" Jan 17 12:03:36.036203 containerd[2133]: time="2025-01-17T12:03:36.036080805Z" level=info msg="StopPodSandbox for \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" returns successfully" Jan 17 12:03:36.118148 kubelet[3631]: I0117 12:03:36.116914 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-host-proc-sys-kernel\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.118148 kubelet[3631]: I0117 12:03:36.116982 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-cni-path\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.118148 kubelet[3631]: I0117 12:03:36.117030 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/282dfb05-c014-4ec4-85d0-786f0e08acc4-hubble-tls\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.118148 kubelet[3631]: I0117 12:03:36.117054 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:03:36.118148 kubelet[3631]: I0117 12:03:36.117089 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/282dfb05-c014-4ec4-85d0-786f0e08acc4-clustermesh-secrets\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.118148 kubelet[3631]: I0117 12:03:36.117130 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-xtables-lock\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.119655 kubelet[3631]: I0117 12:03:36.117174 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxlzd\" (UniqueName: \"kubernetes.io/projected/282dfb05-c014-4ec4-85d0-786f0e08acc4-kube-api-access-zxlzd\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.119655 kubelet[3631]: I0117 12:03:36.117217 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-cgroup\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.119655 kubelet[3631]: I0117 12:03:36.117282 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-host-proc-sys-net\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.119655 kubelet[3631]: I0117 12:03:36.117338 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-config-path\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.119655 kubelet[3631]: I0117 12:03:36.117379 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-hostproc\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.119655 kubelet[3631]: I0117 12:03:36.117419 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-etc-cni-netd\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.120031 kubelet[3631]: I0117 12:03:36.117459 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-lib-modules\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.120031 kubelet[3631]: I0117 12:03:36.117498 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-bpf-maps\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: 
\"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.120031 kubelet[3631]: I0117 12:03:36.117543 3631 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-run\") pod \"282dfb05-c014-4ec4-85d0-786f0e08acc4\" (UID: \"282dfb05-c014-4ec4-85d0-786f0e08acc4\") " Jan 17 12:03:36.120031 kubelet[3631]: I0117 12:03:36.117643 3631 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-host-proc-sys-kernel\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.120031 kubelet[3631]: I0117 12:03:36.117696 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:03:36.120031 kubelet[3631]: I0117 12:03:36.118403 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:03:36.120354 kubelet[3631]: I0117 12:03:36.119546 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:03:36.120354 kubelet[3631]: I0117 12:03:36.117125 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-cni-path" (OuterVolumeSpecName: "cni-path") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:03:36.122789 kubelet[3631]: I0117 12:03:36.122552 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:03:36.122961 kubelet[3631]: I0117 12:03:36.122849 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:03:36.123324 kubelet[3631]: I0117 12:03:36.123244 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:03:36.123422 kubelet[3631]: I0117 12:03:36.123349 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:03:36.123484 kubelet[3631]: I0117 12:03:36.123422 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-hostproc" (OuterVolumeSpecName: "hostproc") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 17 12:03:36.128802 kubelet[3631]: I0117 12:03:36.128688 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/282dfb05-c014-4ec4-85d0-786f0e08acc4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 17 12:03:36.130616 kubelet[3631]: I0117 12:03:36.129969 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282dfb05-c014-4ec4-85d0-786f0e08acc4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:03:36.131313 kubelet[3631]: I0117 12:03:36.131249 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282dfb05-c014-4ec4-85d0-786f0e08acc4-kube-api-access-zxlzd" (OuterVolumeSpecName: "kube-api-access-zxlzd") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "kube-api-access-zxlzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 17 12:03:36.132318 kubelet[3631]: I0117 12:03:36.132278 3631 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "282dfb05-c014-4ec4-85d0-786f0e08acc4" (UID: "282dfb05-c014-4ec4-85d0-786f0e08acc4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 17 12:03:36.218632 kubelet[3631]: I0117 12:03:36.218562 3631 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-hostproc\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.218825 kubelet[3631]: I0117 12:03:36.218806 3631 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-etc-cni-netd\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.218980 kubelet[3631]: I0117 12:03:36.218962 3631 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-bpf-maps\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.219091 kubelet[3631]: I0117 12:03:36.219073 3631 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-lib-modules\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.219428 kubelet[3631]: I0117 12:03:36.219194 3631 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-run\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.219428 kubelet[3631]: I0117 12:03:36.219221 3631 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-cni-path\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.219428 kubelet[3631]: I0117 12:03:36.219244 3631 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/282dfb05-c014-4ec4-85d0-786f0e08acc4-hubble-tls\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.219428 kubelet[3631]: I0117 12:03:36.219271 3631 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/282dfb05-c014-4ec4-85d0-786f0e08acc4-clustermesh-secrets\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.219428 kubelet[3631]: I0117 12:03:36.219294 3631 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-xtables-lock\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.219428 kubelet[3631]: I0117 12:03:36.219318 3631 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-host-proc-sys-net\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.219428 kubelet[3631]: I0117 12:03:36.219342 3631 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-config-path\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.219428 kubelet[3631]: I0117 12:03:36.219367 3631 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-zxlzd\" (UniqueName: \"kubernetes.io/projected/282dfb05-c014-4ec4-85d0-786f0e08acc4-kube-api-access-zxlzd\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.219875 kubelet[3631]: I0117 12:03:36.219389 3631 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/282dfb05-c014-4ec4-85d0-786f0e08acc4-cilium-cgroup\") on node \"ip-172-31-18-162\" DevicePath \"\"" Jan 17 12:03:36.441363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f-rootfs.mount: Deactivated successfully. Jan 17 12:03:36.441668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b-rootfs.mount: Deactivated successfully. Jan 17 12:03:36.441906 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b-shm.mount: Deactivated successfully. Jan 17 12:03:36.442125 systemd[1]: var-lib-kubelet-pods-282dfb05\x2dc014\x2d4ec4\x2d85d0\x2d786f0e08acc4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzxlzd.mount: Deactivated successfully. Jan 17 12:03:36.442355 systemd[1]: var-lib-kubelet-pods-282dfb05\x2dc014\x2d4ec4\x2d85d0\x2d786f0e08acc4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 17 12:03:36.442625 systemd[1]: var-lib-kubelet-pods-282dfb05\x2dc014\x2d4ec4\x2d85d0\x2d786f0e08acc4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 17 12:03:36.768894 kubelet[3631]: I0117 12:03:36.767011 3631 scope.go:117] "RemoveContainer" containerID="2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f" Jan 17 12:03:36.773402 containerd[2133]: time="2025-01-17T12:03:36.773008921Z" level=info msg="RemoveContainer for \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\"" Jan 17 12:03:36.781093 containerd[2133]: time="2025-01-17T12:03:36.780790513Z" level=info msg="RemoveContainer for \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\" returns successfully" Jan 17 12:03:36.781533 kubelet[3631]: I0117 12:03:36.781480 3631 scope.go:117] "RemoveContainer" containerID="706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976" Jan 17 12:03:36.787095 containerd[2133]: time="2025-01-17T12:03:36.786734281Z" level=info msg="RemoveContainer for \"706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976\"" Jan 17 12:03:36.792885 containerd[2133]: time="2025-01-17T12:03:36.792814765Z" level=info msg="RemoveContainer for \"706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976\" returns successfully" Jan 17 12:03:36.793197 kubelet[3631]: I0117 12:03:36.793152 3631 scope.go:117] "RemoveContainer" containerID="d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51" Jan 17 12:03:36.801611 containerd[2133]: time="2025-01-17T12:03:36.801228373Z" level=info msg="RemoveContainer for \"d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51\"" Jan 17 12:03:36.816264 containerd[2133]: time="2025-01-17T12:03:36.816190153Z" level=info msg="RemoveContainer for \"d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51\" returns successfully" Jan 17 12:03:36.816994 kubelet[3631]: I0117 12:03:36.816829 3631 scope.go:117] "RemoveContainer" containerID="5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4" Jan 17 12:03:36.819185 containerd[2133]: time="2025-01-17T12:03:36.819114277Z" level=info msg="RemoveContainer for \"5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4\"" Jan 17 12:03:36.827033 containerd[2133]: time="2025-01-17T12:03:36.826957393Z" level=info msg="RemoveContainer for \"5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4\" returns successfully" Jan 
17 12:03:36.827322 kubelet[3631]: I0117 12:03:36.827282 3631 scope.go:117] "RemoveContainer" containerID="180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187" Jan 17 12:03:36.829855 containerd[2133]: time="2025-01-17T12:03:36.829797697Z" level=info msg="RemoveContainer for \"180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187\"" Jan 17 12:03:36.835013 containerd[2133]: time="2025-01-17T12:03:36.834941425Z" level=info msg="RemoveContainer for \"180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187\" returns successfully" Jan 17 12:03:36.835533 kubelet[3631]: I0117 12:03:36.835297 3631 scope.go:117] "RemoveContainer" containerID="2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f" Jan 17 12:03:36.836099 containerd[2133]: time="2025-01-17T12:03:36.836020945Z" level=error msg="ContainerStatus for \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\": not found" Jan 17 12:03:36.836347 kubelet[3631]: E0117 12:03:36.836275 3631 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\": not found" containerID="2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f" Jan 17 12:03:36.836452 kubelet[3631]: I0117 12:03:36.836347 3631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f"} err="failed to get container status \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e4b57138c1abd75453735fc359e4b72115206b5bd9429893c49100612d5340f\": not found" Jan 17 12:03:36.836452 kubelet[3631]: I0117 12:03:36.836373 3631 scope.go:117] "RemoveContainer" containerID="706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976" Jan 17 12:03:36.837017 containerd[2133]: time="2025-01-17T12:03:36.836880409Z" level=error msg="ContainerStatus for \"706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976\": not found" Jan 17 12:03:36.837225 kubelet[3631]: E0117 12:03:36.837133 3631 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976\": not found" containerID="706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976" Jan 17 12:03:36.837225 kubelet[3631]: I0117 12:03:36.837194 3631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976"} err="failed to get container status \"706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976\": rpc error: code = NotFound desc = an error occurred when try to find container \"706ca770da1d3fb27b8a02ef5703fb3aac13d19a5aa4605ad8d25bb9a59db976\": not found" Jan 17 12:03:36.837225 kubelet[3631]: I0117 12:03:36.837219 3631 scope.go:117] "RemoveContainer" 
containerID="d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51" Jan 17 12:03:36.837827 containerd[2133]: time="2025-01-17T12:03:36.837610909Z" level=error msg="ContainerStatus for \"d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51\": not found" Jan 17 12:03:36.838197 kubelet[3631]: E0117 12:03:36.837996 3631 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51\": not found" containerID="d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51" Jan 17 12:03:36.838197 kubelet[3631]: I0117 12:03:36.838060 3631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51"} err="failed to get container status \"d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1611fa5ad81e45eef348bdef06b6ea1cd739a3792c5c587335a43d1334f6f51\": not found" Jan 17 12:03:36.838197 kubelet[3631]: I0117 12:03:36.838085 3631 scope.go:117] "RemoveContainer" containerID="5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4" Jan 17 12:03:36.838551 containerd[2133]: time="2025-01-17T12:03:36.838488373Z" level=error msg="ContainerStatus for \"5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4\": not found" Jan 17 12:03:36.839083 kubelet[3631]: E0117 12:03:36.838838 3631 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4\": not found" containerID="5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4" Jan 17 12:03:36.839083 kubelet[3631]: I0117 12:03:36.838894 3631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4"} err="failed to get container status \"5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"5dc460a0e3ac31a5ba389787fbff01fe00e87fe5431cf7563ac26fbdec3377b4\": not found" Jan 17 12:03:36.839083 kubelet[3631]: I0117 12:03:36.838917 3631 scope.go:117] "RemoveContainer" containerID="180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187" Jan 17 12:03:36.839408 containerd[2133]: time="2025-01-17T12:03:36.839293705Z" level=error msg="ContainerStatus for \"180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187\": not found" Jan 17 12:03:36.839588 kubelet[3631]: E0117 12:03:36.839535 3631 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187\": 
not found" containerID="180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187" Jan 17 12:03:36.839658 kubelet[3631]: I0117 12:03:36.839624 3631 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187"} err="failed to get container status \"180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187\": rpc error: code = NotFound desc = an error occurred when try to find container \"180252aa3e748b5702b9c9b9fac17581fb872b1ce740ed0ac9a719efce5d3187\": not found" Jan 17 12:03:37.230494 sshd[5272]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:37.245282 systemd[1]: sshd@26-172.31.18.162:22-139.178.68.195:46436.service: Deactivated successfully. Jan 17 12:03:37.252894 kubelet[3631]: I0117 12:03:37.250877 3631 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="282dfb05-c014-4ec4-85d0-786f0e08acc4" path="/var/lib/kubelet/pods/282dfb05-c014-4ec4-85d0-786f0e08acc4/volumes" Jan 17 12:03:37.255273 containerd[2133]: time="2025-01-17T12:03:37.254324759Z" level=info msg="StopPodSandbox for \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\"" Jan 17 12:03:37.255273 containerd[2133]: time="2025-01-17T12:03:37.254521847Z" level=info msg="TearDown network for sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" successfully" Jan 17 12:03:37.255273 containerd[2133]: time="2025-01-17T12:03:37.254936207Z" level=info msg="StopPodSandbox for \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" returns successfully" Jan 17 12:03:37.255068 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 12:03:37.256672 kubelet[3631]: I0117 12:03:37.256408 3631 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b20fbcd7-c41f-4767-aba0-b40b37cfd576" path="/var/lib/kubelet/pods/b20fbcd7-c41f-4767-aba0-b40b37cfd576/volumes" Jan 17 12:03:37.261890 containerd[2133]: time="2025-01-17T12:03:37.257383691Z" level=info msg="RemovePodSandbox for \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\"" Jan 17 12:03:37.261890 containerd[2133]: time="2025-01-17T12:03:37.257446859Z" level=info msg="Forcibly stopping sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\"" Jan 17 12:03:37.261890 containerd[2133]: time="2025-01-17T12:03:37.257559827Z" level=info msg="TearDown network for sandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" successfully" Jan 17 12:03:37.259885 systemd-logind[2098]: Session 27 logged out. Waiting for processes to exit. Jan 17 12:03:37.266685 containerd[2133]: time="2025-01-17T12:03:37.266356967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:03:37.266685 containerd[2133]: time="2025-01-17T12:03:37.266450351Z" level=info msg="RemovePodSandbox \"4ec4f0d4ed700b2e53b03a1dce60cc667b13ea5d34cdc825a140e5a3e299288b\" returns successfully" Jan 17 12:03:37.270606 containerd[2133]: time="2025-01-17T12:03:37.270231851Z" level=info msg="StopPodSandbox for \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\"" Jan 17 12:03:37.270606 containerd[2133]: time="2025-01-17T12:03:37.270445043Z" level=info msg="TearDown network for sandbox \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\" successfully" Jan 17 12:03:37.270606 containerd[2133]: time="2025-01-17T12:03:37.270470147Z" level=info msg="StopPodSandbox for \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\" returns successfully" Jan 17 12:03:37.273489 containerd[2133]: time="2025-01-17T12:03:37.271334387Z" level=info msg="RemovePodSandbox for \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\"" Jan 17 12:03:37.273489 containerd[2133]: time="2025-01-17T12:03:37.271388435Z" level=info msg="Forcibly stopping sandbox \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\"" Jan 17 12:03:37.273489 containerd[2133]: time="2025-01-17T12:03:37.271483007Z" level=info msg="TearDown network for sandbox \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\" successfully" Jan 17 12:03:37.274625 systemd[1]: Started sshd@27-172.31.18.162:22-139.178.68.195:36680.service - OpenSSH per-connection server daemon (139.178.68.195:36680). Jan 17 12:03:37.276187 systemd-logind[2098]: Removed session 27. Jan 17 12:03:37.277480 containerd[2133]: time="2025-01-17T12:03:37.277419707Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:37.277769 containerd[2133]: time="2025-01-17T12:03:37.277729139Z" level=info msg="RemovePodSandbox \"8116f74946a855967fe5cdcac828f50bdcc208eeeda1923b34adc4731a9791af\" returns successfully" Jan 17 12:03:37.452836 sshd[5441]: Accepted publickey for core from 139.178.68.195 port 36680 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:37.455567 sshd[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:37.465695 systemd-logind[2098]: New session 28 of user core. Jan 17 12:03:37.472196 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 17 12:03:37.489595 kubelet[3631]: E0117 12:03:37.489279 3631 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 17 12:03:38.242340 ntpd[2088]: Deleting interface #10 lxc_health, fe80::cc53:60ff:fe4c:3f70%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs Jan 17 12:03:38.243038 ntpd[2088]: 17 Jan 12:03:38 ntpd[2088]: Deleting interface #10 lxc_health, fe80::cc53:60ff:fe4c:3f70%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs Jan 17 12:03:39.304657 kubelet[3631]: I0117 12:03:39.303497 3631 topology_manager.go:215] "Topology Admit Handler" podUID="b861ac18-146b-4d23-9957-e85b0eb4bfe8" podNamespace="kube-system" podName="cilium-25frn" Jan 17 12:03:39.305463 kubelet[3631]: E0117 12:03:39.304712 3631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b20fbcd7-c41f-4767-aba0-b40b37cfd576" containerName="cilium-operator" Jan 17 12:03:39.305463 kubelet[3631]: E0117 12:03:39.304742 3631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="282dfb05-c014-4ec4-85d0-786f0e08acc4" containerName="apply-sysctl-overwrites" Jan 17 12:03:39.305463 kubelet[3631]: E0117 12:03:39.304761 3631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="282dfb05-c014-4ec4-85d0-786f0e08acc4" containerName="mount-bpf-fs" Jan 17 12:03:39.305463 kubelet[3631]: E0117 12:03:39.304779 3631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="282dfb05-c014-4ec4-85d0-786f0e08acc4" containerName="clean-cilium-state" Jan 17 12:03:39.305463 kubelet[3631]: E0117 12:03:39.304797 3631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="282dfb05-c014-4ec4-85d0-786f0e08acc4" containerName="cilium-agent" Jan 17 12:03:39.305463 kubelet[3631]: E0117 12:03:39.304815 3631 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="282dfb05-c014-4ec4-85d0-786f0e08acc4" containerName="mount-cgroup" Jan 17 12:03:39.305463 kubelet[3631]: I0117 12:03:39.304885 3631 memory_manager.go:354] "RemoveStaleState removing state" podUID="282dfb05-c014-4ec4-85d0-786f0e08acc4" containerName="cilium-agent" Jan 17 12:03:39.305463 kubelet[3631]: I0117 12:03:39.304904 3631 memory_manager.go:354] "RemoveStaleState removing state" podUID="b20fbcd7-c41f-4767-aba0-b40b37cfd576" containerName="cilium-operator" Jan 17 12:03:39.311869 sshd[5441]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:39.335061 systemd[1]: sshd@27-172.31.18.162:22-139.178.68.195:36680.service: Deactivated successfully. Jan 17 12:03:39.349502 systemd-logind[2098]: Session 28 logged out. Waiting for processes to exit. Jan 17 12:03:39.349527 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 12:03:39.365976 systemd[1]: Started sshd@28-172.31.18.162:22-139.178.68.195:36690.service - OpenSSH per-connection server daemon (139.178.68.195:36690). Jan 17 12:03:39.376884 systemd-logind[2098]: Removed session 28. 
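Annotation: the admission of replacement pod cilium-25frn above triggers the cpu and memory managers to drop state left by containers of the two pods just deleted; kubelet emits one RemoveStaleState line per stale container. Grouping those lines by pod UID makes the cleanup scope obvious (text processing over the excerpt, nothing more):

    import re
    import sys
    from collections import defaultdict

    PAT = re.compile(r'"RemoveStaleState: removing container" '
                     r'podUID="([0-9a-f-]+)" containerName="([A-Za-z0-9-]+)"')

    stale = defaultdict(list)
    for uid, name in PAT.findall(sys.stdin.read()):
        stale[uid].append(name)
    for uid, names in stale.items():
        print(uid[:8], "->", ", ".join(names))
    # b20fbcd7 -> cilium-operator
    # 282dfb05 -> apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, cilium-agent, mount-cgroup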
Jan 17 12:03:39.446665 kubelet[3631]: I0117 12:03:39.444870 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b861ac18-146b-4d23-9957-e85b0eb4bfe8-cilium-config-path\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.446665 kubelet[3631]: I0117 12:03:39.444962 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b861ac18-146b-4d23-9957-e85b0eb4bfe8-cni-path\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.446665 kubelet[3631]: I0117 12:03:39.445009 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b861ac18-146b-4d23-9957-e85b0eb4bfe8-lib-modules\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.446665 kubelet[3631]: I0117 12:03:39.445054 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b861ac18-146b-4d23-9957-e85b0eb4bfe8-clustermesh-secrets\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.446665 kubelet[3631]: I0117 12:03:39.445100 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b861ac18-146b-4d23-9957-e85b0eb4bfe8-cilium-ipsec-secrets\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.449641 kubelet[3631]: I0117 12:03:39.447636 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b861ac18-146b-4d23-9957-e85b0eb4bfe8-host-proc-sys-kernel\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.449641 kubelet[3631]: I0117 12:03:39.447757 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b861ac18-146b-4d23-9957-e85b0eb4bfe8-hubble-tls\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.449641 kubelet[3631]: I0117 12:03:39.447807 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9rhd\" (UniqueName: \"kubernetes.io/projected/b861ac18-146b-4d23-9957-e85b0eb4bfe8-kube-api-access-p9rhd\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.449641 kubelet[3631]: I0117 12:03:39.447857 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b861ac18-146b-4d23-9957-e85b0eb4bfe8-hostproc\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.449641 kubelet[3631]: I0117 12:03:39.447903 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b861ac18-146b-4d23-9957-e85b0eb4bfe8-host-proc-sys-net\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.449641 kubelet[3631]: I0117 12:03:39.447951 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b861ac18-146b-4d23-9957-e85b0eb4bfe8-cilium-cgroup\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.450076 kubelet[3631]: I0117 12:03:39.448005 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b861ac18-146b-4d23-9957-e85b0eb4bfe8-xtables-lock\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.450076 kubelet[3631]: I0117 12:03:39.448057 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b861ac18-146b-4d23-9957-e85b0eb4bfe8-cilium-run\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.450076 kubelet[3631]: I0117 12:03:39.448101 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b861ac18-146b-4d23-9957-e85b0eb4bfe8-etc-cni-netd\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.450076 kubelet[3631]: I0117 12:03:39.448150 3631 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b861ac18-146b-4d23-9957-e85b0eb4bfe8-bpf-maps\") pod \"cilium-25frn\" (UID: \"b861ac18-146b-4d23-9957-e85b0eb4bfe8\") " pod="kube-system/cilium-25frn" Jan 17 12:03:39.627158 sshd[5459]: Accepted publickey for core from 139.178.68.195 port 36690 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:39.630505 containerd[2133]: time="2025-01-17T12:03:39.629267775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25frn,Uid:b861ac18-146b-4d23-9957-e85b0eb4bfe8,Namespace:kube-system,Attempt:0,}" Jan 17 12:03:39.634550 sshd[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:39.667960 systemd-logind[2098]: New session 29 of user core. Jan 17 12:03:39.676161 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 17 12:03:39.692039 containerd[2133]: time="2025-01-17T12:03:39.691070379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:39.692039 containerd[2133]: time="2025-01-17T12:03:39.691167111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:39.692039 containerd[2133]: time="2025-01-17T12:03:39.691223295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:39.692039 containerd[2133]: time="2025-01-17T12:03:39.691420107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:39.758852 containerd[2133]: time="2025-01-17T12:03:39.758727784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25frn,Uid:b861ac18-146b-4d23-9957-e85b0eb4bfe8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9b64745a0fcb733b8768d8f80a1d0115f974b3abd2de4a5fe22fbbc6c9c0efc\"" Jan 17 12:03:39.764162 containerd[2133]: time="2025-01-17T12:03:39.763762864Z" level=info msg="CreateContainer within sandbox \"d9b64745a0fcb733b8768d8f80a1d0115f974b3abd2de4a5fe22fbbc6c9c0efc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 17 12:03:39.786177 containerd[2133]: time="2025-01-17T12:03:39.786056176Z" level=info msg="CreateContainer within sandbox \"d9b64745a0fcb733b8768d8f80a1d0115f974b3abd2de4a5fe22fbbc6c9c0efc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"151d7136a2e796dc45a8f7309114cfa60cec8face25f959548a98e6aa7e3a305\"" Jan 17 12:03:39.788036 containerd[2133]: time="2025-01-17T12:03:39.787960528Z" level=info msg="StartContainer for \"151d7136a2e796dc45a8f7309114cfa60cec8face25f959548a98e6aa7e3a305\"" Jan 17 12:03:39.807174 sshd[5459]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:39.819435 systemd[1]: sshd@28-172.31.18.162:22-139.178.68.195:36690.service: Deactivated successfully. Jan 17 12:03:39.830420 systemd-logind[2098]: Session 29 logged out. Waiting for processes to exit. Jan 17 12:03:39.833398 systemd[1]: session-29.scope: Deactivated successfully. Jan 17 12:03:39.844973 systemd[1]: Started sshd@29-172.31.18.162:22-139.178.68.195:36698.service - OpenSSH per-connection server daemon (139.178.68.195:36698). Jan 17 12:03:39.848947 systemd-logind[2098]: Removed session 29. Jan 17 12:03:39.919829 containerd[2133]: time="2025-01-17T12:03:39.919655824Z" level=info msg="StartContainer for \"151d7136a2e796dc45a8f7309114cfa60cec8face25f959548a98e6aa7e3a305\" returns successfully" Jan 17 12:03:39.993676 containerd[2133]: time="2025-01-17T12:03:39.993492089Z" level=info msg="shim disconnected" id=151d7136a2e796dc45a8f7309114cfa60cec8face25f959548a98e6aa7e3a305 namespace=k8s.io Jan 17 12:03:39.993676 containerd[2133]: time="2025-01-17T12:03:39.993628649Z" level=warning msg="cleaning up after shim disconnected" id=151d7136a2e796dc45a8f7309114cfa60cec8face25f959548a98e6aa7e3a305 namespace=k8s.io Jan 17 12:03:39.993676 containerd[2133]: time="2025-01-17T12:03:39.993650969Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:40.038731 kubelet[3631]: I0117 12:03:40.038674 3631 setters.go:568] "Node became not ready" node="ip-172-31-18-162" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-17T12:03:40Z","lastTransitionTime":"2025-01-17T12:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 17 12:03:40.045037 sshd[5527]: Accepted publickey for core from 139.178.68.195 port 36698 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:40.052253 sshd[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:40.070819 systemd-logind[2098]: New session 30 of user core. Jan 17 12:03:40.075153 systemd[1]: Started session-30.scope - Session 30 of User core. 
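Annotation: from here the new pod comes up through cilium's usual init chain; each CreateContainer/StartContainer pair above and below is followed by a "shim disconnected" message, which for an init container just marks its normal exit rather than a crash. The sequence can be recovered mechanically from the CreateContainer requests:

    import re
    import sys

    NAMES = re.compile(r'for container &ContainerMetadata\{Name:([A-Za-z0-9-]+),Attempt:0,\}')
    print(" -> ".join(NAMES.findall(sys.stdin.read())))
    # Over the full excerpt:
    # mount-cgroup -> apply-sysctl-overwrites -> mount-bpf-fs -> clean-cilium-state -> cilium-agent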
Jan 17 12:03:40.233734 kubelet[3631]: E0117 12:03:40.233471 3631 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-nl7gc" podUID="12e74d01-c322-4434-a3d2-b18e7891d5df" Jan 17 12:03:40.796970 containerd[2133]: time="2025-01-17T12:03:40.796785701Z" level=info msg="CreateContainer within sandbox \"d9b64745a0fcb733b8768d8f80a1d0115f974b3abd2de4a5fe22fbbc6c9c0efc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 17 12:03:40.832836 containerd[2133]: time="2025-01-17T12:03:40.832698857Z" level=info msg="CreateContainer within sandbox \"d9b64745a0fcb733b8768d8f80a1d0115f974b3abd2de4a5fe22fbbc6c9c0efc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bf8f59684425182e809e0128bfc680b5144115fc0fba3195f59896f7eca11c5f\"" Jan 17 12:03:40.836441 containerd[2133]: time="2025-01-17T12:03:40.835822925Z" level=info msg="StartContainer for \"bf8f59684425182e809e0128bfc680b5144115fc0fba3195f59896f7eca11c5f\"" Jan 17 12:03:40.952401 containerd[2133]: time="2025-01-17T12:03:40.952202262Z" level=info msg="StartContainer for \"bf8f59684425182e809e0128bfc680b5144115fc0fba3195f59896f7eca11c5f\" returns successfully" Jan 17 12:03:41.010982 containerd[2133]: time="2025-01-17T12:03:41.010908890Z" level=info msg="shim disconnected" id=bf8f59684425182e809e0128bfc680b5144115fc0fba3195f59896f7eca11c5f namespace=k8s.io Jan 17 12:03:41.011375 containerd[2133]: time="2025-01-17T12:03:41.011108042Z" level=warning msg="cleaning up after shim disconnected" id=bf8f59684425182e809e0128bfc680b5144115fc0fba3195f59896f7eca11c5f namespace=k8s.io Jan 17 12:03:41.011375 containerd[2133]: time="2025-01-17T12:03:41.011134934Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:41.564492 systemd[1]: run-containerd-runc-k8s.io-bf8f59684425182e809e0128bfc680b5144115fc0fba3195f59896f7eca11c5f-runc.TMrEL3.mount: Deactivated successfully. Jan 17 12:03:41.565008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf8f59684425182e809e0128bfc680b5144115fc0fba3195f59896f7eca11c5f-rootfs.mount: Deactivated successfully. 
Jan 17 12:03:41.804729 containerd[2133]: time="2025-01-17T12:03:41.803211858Z" level=info msg="CreateContainer within sandbox \"d9b64745a0fcb733b8768d8f80a1d0115f974b3abd2de4a5fe22fbbc6c9c0efc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 17 12:03:41.837664 containerd[2133]: time="2025-01-17T12:03:41.835343574Z" level=info msg="CreateContainer within sandbox \"d9b64745a0fcb733b8768d8f80a1d0115f974b3abd2de4a5fe22fbbc6c9c0efc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6da0a9062e49e1a4171c45df2ec8586fa240058daf578555d06f4f0853072596\""
Jan 17 12:03:41.839827 containerd[2133]: time="2025-01-17T12:03:41.838069086Z" level=info msg="StartContainer for \"6da0a9062e49e1a4171c45df2ec8586fa240058daf578555d06f4f0853072596\""
Jan 17 12:03:41.948453 containerd[2133]: time="2025-01-17T12:03:41.948380923Z" level=info msg="StartContainer for \"6da0a9062e49e1a4171c45df2ec8586fa240058daf578555d06f4f0853072596\" returns successfully"
Jan 17 12:03:42.001776 containerd[2133]: time="2025-01-17T12:03:42.001644519Z" level=info msg="shim disconnected" id=6da0a9062e49e1a4171c45df2ec8586fa240058daf578555d06f4f0853072596 namespace=k8s.io
Jan 17 12:03:42.001776 containerd[2133]: time="2025-01-17T12:03:42.001720359Z" level=warning msg="cleaning up after shim disconnected" id=6da0a9062e49e1a4171c45df2ec8586fa240058daf578555d06f4f0853072596 namespace=k8s.io
Jan 17 12:03:42.001776 containerd[2133]: time="2025-01-17T12:03:42.001740411Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:03:42.234297 kubelet[3631]: E0117 12:03:42.233863 3631 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-nl7gc" podUID="12e74d01-c322-4434-a3d2-b18e7891d5df"
Jan 17 12:03:42.490983 kubelet[3631]: E0117 12:03:42.490810 3631 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 17 12:03:42.564063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6da0a9062e49e1a4171c45df2ec8586fa240058daf578555d06f4f0853072596-rootfs.mount: Deactivated successfully.
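The kubelet "Error syncing pod" entries interleaved above recur for each affected pod until the CNI comes up. A small sketch, again illustrative and stdin-driven rather than part of the log, that tallies which pods are blocked on the uninitialized CNI:

```go
// cni_wait.go — illustrative only: count per-pod sync errors caused by
// "cni plugin not initialized" in kubelet journal lines like those above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

var podRe = regexp.MustCompile(`pod="([^"]+)"`)

func main() {
	blocked := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "cni plugin not initialized") {
			continue
		}
		if m := podRe.FindStringSubmatch(line); m != nil {
			blocked[m[1]]++ // e.g. kube-system/coredns-76f75df574-nl7gc
		}
	}
	for pod, n := range blocked {
		fmt.Printf("%-45s %d sync errors while CNI was down\n", pod, n)
	}
}
```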
Jan 17 12:03:42.810432 containerd[2133]: time="2025-01-17T12:03:42.810336151Z" level=info msg="CreateContainer within sandbox \"d9b64745a0fcb733b8768d8f80a1d0115f974b3abd2de4a5fe22fbbc6c9c0efc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 17 12:03:42.840895 containerd[2133]: time="2025-01-17T12:03:42.840667003Z" level=info msg="CreateContainer within sandbox \"d9b64745a0fcb733b8768d8f80a1d0115f974b3abd2de4a5fe22fbbc6c9c0efc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1435e32bdcd35380851547d0fdf130cf05104dd7ae79f06bae994ac1ad757856\""
Jan 17 12:03:42.842629 containerd[2133]: time="2025-01-17T12:03:42.841801027Z" level=info msg="StartContainer for \"1435e32bdcd35380851547d0fdf130cf05104dd7ae79f06bae994ac1ad757856\""
Jan 17 12:03:42.957712 containerd[2133]: time="2025-01-17T12:03:42.957639152Z" level=info msg="StartContainer for \"1435e32bdcd35380851547d0fdf130cf05104dd7ae79f06bae994ac1ad757856\" returns successfully"
Jan 17 12:03:43.000309 containerd[2133]: time="2025-01-17T12:03:43.000205672Z" level=info msg="shim disconnected" id=1435e32bdcd35380851547d0fdf130cf05104dd7ae79f06bae994ac1ad757856 namespace=k8s.io
Jan 17 12:03:43.000309 containerd[2133]: time="2025-01-17T12:03:43.000305788Z" level=warning msg="cleaning up after shim disconnected" id=1435e32bdcd35380851547d0fdf130cf05104dd7ae79f06bae994ac1ad757856 namespace=k8s.io
Jan 17 12:03:43.000906 containerd[2133]: time="2025-01-17T12:03:43.000328060Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:03:43.564260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1435e32bdcd35380851547d0fdf130cf05104dd7ae79f06bae994ac1ad757856-rootfs.mount: Deactivated successfully.
Jan 17 12:03:43.818859 containerd[2133]: time="2025-01-17T12:03:43.818497532Z" level=info msg="CreateContainer within sandbox \"d9b64745a0fcb733b8768d8f80a1d0115f974b3abd2de4a5fe22fbbc6c9c0efc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 17 12:03:43.856800 containerd[2133]: time="2025-01-17T12:03:43.856729748Z" level=info msg="CreateContainer within sandbox \"d9b64745a0fcb733b8768d8f80a1d0115f974b3abd2de4a5fe22fbbc6c9c0efc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4ae32229e089845c94f4c78eec9c5161018ed534079c597d8b8d15690bef01a0\""
Jan 17 12:03:43.857743 containerd[2133]: time="2025-01-17T12:03:43.857691728Z" level=info msg="StartContainer for \"4ae32229e089845c94f4c78eec9c5161018ed534079c597d8b8d15690bef01a0\""
Jan 17 12:03:43.974865 containerd[2133]: time="2025-01-17T12:03:43.974716257Z" level=info msg="StartContainer for \"4ae32229e089845c94f4c78eec9c5161018ed534079c597d8b8d15690bef01a0\" returns successfully"
Jan 17 12:03:44.235193 kubelet[3631]: E0117 12:03:44.233390 3631 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-nl7gc" podUID="12e74d01-c322-4434-a3d2-b18e7891d5df"
Jan 17 12:03:44.567944 systemd[1]: run-containerd-runc-k8s.io-4ae32229e089845c94f4c78eec9c5161018ed534079c597d8b8d15690bef01a0-runc.03Jti4.mount: Deactivated successfully.
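With clean-cilium-state finished, the long-lived cilium-agent container starts and the init sequence is complete. A sketch, hypothetical and stdin-driven like the others, that checks the five stages observed in this log (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, cilium-agent) appear in that order:

```go
// init_order.go — illustrative only: verify the Cilium container stages
// seen in the CreateContainer entries above run in the expected order.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var nameRe = regexp.MustCompile(`ContainerMetadata\{Name:([a-z-]+),Attempt:0,\} returns container id`)

func main() {
	want := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state", "cilium-agent"}
	i := 0
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		m := nameRe.FindStringSubmatch(sc.Text())
		if m == nil || i >= len(want) {
			continue
		}
		if m[1] != want[i] {
			fmt.Printf("out of order: got %q, expected %q\n", m[1], want[i])
			return
		}
		i++
	}
	fmt.Printf("observed %d/%d stages in expected order\n", i, len(want))
}
```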
Jan 17 12:03:44.766660 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 17 12:03:44.872702 kubelet[3631]: I0117 12:03:44.872184 3631 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-25frn" podStartSLOduration=5.8721264810000005 podStartE2EDuration="5.872126481s" podCreationTimestamp="2025-01-17 12:03:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:03:44.871265925 +0000 UTC m=+127.876419600" watchObservedRunningTime="2025-01-17 12:03:44.872126481 +0000 UTC m=+127.877280132"
Jan 17 12:03:46.233819 kubelet[3631]: E0117 12:03:46.233753 3631 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-nl7gc" podUID="12e74d01-c322-4434-a3d2-b18e7891d5df"
Jan 17 12:03:47.236617 kubelet[3631]: E0117 12:03:47.234778 3631 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-l4fxn" podUID="d719f70c-10f9-4b72-ad56-55d7dcb47d42"
Jan 17 12:03:48.903908 kubelet[3631]: E0117 12:03:48.903851 3631 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37954->127.0.0.1:34417: write tcp 127.0.0.1:37954->127.0.0.1:34417: write: broken pipe
Jan 17 12:03:49.129196 systemd-networkd[1685]: lxc_health: Link UP
Jan 17 12:03:49.139471 systemd-networkd[1685]: lxc_health: Gained carrier
Jan 17 12:03:49.148104 (udev-worker)[6325]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:03:50.316941 systemd-networkd[1685]: lxc_health: Gained IPv6LL
Jan 17 12:03:53.242837 ntpd[2088]: Listen normally on 13 lxc_health [fe80::283f:26ff:fe2d:f2d6%14]:123
Jan 17 12:03:53.245154 ntpd[2088]: 17 Jan 12:03:53 ntpd[2088]: Listen normally on 13 lxc_health [fe80::283f:26ff:fe2d:f2d6%14]:123
Jan 17 12:03:56.047959 sshd[5527]: pam_unix(sshd:session): session closed for user core
Jan 17 12:03:56.065897 systemd[1]: sshd@29-172.31.18.162:22-139.178.68.195:36698.service: Deactivated successfully.
Jan 17 12:03:56.068882 systemd-logind[2098]: Session 30 logged out. Waiting for processes to exit.
Jan 17 12:03:56.080062 systemd[1]: session-30.scope: Deactivated successfully.
Jan 17 12:03:56.083317 systemd-logind[2098]: Removed session 30.
Jan 17 12:04:09.453722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7117c4ca9388ffd702766b3cd662971e7ac0ae92168556ccb78c8662afaa6ee3-rootfs.mount: Deactivated successfully.
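The pod_startup_latency_tracker entry above carries both timestamps needed to roughly reproduce the reported duration. A short Go check, illustrative only; the small difference from the logged 5.872126481s is expected, since kubelet derives the SLO duration from its monotonic clock (the m=+… offsets) rather than wall-clock subtraction:

```go
// startup_duration.go — illustrative only: recompute the cilium-25frn
// startup duration from the two wall-clock timestamps logged above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's time.Time string form used in the kubelet entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-01-17 12:03:39 +0000 UTC")
	running, _ := time.Parse(layout, "2025-01-17 12:03:44.871265925 +0000 UTC")
	fmt.Println(running.Sub(created)) // 5.871265925s ≈ reported 5.872126481s
}
```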
Jan 17 12:04:09.496922 containerd[2133]: time="2025-01-17T12:04:09.496759279Z" level=info msg="shim disconnected" id=7117c4ca9388ffd702766b3cd662971e7ac0ae92168556ccb78c8662afaa6ee3 namespace=k8s.io
Jan 17 12:04:09.496922 containerd[2133]: time="2025-01-17T12:04:09.496836559Z" level=warning msg="cleaning up after shim disconnected" id=7117c4ca9388ffd702766b3cd662971e7ac0ae92168556ccb78c8662afaa6ee3 namespace=k8s.io
Jan 17 12:04:09.496922 containerd[2133]: time="2025-01-17T12:04:09.496857019Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:04:09.907419 kubelet[3631]: I0117 12:04:09.906542 3631 scope.go:117] "RemoveContainer" containerID="7117c4ca9388ffd702766b3cd662971e7ac0ae92168556ccb78c8662afaa6ee3"
Jan 17 12:04:09.910921 containerd[2133]: time="2025-01-17T12:04:09.910822137Z" level=info msg="CreateContainer within sandbox \"118b6b453ae826f0a08ba25ec8ecb15b8250ddb45d059ae16756dc77997381fc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 12:04:09.932961 containerd[2133]: time="2025-01-17T12:04:09.932823430Z" level=info msg="CreateContainer within sandbox \"118b6b453ae826f0a08ba25ec8ecb15b8250ddb45d059ae16756dc77997381fc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3abab376670cf23b7a348c97a2437f290163e3767dbb0405e0b77d24c46c547a\""
Jan 17 12:04:09.933812 containerd[2133]: time="2025-01-17T12:04:09.933626170Z" level=info msg="StartContainer for \"3abab376670cf23b7a348c97a2437f290163e3767dbb0405e0b77d24c46c547a\""
Jan 17 12:04:10.064053 containerd[2133]: time="2025-01-17T12:04:10.063797946Z" level=info msg="StartContainer for \"3abab376670cf23b7a348c97a2437f290163e3767dbb0405e0b77d24c46c547a\" returns successfully"
Jan 17 12:04:10.398067 kubelet[3631]: E0117 12:04:10.397840 3631 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-162?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 17 12:04:10.450061 systemd[1]: run-containerd-runc-k8s.io-3abab376670cf23b7a348c97a2437f290163e3767dbb0405e0b77d24c46c547a-runc.XGAbgt.mount: Deactivated successfully.
Jan 17 12:04:15.832146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0384011aa11269e80410d7d705e67b52179bbba470ba435b4ee588fa6645941-rootfs.mount: Deactivated successfully.
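Here the kube-controller-manager container died (the 7117c4ca… shim disconnected), and kubelet removed it and recreated it with Attempt:1. A sketch, illustrative and stdin-driven, that surfaces such restarts by flagging non-zero Attempt counts in CreateContainer entries:

```go
// restarts.go — illustrative only: a CreateContainer entry with Attempt > 0
// means kubelet is recreating a container whose previous instance died.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strconv"
)

var metaRe = regexp.MustCompile(`ContainerMetadata\{Name:([a-z-]+),Attempt:([0-9]+),\}`)

func main() {
	seen := map[string]bool{} // each create produces two log lines; dedupe
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		m := metaRe.FindStringSubmatch(sc.Text())
		if m == nil || seen[m[1]+"#"+m[2]] {
			continue
		}
		seen[m[1]+"#"+m[2]] = true
		if n, _ := strconv.Atoi(m[2]); n > 0 {
			fmt.Printf("restart: %s is on attempt %d\n", m[1], n)
		}
	}
}
```

On this log it would report kube-controller-manager (and, below, kube-scheduler) at attempt 1.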
Jan 17 12:04:15.843924 containerd[2133]: time="2025-01-17T12:04:15.843770127Z" level=info msg="shim disconnected" id=a0384011aa11269e80410d7d705e67b52179bbba470ba435b4ee588fa6645941 namespace=k8s.io
Jan 17 12:04:15.843924 containerd[2133]: time="2025-01-17T12:04:15.843921507Z" level=warning msg="cleaning up after shim disconnected" id=a0384011aa11269e80410d7d705e67b52179bbba470ba435b4ee588fa6645941 namespace=k8s.io
Jan 17 12:04:15.844771 containerd[2133]: time="2025-01-17T12:04:15.843944283Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:04:15.925783 kubelet[3631]: I0117 12:04:15.925713 3631 scope.go:117] "RemoveContainer" containerID="a0384011aa11269e80410d7d705e67b52179bbba470ba435b4ee588fa6645941"
Jan 17 12:04:15.929857 containerd[2133]: time="2025-01-17T12:04:15.929741991Z" level=info msg="CreateContainer within sandbox \"b89cb0b08426d914cde5b26defc548d47050e3620d958e5fcc310e6bbb87dd24\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 12:04:15.951990 containerd[2133]: time="2025-01-17T12:04:15.951922539Z" level=info msg="CreateContainer within sandbox \"b89cb0b08426d914cde5b26defc548d47050e3620d958e5fcc310e6bbb87dd24\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5e122a67c5886d4fd9c9c75e30af6c497799852eec96e4a607d6b24600ed0029\""
Jan 17 12:04:15.953477 containerd[2133]: time="2025-01-17T12:04:15.953252547Z" level=info msg="StartContainer for \"5e122a67c5886d4fd9c9c75e30af6c497799852eec96e4a607d6b24600ed0029\""
Jan 17 12:04:16.068854 containerd[2133]: time="2025-01-17T12:04:16.068780688Z" level=info msg="StartContainer for \"5e122a67c5886d4fd9c9c75e30af6c497799852eec96e4a607d6b24600ed0029\" returns successfully"
Jan 17 12:04:20.398663 kubelet[3631]: E0117 12:04:20.398414 3631 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-162?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 17 12:04:30.400006 kubelet[3631]: E0117 12:04:30.399686 3631 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-162?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
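The same restart pattern repeats for kube-scheduler, and the three "Failed to update lease" errors (12:04:10, 12:04:20, 12:04:30, one per 10s retry) bracket the window in which kubelet could not reach the API server at 172.31.18.162:6443. A final sketch, illustrative only with an arbitrary threshold of three, that flags such streaks:

```go
// lease_watch.go — illustrative only: alert on consecutive kubelet
// "Failed to update lease" errors like the ones above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	streak := 0
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "kubelet") {
			continue // only kubelet entries count toward the streak
		}
		if strings.Contains(line, "Failed to update lease") {
			streak++
			if streak == 3 {
				fmt.Println("alert: repeated lease update failures; API server likely unreachable")
			}
		} else {
			streak = 0
		}
	}
}
```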