Jan 29 16:04:11.235300 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 29 16:04:11.235353 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Jan 29 14:53:00 -00 2025
Jan 29 16:04:11.235382 kernel: KASLR disabled due to lack of seed
Jan 29 16:04:11.235400 kernel: efi: EFI v2.7 by EDK II
Jan 29 16:04:11.235417 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Jan 29 16:04:11.235433 kernel: secureboot: Secure boot disabled
Jan 29 16:04:11.235452 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:04:11.235468 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 29 16:04:11.235484 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 29 16:04:11.235500 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 29 16:04:11.235521 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 29 16:04:11.235538 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 29 16:04:11.235554 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 29 16:04:11.235571 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 29 16:04:11.235590 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 29 16:04:11.235611 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 29 16:04:11.235628 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 29 16:04:11.235645 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 29 16:04:11.235662 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 29 16:04:11.235679 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 29 16:04:11.235696 kernel: printk: bootconsole [uart0] enabled
Jan 29 16:04:11.235713 kernel: NUMA: Failed to initialise from firmware
Jan 29 16:04:11.235730 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 29 16:04:11.235748 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 29 16:04:11.235765 kernel: Zone ranges:
Jan 29 16:04:11.235783 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 29 16:04:11.235804 kernel: DMA32 empty
Jan 29 16:04:11.235822 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 29 16:04:11.235839 kernel: Movable zone start for each node
Jan 29 16:04:11.235856 kernel: Early memory node ranges
Jan 29 16:04:11.235874 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 29 16:04:11.235890 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 29 16:04:11.235907 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 29 16:04:11.235924 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 29 16:04:11.235941 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 29 16:04:11.235958 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 29 16:04:11.235974 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 29 16:04:11.235991 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 29 16:04:11.236012 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 29 16:04:11.236030 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 29 16:04:11.236084 kernel: psci: probing for conduit method from ACPI.
Jan 29 16:04:11.236104 kernel: psci: PSCIv1.0 detected in firmware.
Jan 29 16:04:11.236122 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 16:04:11.236145 kernel: psci: Trusted OS migration not required
Jan 29 16:04:11.236163 kernel: psci: SMC Calling Convention v1.1
Jan 29 16:04:11.236180 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 16:04:11.236199 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 16:04:11.236217 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 16:04:11.236235 kernel: Detected PIPT I-cache on CPU0
Jan 29 16:04:11.236253 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 16:04:11.236271 kernel: CPU features: detected: Spectre-v2
Jan 29 16:04:11.236289 kernel: CPU features: detected: Spectre-v3a
Jan 29 16:04:11.236307 kernel: CPU features: detected: Spectre-BHB
Jan 29 16:04:11.236324 kernel: CPU features: detected: ARM erratum 1742098
Jan 29 16:04:11.236342 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 29 16:04:11.236365 kernel: alternatives: applying boot alternatives
Jan 29 16:04:11.236386 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac
Jan 29 16:04:11.236406 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:04:11.236424 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:04:11.236441 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:04:11.236460 kernel: Fallback order for Node 0: 0
Jan 29 16:04:11.236478 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 29 16:04:11.236496 kernel: Policy zone: Normal
Jan 29 16:04:11.236514 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:04:11.236531 kernel: software IO TLB: area num 2.
Jan 29 16:04:11.236554 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 29 16:04:11.236573 kernel: Memory: 3821240K/4030464K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 209224K reserved, 0K cma-reserved)
Jan 29 16:04:11.236591 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:04:11.236609 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:04:11.236628 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:04:11.236646 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:04:11.236665 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:04:11.236684 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:04:11.236702 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:04:11.236720 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:04:11.236738 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 16:04:11.236761 kernel: GICv3: 96 SPIs implemented
Jan 29 16:04:11.236780 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 16:04:11.236798 kernel: Root IRQ handler: gic_handle_irq
Jan 29 16:04:11.236816 kernel: GICv3: GICv3 features: 16 PPIs
Jan 29 16:04:11.236834 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 29 16:04:11.236852 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 29 16:04:11.236870 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 16:04:11.236888 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 16:04:11.236912 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 29 16:04:11.236930 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 29 16:04:11.236948 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 29 16:04:11.236966 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:04:11.236989 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 29 16:04:11.237008 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 29 16:04:11.237027 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 29 16:04:11.237103 kernel: Console: colour dummy device 80x25
Jan 29 16:04:11.237128 kernel: printk: console [tty1] enabled
Jan 29 16:04:11.237148 kernel: ACPI: Core revision 20230628
Jan 29 16:04:11.237167 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 29 16:04:11.237186 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:04:11.237205 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:04:11.237230 kernel: landlock: Up and running.
Jan 29 16:04:11.237249 kernel: SELinux: Initializing.
Jan 29 16:04:11.237267 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:04:11.237286 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:04:11.237304 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:04:11.237323 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:04:11.237342 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:04:11.237362 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:04:11.237381 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 29 16:04:11.237404 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 29 16:04:11.237424 kernel: Remapping and enabling EFI services.
Jan 29 16:04:11.237442 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:04:11.237461 kernel: Detected PIPT I-cache on CPU1
Jan 29 16:04:11.237480 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 29 16:04:11.237498 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 29 16:04:11.237516 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 29 16:04:11.237535 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:04:11.237553 kernel: SMP: Total of 2 processors activated.
Jan 29 16:04:11.237576 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 16:04:11.237594 kernel: CPU features: detected: 32-bit EL1 Support
Jan 29 16:04:11.237613 kernel: CPU features: detected: CRC32 instructions
Jan 29 16:04:11.237642 kernel: CPU: All CPU(s) started at EL1
Jan 29 16:04:11.237665 kernel: alternatives: applying system-wide alternatives
Jan 29 16:04:11.237705 kernel: devtmpfs: initialized
Jan 29 16:04:11.237727 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:04:11.237746 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:04:11.237766 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:04:11.237785 kernel: SMBIOS 3.0.0 present.
Jan 29 16:04:11.237810 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 29 16:04:11.237829 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:04:11.237848 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 16:04:11.237867 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 16:04:11.237886 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 16:04:11.237905 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:04:11.237924 kernel: audit: type=2000 audit(0.232:1): state=initialized audit_enabled=0 res=1
Jan 29 16:04:11.237948 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:04:11.237967 kernel: cpuidle: using governor menu
Jan 29 16:04:11.237986 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 16:04:11.238005 kernel: ASID allocator initialised with 65536 entries
Jan 29 16:04:11.238024 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:04:11.238072 kernel: Serial: AMBA PL011 UART driver
Jan 29 16:04:11.238093 kernel: Modules: 17760 pages in range for non-PLT usage
Jan 29 16:04:11.238113 kernel: Modules: 509280 pages in range for PLT usage
Jan 29 16:04:11.238132 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:04:11.238158 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:04:11.238177 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 16:04:11.238197 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 16:04:11.238216 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:04:11.238235 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:04:11.238254 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 16:04:11.238273 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 16:04:11.238292 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:04:11.238311 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:04:11.238335 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:04:11.238354 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:04:11.238374 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:04:11.238393 kernel: ACPI: Interpreter enabled
Jan 29 16:04:11.238412 kernel: ACPI: Using GIC for interrupt routing
Jan 29 16:04:11.238431 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 16:04:11.238451 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 29 16:04:11.238830 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:04:11.239135 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 16:04:11.239389 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 16:04:11.239625 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 29 16:04:11.239850 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 29 16:04:11.239880 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 29 16:04:11.239900 kernel: acpiphp: Slot [1] registered
Jan 29 16:04:11.239920 kernel: acpiphp: Slot [2] registered
Jan 29 16:04:11.239940 kernel: acpiphp: Slot [3] registered
Jan 29 16:04:11.239974 kernel: acpiphp: Slot [4] registered
Jan 29 16:04:11.239994 kernel: acpiphp: Slot [5] registered
Jan 29 16:04:11.240014 kernel: acpiphp: Slot [6] registered
Jan 29 16:04:11.240033 kernel: acpiphp: Slot [7] registered
Jan 29 16:04:11.240098 kernel: acpiphp: Slot [8] registered
Jan 29 16:04:11.240118 kernel: acpiphp: Slot [9] registered
Jan 29 16:04:11.240143 kernel: acpiphp: Slot [10] registered
Jan 29 16:04:11.240166 kernel: acpiphp: Slot [11] registered
Jan 29 16:04:11.240190 kernel: acpiphp: Slot [12] registered
Jan 29 16:04:11.240210 kernel: acpiphp: Slot [13] registered
Jan 29 16:04:11.240238 kernel: acpiphp: Slot [14] registered
Jan 29 16:04:11.240258 kernel: acpiphp: Slot [15] registered
Jan 29 16:04:11.240277 kernel: acpiphp: Slot [16] registered
Jan 29 16:04:11.240295 kernel: acpiphp: Slot [17] registered
Jan 29 16:04:11.240315 kernel: acpiphp: Slot [18] registered
Jan 29 16:04:11.240335 kernel: acpiphp: Slot [19] registered
Jan 29 16:04:11.240356 kernel: acpiphp: Slot [20] registered
Jan 29 16:04:11.240375 kernel: acpiphp: Slot [21] registered
Jan 29 16:04:11.240396 kernel: acpiphp: Slot [22] registered
Jan 29 16:04:11.240424 kernel: acpiphp: Slot [23] registered
Jan 29 16:04:11.240445 kernel: acpiphp: Slot [24] registered
Jan 29 16:04:11.240464 kernel: acpiphp: Slot [25] registered
Jan 29 16:04:11.240483 kernel: acpiphp: Slot [26] registered
Jan 29 16:04:11.240501 kernel: acpiphp: Slot [27] registered
Jan 29 16:04:11.240521 kernel: acpiphp: Slot [28] registered
Jan 29 16:04:11.240540 kernel: acpiphp: Slot [29] registered
Jan 29 16:04:11.240558 kernel: acpiphp: Slot [30] registered
Jan 29 16:04:11.240577 kernel: acpiphp: Slot [31] registered
Jan 29 16:04:11.240596 kernel: PCI host bridge to bus 0000:00
Jan 29 16:04:11.240892 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 29 16:04:11.241137 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 16:04:11.241346 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 29 16:04:11.241540 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 29 16:04:11.246394 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 29 16:04:11.246718 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 29 16:04:11.246975 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 29 16:04:11.248529 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 29 16:04:11.249438 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 29 16:04:11.249732 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 29 16:04:11.250030 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 29 16:04:11.250347 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 29 16:04:11.250604 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 29 16:04:11.250842 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 29 16:04:11.253656 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 29 16:04:11.254008 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 29 16:04:11.254333 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 29 16:04:11.254591 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 29 16:04:11.254814 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 29 16:04:11.256153 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 29 16:04:11.256467 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 29 16:04:11.256682 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 16:04:11.256882 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 29 16:04:11.256910 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 16:04:11.256930 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 16:04:11.256950 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 16:04:11.256969 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 16:04:11.256990 kernel: iommu: Default domain type: Translated
Jan 29 16:04:11.257020 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 16:04:11.258023 kernel: efivars: Registered efivars operations
Jan 29 16:04:11.258096 kernel: vgaarb: loaded
Jan 29 16:04:11.258118 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 16:04:11.258138 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:04:11.258159 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:04:11.258179 kernel: pnp: PnP ACPI init
Jan 29 16:04:11.258500 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 29 16:04:11.258560 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 16:04:11.258581 kernel: NET: Registered PF_INET protocol family
Jan 29 16:04:11.258602 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:04:11.258623 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 16:04:11.258644 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:04:11.258665 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:04:11.258686 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 16:04:11.258708 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 16:04:11.258729 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:04:11.258757 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:04:11.258777 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:04:11.258800 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:04:11.258821 kernel: kvm [1]: HYP mode not available
Jan 29 16:04:11.258841 kernel: Initialise system trusted keyrings
Jan 29 16:04:11.258863 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 16:04:11.258885 kernel: Key type asymmetric registered
Jan 29 16:04:11.258907 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:04:11.258927 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 16:04:11.258956 kernel: io scheduler mq-deadline registered
Jan 29 16:04:11.258978 kernel: io scheduler kyber registered
Jan 29 16:04:11.258998 kernel: io scheduler bfq registered
Jan 29 16:04:11.259436 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 29 16:04:11.259481 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 16:04:11.259501 kernel: ACPI: button: Power Button [PWRB]
Jan 29 16:04:11.259521 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 29 16:04:11.259541 kernel: ACPI: button: Sleep Button [SLPB]
Jan 29 16:04:11.259574 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 16:04:11.259596 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 29 16:04:11.259903 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 29 16:04:11.259961 kernel: printk: console [ttyS0] disabled
Jan 29 16:04:11.259985 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 29 16:04:11.260005 kernel: printk: console [ttyS0] enabled
Jan 29 16:04:11.260026 kernel: printk: bootconsole [uart0] disabled
Jan 29 16:04:11.260097 kernel: thunder_xcv, ver 1.0
Jan 29 16:04:11.260119 kernel: thunder_bgx, ver 1.0
Jan 29 16:04:11.260152 kernel: nicpf, ver 1.0
Jan 29 16:04:11.260171 kernel: nicvf, ver 1.0
Jan 29 16:04:11.260445 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 16:04:11.260667 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T16:04:10 UTC (1738166650)
Jan 29 16:04:11.260697 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 16:04:11.260717 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 29 16:04:11.260737 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 16:04:11.260756 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 16:04:11.260786 kernel: NET: Registered PF_INET6 protocol family
Jan 29 16:04:11.260806 kernel: Segment Routing with IPv6
Jan 29 16:04:11.260827 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 16:04:11.260848 kernel: NET: Registered PF_PACKET protocol family
Jan 29 16:04:11.260868 kernel: Key type dns_resolver registered
Jan 29 16:04:11.260887 kernel: registered taskstats version 1
Jan 29 16:04:11.260907 kernel: Loading compiled-in X.509 certificates
Jan 29 16:04:11.260927 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6aa2640fb67e4af9702410ddab8a5c8b9fc0d77b'
Jan 29 16:04:11.260947 kernel: Key type .fscrypt registered
Jan 29 16:04:11.260975 kernel: Key type fscrypt-provisioning registered
Jan 29 16:04:11.260996 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 16:04:11.261016 kernel: ima: Allocated hash algorithm: sha1
Jan 29 16:04:11.261036 kernel: ima: No architecture policies found
Jan 29 16:04:11.261137 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 16:04:11.261157 kernel: clk: Disabling unused clocks
Jan 29 16:04:11.261177 kernel: Freeing unused kernel memory: 38336K
Jan 29 16:04:11.261196 kernel: Run /init as init process
Jan 29 16:04:11.261215 kernel: with arguments:
Jan 29 16:04:11.261234 kernel: /init
Jan 29 16:04:11.261263 kernel: with environment:
Jan 29 16:04:11.261282 kernel: HOME=/
Jan 29 16:04:11.261303 kernel: TERM=linux
Jan 29 16:04:11.261323 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 16:04:11.261346 systemd[1]: Successfully made /usr/ read-only.
Jan 29 16:04:11.261375 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:04:11.261398 systemd[1]: Detected virtualization amazon.
Jan 29 16:04:11.261427 systemd[1]: Detected architecture arm64.
Jan 29 16:04:11.261450 systemd[1]: Running in initrd.
Jan 29 16:04:11.261471 systemd[1]: No hostname configured, using default hostname.
Jan 29 16:04:11.261493 systemd[1]: Hostname set to <localhost>.
Jan 29 16:04:11.261514 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:04:11.261535 systemd[1]: Queued start job for default target initrd.target.
Jan 29 16:04:11.261557 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:04:11.261580 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:04:11.261611 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 16:04:11.261635 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:04:11.261658 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 16:04:11.261706 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 16:04:11.261739 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 16:04:11.261764 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 16:04:11.261787 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:04:11.261821 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:04:11.261844 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:04:11.261866 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:04:11.261887 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:04:11.261908 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:04:11.261929 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:04:11.261955 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:04:11.261979 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 16:04:11.262001 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 29 16:04:11.262031 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:04:11.264917 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:04:11.264954 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:04:11.264975 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:04:11.264997 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 16:04:11.265018 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:04:11.265103 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 16:04:11.265134 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 16:04:11.265170 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:04:11.265193 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:04:11.265215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:04:11.265237 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:04:11.265260 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:04:11.265282 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:04:11.265311 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:04:11.265405 systemd-journald[252]: Collecting audit messages is disabled.
Jan 29 16:04:11.265455 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:04:11.265484 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:04:11.265507 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:04:11.265531 kernel: Bridge firewalling registered
Jan 29 16:04:11.265556 systemd-journald[252]: Journal started
Jan 29 16:04:11.265598 systemd-journald[252]: Runtime Journal (/run/log/journal/ec218178b50a6b078b06c983936d48c2) is 8M, max 75.3M, 67.3M free.
Jan 29 16:04:11.212217 systemd-modules-load[253]: Inserted module 'overlay'
Jan 29 16:04:11.279163 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:04:11.279210 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:04:11.262131 systemd-modules-load[253]: Inserted module 'br_netfilter'
Jan 29 16:04:11.274722 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:04:11.283388 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:04:11.303095 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:04:11.321610 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:04:11.342194 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:04:11.372160 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:04:11.375099 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:04:11.378206 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:04:11.393556 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:04:11.405516 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:04:11.423482 dracut-cmdline[289]: dracut-dracut-053
Jan 29 16:04:11.432280 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac
Jan 29 16:04:11.515124 systemd-resolved[292]: Positive Trust Anchors:
Jan 29 16:04:11.515162 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:04:11.515225 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:04:11.589091 kernel: SCSI subsystem initialized
Jan 29 16:04:11.596085 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:04:11.610094 kernel: iscsi: registered transport (tcp)
Jan 29 16:04:11.633406 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:04:11.633484 kernel: QLogic iSCSI HBA Driver
Jan 29 16:04:11.729089 kernel: random: crng init done
Jan 29 16:04:11.729420 systemd-resolved[292]: Defaulting to hostname 'linux'.
Jan 29 16:04:11.733146 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:04:11.735943 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:04:11.767565 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:04:11.780391 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:04:11.817619 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:04:11.817719 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:04:11.819563 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:04:11.889141 kernel: raid6: neonx8 gen() 6462 MB/s
Jan 29 16:04:11.906101 kernel: raid6: neonx4 gen() 6414 MB/s
Jan 29 16:04:11.923099 kernel: raid6: neonx2 gen() 5375 MB/s
Jan 29 16:04:11.940108 kernel: raid6: neonx1 gen() 3897 MB/s
Jan 29 16:04:11.957108 kernel: raid6: int64x8 gen() 3559 MB/s
Jan 29 16:04:11.974104 kernel: raid6: int64x4 gen() 3645 MB/s
Jan 29 16:04:11.991118 kernel: raid6: int64x2 gen() 3534 MB/s
Jan 29 16:04:12.008932 kernel: raid6: int64x1 gen() 2698 MB/s
Jan 29 16:04:12.009004 kernel: raid6: using algorithm neonx8 gen() 6462 MB/s
Jan 29 16:04:12.026913 kernel: raid6: .... xor() 4702 MB/s, rmw enabled
Jan 29 16:04:12.026997 kernel: raid6: using neon recovery algorithm
Jan 29 16:04:12.035535 kernel: xor: measuring software checksum speed
Jan 29 16:04:12.035620 kernel: 8regs : 12461 MB/sec
Jan 29 16:04:12.036659 kernel: 32regs : 13012 MB/sec
Jan 29 16:04:12.037912 kernel: arm64_neon : 9565 MB/sec
Jan 29 16:04:12.037973 kernel: xor: using function: 32regs (13012 MB/sec)
Jan 29 16:04:12.125105 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:04:12.147443 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:04:12.157362 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:04:12.203789 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jan 29 16:04:12.215604 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:04:12.228361 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:04:12.267757 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Jan 29 16:04:12.332552 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:04:12.339441 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:04:12.464553 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:04:12.488347 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:04:12.543484 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:04:12.548790 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:04:12.555822 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:04:12.562425 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:04:12.574378 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:04:12.619413 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:04:12.685777 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:04:12.687010 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:04:12.696558 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 16:04:12.696601 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 29 16:04:12.723017 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 29 16:04:12.723545 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 29 16:04:12.723820 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:c8:77:e7:67:7b
Jan 29 16:04:12.701897 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:04:12.704280 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:04:12.704580 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:04:12.719437 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:04:12.743359 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 16:04:12.748257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:04:12.753950 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:04:12.788095 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 29 16:04:12.788532 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:04:12.792245 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 29 16:04:12.801727 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 29 16:04:12.803343 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:04:12.814099 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:04:12.814169 kernel: GPT:9289727 != 16777215
Jan 29 16:04:12.814194 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:04:12.816730 kernel: GPT:9289727 != 16777215
Jan 29 16:04:12.816803 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:04:12.816843 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 16:04:12.851142 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:04:12.941074 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (541)
Jan 29 16:04:12.953114 kernel: BTRFS: device fsid d7b4a0ef-7a03-4a6c-8f31-7cafae04447a devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (531)
Jan 29 16:04:13.017516 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 29 16:04:13.089186 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 29 16:04:13.116318 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 29 16:04:13.154956 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 29 16:04:13.155598 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 29 16:04:13.177443 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:04:13.194484 disk-uuid[663]: Primary Header is updated.
Jan 29 16:04:13.194484 disk-uuid[663]: Secondary Entries is updated.
Jan 29 16:04:13.194484 disk-uuid[663]: Secondary Header is updated.
Jan 29 16:04:13.206087 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 16:04:13.215089 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 16:04:14.225110 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 29 16:04:14.227291 disk-uuid[664]: The operation has completed successfully.
Jan 29 16:04:14.425340 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:04:14.427269 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:04:14.533364 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:04:14.543794 sh[923]: Success
Jan 29 16:04:14.564552 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 16:04:14.663646 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:04:14.677273 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:04:14.687160 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:04:14.723400 kernel: BTRFS info (device dm-0): first mount of filesystem d7b4a0ef-7a03-4a6c-8f31-7cafae04447a
Jan 29 16:04:14.723479 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:04:14.723521 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:04:14.726365 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:04:14.726412 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:04:14.810081 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 16:04:14.827133 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:04:14.832885 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:04:14.847280 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:04:14.854385 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:04:14.879081 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:04:14.879164 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:04:14.879198 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 16:04:14.888132 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 16:04:14.910106 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:04:14.909746 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:04:14.931814 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:04:14.943391 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:04:15.042689 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:04:15.057370 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:04:15.120441 systemd-networkd[1119]: lo: Link UP
Jan 29 16:04:15.120463 systemd-networkd[1119]: lo: Gained carrier
Jan 29 16:04:15.125582 systemd-networkd[1119]: Enumeration completed
Jan 29 16:04:15.126277 systemd-networkd[1119]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:04:15.126285 systemd-networkd[1119]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:04:15.126505 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:04:15.128627 systemd[1]: Reached target network.target - Network.
Jan 29 16:04:15.142663 systemd-networkd[1119]: eth0: Link UP
Jan 29 16:04:15.142683 systemd-networkd[1119]: eth0: Gained carrier
Jan 29 16:04:15.142702 systemd-networkd[1119]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:04:15.163167 systemd-networkd[1119]: eth0: DHCPv4 address 172.31.23.241/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 29 16:04:15.272240 ignition[1035]: Ignition 2.20.0
Jan 29 16:04:15.273720 ignition[1035]: Stage: fetch-offline
Jan 29 16:04:15.274197 ignition[1035]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:04:15.274222 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 16:04:15.274682 ignition[1035]: Ignition finished successfully
Jan 29 16:04:15.281227 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:04:15.293424 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 16:04:15.318424 ignition[1129]: Ignition 2.20.0
Jan 29 16:04:15.318463 ignition[1129]: Stage: fetch
Jan 29 16:04:15.319754 ignition[1129]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:04:15.319787 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 16:04:15.320002 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 16:04:15.331928 ignition[1129]: PUT result: OK
Jan 29 16:04:15.335233 ignition[1129]: parsed url from cmdline: ""
Jan 29 16:04:15.335400 ignition[1129]: no config URL provided
Jan 29 16:04:15.335995 ignition[1129]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:04:15.336035 ignition[1129]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:04:15.336220 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 16:04:15.341848 ignition[1129]: PUT result: OK
Jan 29 16:04:15.342804 ignition[1129]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 29 16:04:15.345886 ignition[1129]: GET result: OK
Jan 29 16:04:15.347207 ignition[1129]: parsing config with SHA512: 6e3ca4f87fd79e6b87c2bdb273adb942cfe2104e4e10b55864e3f476a0cf6050d5f8defb9e99d3c766f27d8e2f641ffd460097d3a59ed488e0cfc651b7dc0823
Jan 29 16:04:15.356386 unknown[1129]: fetched base config from "system"
Jan 29 16:04:15.356919 unknown[1129]: fetched base config from "system"
Jan 29 16:04:15.356939 unknown[1129]: fetched user config from "aws"
Jan 29 16:04:15.358161 ignition[1129]: fetch: fetch complete
Jan 29 16:04:15.358192 ignition[1129]: fetch: fetch passed
Jan 29 16:04:15.358302 ignition[1129]: Ignition finished successfully
Jan 29 16:04:15.367931 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 16:04:15.380334 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:04:15.406966 ignition[1135]: Ignition 2.20.0
Jan 29 16:04:15.406988 ignition[1135]: Stage: kargs
Jan 29 16:04:15.407630 ignition[1135]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:04:15.407664 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 16:04:15.407815 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 16:04:15.411435 ignition[1135]: PUT result: OK
Jan 29 16:04:15.421492 ignition[1135]: kargs: kargs passed
Jan 29 16:04:15.421614 ignition[1135]: Ignition finished successfully
Jan 29 16:04:15.425874 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:04:15.437370 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:04:15.460991 ignition[1141]: Ignition 2.20.0
Jan 29 16:04:15.461020 ignition[1141]: Stage: disks
Jan 29 16:04:15.462634 ignition[1141]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:04:15.462662 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 16:04:15.463459 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 16:04:15.470014 ignition[1141]: PUT result: OK
Jan 29 16:04:15.480300 ignition[1141]: disks: disks passed
Jan 29 16:04:15.480407 ignition[1141]: Ignition finished successfully
Jan 29 16:04:15.484199 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:04:15.488882 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:04:15.491830 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:04:15.495080 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:04:15.503076 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:04:15.506807 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:04:15.521395 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:04:15.569885 systemd-fsck[1150]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 16:04:15.575850 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:04:15.729222 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:04:15.806076 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 41c89329-6889-4dd8-82a1-efe68f55bab8 r/w with ordered data mode. Quota mode: none.
Jan 29 16:04:15.807105 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:04:15.810789 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:04:15.843201 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:04:15.847440 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:04:15.855868 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 16:04:15.855966 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:04:15.856018 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:04:15.870763 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:04:15.893804 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:04:15.901068 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1169)
Jan 29 16:04:15.907638 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:04:15.907708 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:04:15.908959 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 16:04:15.914612 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 16:04:15.915778 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:04:16.233521 initrd-setup-root[1193]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:04:16.241743 initrd-setup-root[1200]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:04:16.249723 initrd-setup-root[1207]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:04:16.258097 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:04:16.463429 systemd-networkd[1119]: eth0: Gained IPv6LL
Jan 29 16:04:16.542184 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:04:16.556352 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:04:16.563366 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:04:16.581166 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:04:16.623884 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:04:16.636825 ignition[1282]: INFO : Ignition 2.20.0
Jan 29 16:04:16.639664 ignition[1282]: INFO : Stage: mount
Jan 29 16:04:16.639664 ignition[1282]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:04:16.639664 ignition[1282]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 16:04:16.639664 ignition[1282]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 16:04:16.648823 ignition[1282]: INFO : PUT result: OK
Jan 29 16:04:16.653358 ignition[1282]: INFO : mount: mount passed
Jan 29 16:04:16.653358 ignition[1282]: INFO : Ignition finished successfully
Jan 29 16:04:16.659121 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:04:16.672345 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:04:16.721624 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:04:16.738189 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:04:16.758123 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1293)
Jan 29 16:04:16.758207 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9
Jan 29 16:04:16.759748 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 16:04:16.759793 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 29 16:04:16.766101 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 29 16:04:16.769426 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:04:16.812392 ignition[1311]: INFO : Ignition 2.20.0
Jan 29 16:04:16.812392 ignition[1311]: INFO : Stage: files
Jan 29 16:04:16.815796 ignition[1311]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:04:16.815796 ignition[1311]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 16:04:16.815796 ignition[1311]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 16:04:16.822793 ignition[1311]: INFO : PUT result: OK
Jan 29 16:04:16.826755 ignition[1311]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 16:04:16.829791 ignition[1311]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 16:04:16.829791 ignition[1311]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:04:16.863555 ignition[1311]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:04:16.866202 ignition[1311]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 16:04:16.868855 ignition[1311]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:04:16.866933 unknown[1311]: wrote ssh authorized keys file for user: core
Jan 29 16:04:16.873263 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 16:04:16.873263 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 29 16:04:16.958842 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 16:04:17.114505 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 16:04:17.114505 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 16:04:17.121322 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 29 16:04:17.599276 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 16:04:17.738108 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 16:04:17.738108 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 16:04:17.745626 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:04:17.745626 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:04:17.745626 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 16:04:17.745626 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:04:17.745626 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 16:04:17.745626 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:04:17.745626 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 16:04:17.745626 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:04:17.745626 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:04:17.745626 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 16:04:17.745626 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 16:04:17.745626 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 16:04:17.745626 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 29 16:04:18.085360 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 16:04:18.418149 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 16:04:18.418149 ignition[1311]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 16:04:18.424159 ignition[1311]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:04:18.424159 ignition[1311]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 16:04:18.424159 ignition[1311]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 16:04:18.424159 ignition[1311]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 16:04:18.424159 ignition[1311]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 16:04:18.424159 ignition[1311]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:04:18.424159 ignition[1311]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:04:18.424159 ignition[1311]: INFO : files: files passed
Jan 29 16:04:18.424159 ignition[1311]: INFO : Ignition finished successfully
Jan 29 16:04:18.429485 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:04:18.456479 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:04:18.462366 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:04:18.475274 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:04:18.477048 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:04:18.497705 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:04:18.497705 initrd-setup-root-after-ignition[1339]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:04:18.506974 initrd-setup-root-after-ignition[1343]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:04:18.512767 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:04:18.515503 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:04:18.532888 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:04:18.578171 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:04:18.580378 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 16:04:18.585999 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 16:04:18.603118 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 16:04:18.607577 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 16:04:18.617319 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 16:04:18.646078 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:04:18.657321 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 16:04:18.682956 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:04:18.684531 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:04:18.684877 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 16:04:18.686502 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 16:04:18.686779 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:04:18.687809 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 16:04:18.688136 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 16:04:18.688405 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 16:04:18.688691 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:04:18.689264 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 16:04:18.689861 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 16:04:18.690171 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:04:18.690732 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 16:04:18.691022 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 16:04:18.691584 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 16:04:18.691821 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 16:04:18.692062 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:04:18.692739 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:04:18.693644 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:04:18.694167 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 16:04:18.712582 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:04:18.712812 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 16:04:18.713067 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:04:18.713490 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 16:04:18.713722 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:04:18.713963 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 16:04:18.714184 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 16:04:18.776420 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 16:04:18.789148 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 16:04:18.792230 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 16:04:18.792534 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:04:18.796204 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 16:04:18.796442 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:04:18.824675 ignition[1363]: INFO : Ignition 2.20.0
Jan 29 16:04:18.824675 ignition[1363]: INFO : Stage: umount
Jan 29 16:04:18.824675 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:04:18.824675 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 29 16:04:18.824675 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 29 16:04:18.824675 ignition[1363]: INFO : PUT result: OK
Jan 29 16:04:18.842241 ignition[1363]: INFO : umount: umount passed
Jan 29 16:04:18.842241 ignition[1363]: INFO : Ignition finished successfully
Jan 29 16:04:18.845585 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 16:04:18.846551 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 16:04:18.852387 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 16:04:18.852569 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 16:04:18.862836 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 16:04:18.863008 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 16:04:18.866530 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 16:04:18.866637 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 16:04:18.875171 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 16:04:18.875298 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 16:04:18.881283 systemd[1]: Stopped target network.target - Network.
Jan 29 16:04:18.885464 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 16:04:18.885785 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:04:18.890625 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 16:04:18.893509 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 16:04:18.898033 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:04:18.902857 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 16:04:18.908981 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 16:04:18.916257 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 16:04:18.916345 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 16:04:18.926815 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 16:04:18.926897 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 16:04:18.928764 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 16:04:18.928856 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 16:04:18.930708 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 16:04:18.930791 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 16:04:18.932994 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 16:04:18.934933 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 16:04:18.938467 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:04:18.939573 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:04:18.941337 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:04:18.947308 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 16:04:18.947480 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 16:04:18.968542 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 16:04:18.970321 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 16:04:18.978907 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 29 16:04:18.981723 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 16:04:18.983829 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 16:04:18.989540 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 29 16:04:18.991705 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 16:04:18.991818 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:04:19.005319 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 16:04:19.008694 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 16:04:19.008814 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:04:19.011675 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 16:04:19.011760 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:04:19.014886 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 16:04:19.014977 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:04:19.030057 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 16:04:19.030164 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:04:19.037896 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:04:19.044163 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 29 16:04:19.044300 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:04:19.064101 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 16:04:19.064483 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 16:04:19.077157 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 16:04:19.078893 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:04:19.084172 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 16:04:19.084263 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:04:19.086243 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 16:04:19.086308 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:04:19.088184 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 16:04:19.088274 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:04:19.090449 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 16:04:19.090544 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:04:19.101250 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:04:19.102961 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:04:19.121271 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 16:04:19.126391 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 16:04:19.126520 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:04:19.134808 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 16:04:19.134920 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:04:19.137817 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 16:04:19.137909 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:04:19.140169 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:04:19.140262 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:04:19.158942 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 29 16:04:19.161429 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 29 16:04:19.169505 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 16:04:19.170169 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 16:04:19.178149 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 16:04:19.198406 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 16:04:19.214180 systemd[1]: Switching root.
Jan 29 16:04:19.276378 systemd-journald[252]: Journal stopped
Jan 29 16:04:22.044681 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Jan 29 16:04:22.044826 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 16:04:22.044872 kernel: SELinux: policy capability open_perms=1
Jan 29 16:04:22.044904 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 16:04:22.044943 kernel: SELinux: policy capability always_check_network=0
Jan 29 16:04:22.044984 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 16:04:22.045015 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 16:04:22.049126 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 16:04:22.049189 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 16:04:22.049239 kernel: audit: type=1403 audit(1738166660.222:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 16:04:22.049286 systemd[1]: Successfully loaded SELinux policy in 86.144ms.
Jan 29 16:04:22.049342 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.213ms.
Jan 29 16:04:22.049378 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 29 16:04:22.049419 systemd[1]: Detected virtualization amazon.
Jan 29 16:04:22.049449 systemd[1]: Detected architecture arm64.
Jan 29 16:04:22.049481 systemd[1]: Detected first boot.
Jan 29 16:04:22.049510 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 16:04:22.049540 zram_generator::config[1408]: No configuration found.
Jan 29 16:04:22.049572 kernel: NET: Registered PF_VSOCK protocol family
Jan 29 16:04:22.049603 systemd[1]: Populated /etc with preset unit settings.
Jan 29 16:04:22.049637 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 29 16:04:22.049701 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 16:04:22.049735 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 16:04:22.049768 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 16:04:22.049801 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 16:04:22.049835 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 16:04:22.049865 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 16:04:22.049896 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 16:04:22.049928 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 16:04:22.049969 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 16:04:22.050004 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 16:04:22.052155 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 16:04:22.052243 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:04:22.052280 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 16:04:22.052315 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 16:04:22.052347 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 16:04:22.052381 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 16:04:22.052412 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:04:22.052456 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 16:04:22.052487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:04:22.052519 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 16:04:22.052569 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 16:04:22.052600 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:04:22.052633 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:04:22.052665 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:04:22.052696 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:04:22.052734 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 16:04:22.052768 systemd[1]: Reached target swap.target - Swaps.
Jan 29 16:04:22.052800 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 16:04:22.052830 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 16:04:22.052862 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 29 16:04:22.052894 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 16:04:22.052927 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 16:04:22.052958 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 16:04:22.052990 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:04:22.053031 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:04:22.059174 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:04:22.059214 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:04:22.059250 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:04:22.059280 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:04:22.059311 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:04:22.059343 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:04:22.059376 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:04:22.059407 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:04:22.059451 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:04:22.059486 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:04:22.059516 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:04:22.059546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:04:22.059579 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:04:22.059612 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:04:22.059642 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:04:22.059672 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:04:22.059710 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:04:22.059743 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:04:22.059773 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:04:22.059803 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:04:22.059835 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:04:22.059867 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:04:22.059901 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:04:22.059932 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:04:22.059964 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:04:22.060001 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:04:22.060035 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:04:22.065275 kernel: loop: module loaded
Jan 29 16:04:22.065312 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:04:22.065360 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:04:22.065394 systemd[1]: Stopped verity-setup.service.
Jan 29 16:04:22.065425 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:04:22.065456 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:04:22.065486 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:04:22.065517 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:04:22.065552 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:04:22.065591 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:04:22.065621 kernel: fuse: init (API version 7.39)
Jan 29 16:04:22.065673 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:04:22.065713 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:04:22.065744 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:04:22.065773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:04:22.065802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:04:22.065832 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:04:22.065868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:04:22.065898 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:04:22.065931 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:04:22.065961 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:04:22.065993 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:04:22.066023 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:04:22.075168 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:04:22.075223 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:04:22.075336 systemd-journald[1491]: Collecting audit messages is disabled.
Jan 29 16:04:22.075394 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:04:22.075430 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:04:22.075461 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:04:22.075492 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:04:22.075521 systemd-journald[1491]: Journal started
Jan 29 16:04:22.075574 systemd-journald[1491]: Runtime Journal (/run/log/journal/ec218178b50a6b078b06c983936d48c2) is 8M, max 75.3M, 67.3M free.
Jan 29 16:04:21.441425 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:04:21.453097 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 29 16:04:21.454033 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:04:22.092560 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:04:22.092645 kernel: ACPI: bus type drm_connector registered
Jan 29 16:04:22.092683 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:04:22.097169 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:04:22.101917 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:04:22.104731 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:04:22.119867 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:04:22.120399 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:04:22.124544 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:04:22.158682 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:04:22.194726 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:04:22.194830 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:04:22.200288 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:04:22.212328 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:04:22.218929 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:04:22.221174 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:04:22.223645 systemd-tmpfiles[1517]: ACLs are not supported, ignoring.
Jan 29 16:04:22.223677 systemd-tmpfiles[1517]: ACLs are not supported, ignoring.
Jan 29 16:04:22.235424 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:04:22.241235 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:04:22.243440 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:04:22.247241 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:04:22.255389 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:04:22.261129 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:04:22.266259 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:04:22.286478 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:04:22.290993 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:04:22.294497 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:04:22.307906 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:04:22.320414 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:04:22.333093 kernel: loop0: detected capacity change from 0 to 123192
Jan 29 16:04:22.344526 systemd-journald[1491]: Time spent on flushing to /var/log/journal/ec218178b50a6b078b06c983936d48c2 is 61.714ms for 934 entries.
Jan 29 16:04:22.344526 systemd-journald[1491]: System Journal (/var/log/journal/ec218178b50a6b078b06c983936d48c2) is 8M, max 195.6M, 187.6M free.
Jan 29 16:04:22.417368 systemd-journald[1491]: Received client request to flush runtime journal.
Jan 29 16:04:22.422076 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:04:22.430208 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:04:22.454927 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:04:22.459664 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:04:22.464158 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:04:22.477514 kernel: loop1: detected capacity change from 0 to 113512
Jan 29 16:04:22.478768 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:04:22.499376 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:04:22.513807 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:04:22.531295 udevadm[1563]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 16:04:22.573845 systemd-tmpfiles[1565]: ACLs are not supported, ignoring.
Jan 29 16:04:22.574445 systemd-tmpfiles[1565]: ACLs are not supported, ignoring.
Jan 29 16:04:22.585077 kernel: loop2: detected capacity change from 0 to 201592
Jan 29 16:04:22.589120 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:04:22.648209 kernel: loop3: detected capacity change from 0 to 53784
Jan 29 16:04:22.729384 kernel: loop4: detected capacity change from 0 to 123192
Jan 29 16:04:22.762091 kernel: loop5: detected capacity change from 0 to 113512
Jan 29 16:04:22.783744 kernel: loop6: detected capacity change from 0 to 201592
Jan 29 16:04:22.808069 kernel: loop7: detected capacity change from 0 to 53784
Jan 29 16:04:22.831576 (sd-merge)[1571]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 29 16:04:22.838963 (sd-merge)[1571]: Merged extensions into '/usr'.
Jan 29 16:04:22.855917 systemd[1]: Reload requested from client PID 1549 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:04:22.855962 systemd[1]: Reloading...
Jan 29 16:04:22.990129 zram_generator::config[1599]: No configuration found.
Jan 29 16:04:23.410467 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:04:23.573551 systemd[1]: Reloading finished in 716 ms.
Jan 29 16:04:23.600199 ldconfig[1545]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 16:04:23.605106 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 16:04:23.608009 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:04:23.622497 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:04:23.633390 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:04:23.664846 systemd[1]: Reload requested from client PID 1651 ('systemctl') (unit ensure-sysext.service)...
Jan 29 16:04:23.664887 systemd[1]: Reloading...
Jan 29 16:04:23.701672 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 16:04:23.702292 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 16:04:23.704545 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 16:04:23.705286 systemd-tmpfiles[1652]: ACLs are not supported, ignoring.
Jan 29 16:04:23.705467 systemd-tmpfiles[1652]: ACLs are not supported, ignoring.
Jan 29 16:04:23.717927 systemd-tmpfiles[1652]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:04:23.717957 systemd-tmpfiles[1652]: Skipping /boot
Jan 29 16:04:23.749508 systemd-tmpfiles[1652]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:04:23.749542 systemd-tmpfiles[1652]: Skipping /boot
Jan 29 16:04:23.843097 zram_generator::config[1684]: No configuration found.
Jan 29 16:04:24.086581 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:04:24.241402 systemd[1]: Reloading finished in 575 ms.
Jan 29 16:04:24.260101 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:04:24.285898 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:04:24.310576 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:04:24.320151 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 16:04:24.335597 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 16:04:24.357628 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:04:24.373437 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:04:24.382253 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:04:24.407847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:04:24.417621 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:04:24.424327 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:04:24.437253 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:04:24.439558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:04:24.439868 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:04:24.448693 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:04:24.455580 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 16:04:24.482708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:04:24.483199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:04:24.483497 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:04:24.491913 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 16:04:24.504997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:04:24.516069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:04:24.518717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:04:24.519008 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:04:24.519403 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 16:04:24.525203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:04:24.525697 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:04:24.529329 augenrules[1767]: No rules
Jan 29 16:04:24.534085 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:04:24.534800 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:04:24.547523 systemd[1]: Finished ensure-sysext.service.
Jan 29 16:04:24.557903 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 16:04:24.570437 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:04:24.571137 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:04:24.575862 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:04:24.587032 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:04:24.587812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:04:24.592509 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:04:24.600443 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:04:24.600950 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:04:24.608595 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 16:04:24.632191 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 16:04:24.636697 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:04:24.642859 systemd-udevd[1745]: Using default interface naming scheme 'v255'.
Jan 29 16:04:24.693274 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:04:24.704255 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:04:24.716467 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:04:24.893831 systemd-networkd[1786]: lo: Link UP
Jan 29 16:04:24.893857 systemd-networkd[1786]: lo: Gained carrier
Jan 29 16:04:24.896675 systemd-networkd[1786]: Enumeration completed
Jan 29 16:04:24.896876 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:04:24.908842 (udev-worker)[1799]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 16:04:24.953594 systemd-resolved[1744]: Positive Trust Anchors:
Jan 29 16:04:24.953618 systemd-resolved[1744]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:04:24.953703 systemd-resolved[1744]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:04:24.963861 systemd-resolved[1744]: Defaulting to hostname 'linux'.
Jan 29 16:04:24.969550 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 29 16:04:24.974327 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 16:04:24.977371 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:04:24.982566 systemd[1]: Reached target network.target - Network.
Jan 29 16:04:24.985220 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:04:25.019720 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 29 16:04:25.030603 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 29 16:04:25.062823 systemd-networkd[1786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:04:25.062848 systemd-networkd[1786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:04:25.066393 systemd-networkd[1786]: eth0: Link UP
Jan 29 16:04:25.066715 systemd-networkd[1786]: eth0: Gained carrier
Jan 29 16:04:25.066752 systemd-networkd[1786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:04:25.083213 systemd-networkd[1786]: eth0: DHCPv4 address 172.31.23.241/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 29 16:04:25.152112 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1809)
Jan 29 16:04:25.411716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:04:25.422127 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 29 16:04:25.438259 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:04:25.463707 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 16:04:25.474449 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 16:04:25.499962 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 16:04:25.505088 lvm[1911]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:04:25.528641 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:04:25.534173 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 16:04:25.537943 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:04:25.540746 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:04:25.542983 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 16:04:25.545332 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 16:04:25.547913 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 16:04:25.550075 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 16:04:25.552319 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 16:04:25.554565 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 16:04:25.554614 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:04:25.556229 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:04:25.559567 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 16:04:25.564296 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 16:04:25.572221 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 29 16:04:25.575192 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 29 16:04:25.577840 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 29 16:04:25.583980 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 16:04:25.587282 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 29 16:04:25.599455 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 16:04:25.603454 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 16:04:25.606392 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:04:25.608471 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:04:25.610545 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:04:25.610702 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:04:25.614078 lvm[1919]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:04:25.620252 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 16:04:25.633369 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 16:04:25.644247 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 16:04:25.650073 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 16:04:25.657390 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 16:04:25.659562 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 16:04:25.669504 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 16:04:25.682432 systemd[1]: Started ntpd.service - Network Time Service.
Jan 29 16:04:25.690266 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 16:04:25.701312 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 29 16:04:25.709422 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 16:04:25.719480 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 16:04:25.733379 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 16:04:25.738669 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 16:04:25.740650 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 16:04:25.744295 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 16:04:25.749615 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 16:04:25.757482 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 16:04:25.766221 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 16:04:25.767161 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 16:04:25.793928 jq[1936]: true
Jan 29 16:04:25.804183 jq[1923]: false
Jan 29 16:04:25.811095 update_engine[1934]: I20250129 16:04:25.807787 1934 main.cc:92] Flatcar Update Engine starting
Jan 29 16:04:25.836129 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 16:04:25.836561 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 16:04:25.861013 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 16:04:25.864319 dbus-daemon[1922]: [system] SELinux support is enabled
Jan 29 16:04:25.865073 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 16:04:25.875509 jq[1944]: true
Jan 29 16:04:25.884023 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 16:04:25.884528 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 16:04:25.891187 extend-filesystems[1924]: Found loop4
Jan 29 16:04:25.903870 extend-filesystems[1924]: Found loop5
Jan 29 16:04:25.903870 extend-filesystems[1924]: Found loop6
Jan 29 16:04:25.903870 extend-filesystems[1924]: Found loop7
Jan 29 16:04:25.903870 extend-filesystems[1924]: Found nvme0n1
Jan 29 16:04:25.903870 extend-filesystems[1924]: Found nvme0n1p1
Jan 29 16:04:25.903870 extend-filesystems[1924]: Found nvme0n1p2
Jan 29 16:04:25.903870 extend-filesystems[1924]: Found nvme0n1p3
Jan 29 16:04:25.903870 extend-filesystems[1924]: Found usr
Jan 29 16:04:25.903870 extend-filesystems[1924]: Found nvme0n1p4
Jan 29 16:04:25.903870 extend-filesystems[1924]: Found nvme0n1p6
Jan 29 16:04:25.903870 extend-filesystems[1924]: Found nvme0n1p7
Jan 29 16:04:25.903870 extend-filesystems[1924]: Found nvme0n1p9
Jan 29 16:04:25.903870 extend-filesystems[1924]: Checking size of /dev/nvme0n1p9
Jan 29 16:04:25.941589 dbus-daemon[1922]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1786 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 29 16:04:25.972228 tar[1939]: linux-arm64/LICENSE
Jan 29 16:04:25.972228 tar[1939]: linux-arm64/helm
Jan 29 16:04:25.972637 update_engine[1934]: I20250129 16:04:25.957361 1934 update_check_scheduler.cc:74] Next update check in 10m59s
Jan 29 16:04:25.916298 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 16:04:25.947979 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 29 16:04:25.916357 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 16:04:25.933942 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 16:04:25.933987 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 16:04:25.970456 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 29 16:04:25.973413 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 16:04:25.992849 (ntainerd)[1961]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 16:04:25.993544 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 16:04:26.040721 extend-filesystems[1924]: Resized partition /dev/nvme0n1p9
Jan 29 16:04:26.064990 extend-filesystems[1979]: resize2fs 1.47.1 (20-May-2024)
Jan 29 16:04:26.074920 ntpd[1926]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 14:24:45 UTC 2025 (1): Starting
Jan 29 16:04:26.078583 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 14:24:45 UTC 2025 (1): Starting
Jan 29 16:04:26.078583 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 29 16:04:26.078583 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: ----------------------------------------------------
Jan 29 16:04:26.078583 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: ntp-4 is maintained by Network Time Foundation,
Jan 29 16:04:26.078583 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 29 16:04:26.078583 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: corporation. Support and training for ntp-4 are
Jan 29 16:04:26.078583 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: available at https://www.nwtime.org/support
Jan 29 16:04:26.078583 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: ----------------------------------------------------
Jan 29 16:04:26.074978 ntpd[1926]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 29 16:04:26.074998 ntpd[1926]: ----------------------------------------------------
Jan 29 16:04:26.075017 ntpd[1926]: ntp-4 is maintained by Network Time Foundation,
Jan 29 16:04:26.075058 ntpd[1926]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 29 16:04:26.075081 ntpd[1926]: corporation. Support and training for ntp-4 are
Jan 29 16:04:26.075100 ntpd[1926]: available at https://www.nwtime.org/support
Jan 29 16:04:26.075118 ntpd[1926]: ----------------------------------------------------
Jan 29 16:04:26.102187 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jan 29 16:04:26.102308 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: proto: precision = 0.096 usec (-23)
Jan 29 16:04:26.102308 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: basedate set to 2025-01-17
Jan 29 16:04:26.102308 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: gps base set to 2025-01-19 (week 2350)
Jan 29 16:04:26.091313 ntpd[1926]: proto: precision = 0.096 usec (-23)
Jan 29 16:04:26.092347 ntpd[1926]: basedate set to 2025-01-17
Jan 29 16:04:26.092378 ntpd[1926]: gps base set to 2025-01-19 (week 2350)
Jan 29 16:04:26.107399 ntpd[1926]: Listen and drop on 0 v6wildcard [::]:123
Jan 29 16:04:26.112916 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: Listen and drop on 0 v6wildcard [::]:123
Jan 29 16:04:26.112916 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 29 16:04:26.112761 ntpd[1926]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 29 16:04:26.113810 ntpd[1926]: Listen normally on 2 lo 127.0.0.1:123
Jan 29 16:04:26.117743 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: Listen normally on 2 lo 127.0.0.1:123
Jan 29 16:04:26.117743 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: Listen normally on 3 eth0 172.31.23.241:123
Jan 29 16:04:26.117743 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: Listen normally on 4 lo [::1]:123
Jan 29 16:04:26.117743 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: bind(21) AF_INET6 fe80::4c8:77ff:fee7:677b%2#123 flags 0x11 failed: Cannot assign requested address
Jan 29 16:04:26.117743 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: unable to create socket on eth0 (5) for fe80::4c8:77ff:fee7:677b%2#123
Jan 29 16:04:26.117743 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: failed to init interface for address fe80::4c8:77ff:fee7:677b%2
Jan 29 16:04:26.117743 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: Listening on routing socket on fd #21 for interface updates
Jan 29 16:04:26.116659 ntpd[1926]: Listen normally on 3 eth0 172.31.23.241:123
Jan 29 16:04:26.116730 ntpd[1926]: Listen normally on 4 lo [::1]:123
Jan 29 16:04:26.116813 ntpd[1926]: bind(21) AF_INET6 fe80::4c8:77ff:fee7:677b%2#123 flags 0x11 failed: Cannot assign requested address
Jan 29 16:04:26.116852 ntpd[1926]: unable to create socket on eth0 (5) for fe80::4c8:77ff:fee7:677b%2#123
Jan 29 16:04:26.116880 ntpd[1926]: failed to init interface for address fe80::4c8:77ff:fee7:677b%2
Jan 29 16:04:26.116945 ntpd[1926]: Listening on routing socket on fd #21 for interface updates
Jan 29 16:04:26.154757 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 29 16:04:26.164418 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 29 16:04:26.164418 ntpd[1926]: 29 Jan 16:04:26 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 29 16:04:26.154810 ntpd[1926]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 29 16:04:26.205097 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jan 29 16:04:26.240476 coreos-metadata[1921]: Jan 29 16:04:26.225 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 29 16:04:26.240476 coreos-metadata[1921]: Jan 29 16:04:26.228 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 29 16:04:26.240476 coreos-metadata[1921]: Jan 29 16:04:26.234 INFO Fetch successful
Jan 29 16:04:26.240476 coreos-metadata[1921]: Jan 29 16:04:26.234 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 29 16:04:26.240476 coreos-metadata[1921]: Jan 29 16:04:26.235 INFO Fetch successful
Jan 29 16:04:26.240476 coreos-metadata[1921]: Jan 29 16:04:26.235 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 29 16:04:26.240476 coreos-metadata[1921]: Jan 29 16:04:26.239 INFO Fetch successful
Jan 29 16:04:26.240476 coreos-metadata[1921]: Jan 29 16:04:26.239 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 29 16:04:26.240476 coreos-metadata[1921]: Jan 29 16:04:26.240 INFO Fetch successful
Jan 29 16:04:26.241390 coreos-metadata[1921]: Jan 29 16:04:26.240 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 29 16:04:26.247434 coreos-metadata[1921]: Jan 29 16:04:26.247 INFO Fetch failed with 404: resource not found
Jan 29 16:04:26.251375 extend-filesystems[1979]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 29 16:04:26.251375 extend-filesystems[1979]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 16:04:26.251375 extend-filesystems[1979]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jan 29 16:04:26.264282 coreos-metadata[1921]: Jan 29 16:04:26.248 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 29 16:04:26.264282 coreos-metadata[1921]: Jan 29 16:04:26.258 INFO Fetch successful
Jan 29 16:04:26.264282 coreos-metadata[1921]: Jan 29 16:04:26.258 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 29 16:04:26.248345 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 29 16:04:26.264578 extend-filesystems[1924]: Resized filesystem in /dev/nvme0n1p9
Jan 29 16:04:26.278592 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 16:04:26.279785 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:04:26.286013 coreos-metadata[1921]: Jan 29 16:04:26.282 INFO Fetch successful Jan 29 16:04:26.286013 coreos-metadata[1921]: Jan 29 16:04:26.282 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 16:04:26.286013 coreos-metadata[1921]: Jan 29 16:04:26.282 INFO Fetch successful Jan 29 16:04:26.286013 coreos-metadata[1921]: Jan 29 16:04:26.282 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 16:04:26.286013 coreos-metadata[1921]: Jan 29 16:04:26.283 INFO Fetch successful Jan 29 16:04:26.286013 coreos-metadata[1921]: Jan 29 16:04:26.283 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 16:04:26.296263 bash[1992]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:04:26.294781 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:04:26.296727 coreos-metadata[1921]: Jan 29 16:04:26.289 INFO Fetch successful Jan 29 16:04:26.325434 systemd[1]: Starting sshkeys.service... Jan 29 16:04:26.396154 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:04:26.406104 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 16:04:26.433730 systemd-logind[1933]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 16:04:26.433789 systemd-logind[1933]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 29 16:04:26.434213 systemd-logind[1933]: New seat seat0. Jan 29 16:04:26.556825 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:04:26.586157 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:04:26.589312 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:04:26.676089 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 16:04:26.683865 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 16:04:26.687878 dbus-daemon[1922]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1966 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 16:04:26.733803 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1793) Jan 29 16:04:26.735443 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 16:04:26.793664 containerd[1961]: time="2025-01-29T16:04:26.790229760Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:04:26.799005 polkitd[2021]: Started polkitd version 121 Jan 29 16:04:26.850555 polkitd[2021]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 16:04:26.850696 polkitd[2021]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 16:04:26.863422 polkitd[2021]: Finished loading, compiling and executing 2 rules Jan 29 16:04:26.864804 dbus-daemon[1922]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 16:04:26.866167 systemd[1]: Started polkit.service - Authorization Manager. 
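Aside: the coreos-metadata fetches above follow the AWS IMDSv2 protocol: a PUT to the token endpoint, then GETs with the token attached. A self-contained sketch using only the standard library (it only works from inside an EC2 instance):

import urllib.request

IMDS = "http://169.254.169.254"

def imds_get(path: str) -> str:
    # Step 1: obtain a session token (the "Putting ... /latest/api/token"
    # entry in the log).
    token_req = urllib.request.Request(
        IMDS + "/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()
    # Step 2: fetch metadata with the token (the "Fetching ..." entries).
    req = urllib.request.Request(
        IMDS + path, headers={"X-aws-ec2-metadata-token": token})
    return urllib.request.urlopen(req, timeout=2).read().decode()

print(imds_get("/2021-01-03/meta-data/instance-id"))

The 404 for meta-data/ipv6 earlier is presumably just this instance having no global IPv6 address assigned, which matches ntpd finding only a link-local fe80:: address on eth0.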
Jan 29 16:04:26.873094 polkitd[2021]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 16:04:26.900821 coreos-metadata[2009]: Jan 29 16:04:26.900 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 16:04:26.907648 coreos-metadata[2009]: Jan 29 16:04:26.907 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 16:04:26.909777 coreos-metadata[2009]: Jan 29 16:04:26.909 INFO Fetch successful Jan 29 16:04:26.909777 coreos-metadata[2009]: Jan 29 16:04:26.909 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 16:04:26.909777 coreos-metadata[2009]: Jan 29 16:04:26.909 INFO Fetch successful Jan 29 16:04:26.914666 unknown[2009]: wrote ssh authorized keys file for user: core Jan 29 16:04:26.957602 systemd-hostnamed[1966]: Hostname set to (transient) Jan 29 16:04:26.960251 systemd-resolved[1744]: System hostname changed to 'ip-172-31-23-241'. Jan 29 16:04:26.973102 update-ssh-keys[2079]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:04:26.977267 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:04:26.984650 systemd[1]: Finished sshkeys.service. Jan 29 16:04:27.006579 containerd[1961]: time="2025-01-29T16:04:27.006501261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:04:27.015676 containerd[1961]: time="2025-01-29T16:04:27.015577858Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:04:27.015676 containerd[1961]: time="2025-01-29T16:04:27.015663334Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:04:27.015832 containerd[1961]: time="2025-01-29T16:04:27.015701338Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:04:27.016084 containerd[1961]: time="2025-01-29T16:04:27.016017094Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:04:27.016149 containerd[1961]: time="2025-01-29T16:04:27.016092598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:04:27.016280 containerd[1961]: time="2025-01-29T16:04:27.016232122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:04:27.016347 containerd[1961]: time="2025-01-29T16:04:27.016274098Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:04:27.016725 containerd[1961]: time="2025-01-29T16:04:27.016665670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:04:27.016725 containerd[1961]: time="2025-01-29T16:04:27.016715794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 29 16:04:27.016865 containerd[1961]: time="2025-01-29T16:04:27.016749418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:04:27.016865 containerd[1961]: time="2025-01-29T16:04:27.016773262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:04:27.019073 containerd[1961]: time="2025-01-29T16:04:27.016955458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:04:27.025012 containerd[1961]: time="2025-01-29T16:04:27.024932782Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:04:27.028365 containerd[1961]: time="2025-01-29T16:04:27.028294930Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:04:27.028365 containerd[1961]: time="2025-01-29T16:04:27.028353610Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:04:27.028620 containerd[1961]: time="2025-01-29T16:04:27.028576546Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:04:27.028710 containerd[1961]: time="2025-01-29T16:04:27.028693726Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:04:27.042995 containerd[1961]: time="2025-01-29T16:04:27.042916666Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:04:27.043204 containerd[1961]: time="2025-01-29T16:04:27.043013554Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:04:27.043204 containerd[1961]: time="2025-01-29T16:04:27.043096522Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:04:27.043204 containerd[1961]: time="2025-01-29T16:04:27.043137778Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:04:27.043204 containerd[1961]: time="2025-01-29T16:04:27.043172386Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:04:27.044007 containerd[1961]: time="2025-01-29T16:04:27.043455370Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:04:27.044007 containerd[1961]: time="2025-01-29T16:04:27.043913038Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:04:27.045749 containerd[1961]: time="2025-01-29T16:04:27.045570898Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:04:27.045749 containerd[1961]: time="2025-01-29T16:04:27.045645742Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:04:27.045749 containerd[1961]: time="2025-01-29T16:04:27.045688234Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 29 16:04:27.045749 containerd[1961]: time="2025-01-29T16:04:27.045740278Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:04:27.046810 containerd[1961]: time="2025-01-29T16:04:27.045772066Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:04:27.046810 containerd[1961]: time="2025-01-29T16:04:27.045801250Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:04:27.046810 containerd[1961]: time="2025-01-29T16:04:27.045832486Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:04:27.046810 containerd[1961]: time="2025-01-29T16:04:27.045863374Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:04:27.046810 containerd[1961]: time="2025-01-29T16:04:27.045892906Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:04:27.046810 containerd[1961]: time="2025-01-29T16:04:27.045920314Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:04:27.046810 containerd[1961]: time="2025-01-29T16:04:27.045946630Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:04:27.046810 containerd[1961]: time="2025-01-29T16:04:27.045986566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.046810 containerd[1961]: time="2025-01-29T16:04:27.046016926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.048282 containerd[1961]: time="2025-01-29T16:04:27.047099086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.048282 containerd[1961]: time="2025-01-29T16:04:27.047148166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.048282 containerd[1961]: time="2025-01-29T16:04:27.047183158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.048282 containerd[1961]: time="2025-01-29T16:04:27.047231230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.048282 containerd[1961]: time="2025-01-29T16:04:27.047259586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.049074 containerd[1961]: time="2025-01-29T16:04:27.047290510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.049074 containerd[1961]: time="2025-01-29T16:04:27.048554362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.049074 containerd[1961]: time="2025-01-29T16:04:27.048606634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.049074 containerd[1961]: time="2025-01-29T16:04:27.048636178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 29 16:04:27.049074 containerd[1961]: time="2025-01-29T16:04:27.048664954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.049074 containerd[1961]: time="2025-01-29T16:04:27.048695470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.049074 containerd[1961]: time="2025-01-29T16:04:27.048727390Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:04:27.049074 containerd[1961]: time="2025-01-29T16:04:27.048775786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.049074 containerd[1961]: time="2025-01-29T16:04:27.048810286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.049074 containerd[1961]: time="2025-01-29T16:04:27.048837370Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:04:27.049074 containerd[1961]: time="2025-01-29T16:04:27.048974062Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:04:27.049074 containerd[1961]: time="2025-01-29T16:04:27.049014058Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:04:27.049074 containerd[1961]: time="2025-01-29T16:04:27.049057714Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:04:27.051243 containerd[1961]: time="2025-01-29T16:04:27.049091674Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:04:27.051243 containerd[1961]: time="2025-01-29T16:04:27.049115146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:04:27.051243 containerd[1961]: time="2025-01-29T16:04:27.049143706Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:04:27.051243 containerd[1961]: time="2025-01-29T16:04:27.049169566Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:04:27.051243 containerd[1961]: time="2025-01-29T16:04:27.049194442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 16:04:27.051464 containerd[1961]: time="2025-01-29T16:04:27.049718518Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:04:27.051464 containerd[1961]: time="2025-01-29T16:04:27.049808818Z" level=info msg="Connect containerd service" Jan 29 16:04:27.051464 containerd[1961]: time="2025-01-29T16:04:27.049878154Z" level=info msg="using legacy CRI server" Jan 29 16:04:27.051464 containerd[1961]: time="2025-01-29T16:04:27.049895806Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:04:27.056072 containerd[1961]: time="2025-01-29T16:04:27.053583934Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:04:27.056072 containerd[1961]: time="2025-01-29T16:04:27.055964062Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:04:27.057159 
containerd[1961]: time="2025-01-29T16:04:27.056406970Z" level=info msg="Start subscribing containerd event" Jan 29 16:04:27.057159 containerd[1961]: time="2025-01-29T16:04:27.056479066Z" level=info msg="Start recovering state" Jan 29 16:04:27.057159 containerd[1961]: time="2025-01-29T16:04:27.056603050Z" level=info msg="Start event monitor" Jan 29 16:04:27.057159 containerd[1961]: time="2025-01-29T16:04:27.056625982Z" level=info msg="Start snapshots syncer" Jan 29 16:04:27.057159 containerd[1961]: time="2025-01-29T16:04:27.056646646Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:04:27.057159 containerd[1961]: time="2025-01-29T16:04:27.056665054Z" level=info msg="Start streaming server" Jan 29 16:04:27.057482 containerd[1961]: time="2025-01-29T16:04:27.057219610Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:04:27.059195 containerd[1961]: time="2025-01-29T16:04:27.058903354Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:04:27.068176 containerd[1961]: time="2025-01-29T16:04:27.064302334Z" level=info msg="containerd successfully booted in 0.275650s" Jan 29 16:04:27.064448 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:04:27.071809 locksmithd[1968]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:04:27.075682 ntpd[1926]: bind(24) AF_INET6 fe80::4c8:77ff:fee7:677b%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 16:04:27.077721 ntpd[1926]: 29 Jan 16:04:27 ntpd[1926]: bind(24) AF_INET6 fe80::4c8:77ff:fee7:677b%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 16:04:27.077721 ntpd[1926]: 29 Jan 16:04:27 ntpd[1926]: unable to create socket on eth0 (6) for fe80::4c8:77ff:fee7:677b%2#123 Jan 29 16:04:27.077721 ntpd[1926]: 29 Jan 16:04:27 ntpd[1926]: failed to init interface for address fe80::4c8:77ff:fee7:677b%2 Jan 29 16:04:27.075750 ntpd[1926]: unable to create socket on eth0 (6) for fe80::4c8:77ff:fee7:677b%2#123 Jan 29 16:04:27.075779 ntpd[1926]: failed to init interface for address fe80::4c8:77ff:fee7:677b%2 Jan 29 16:04:27.088240 systemd-networkd[1786]: eth0: Gained IPv6LL Jan 29 16:04:27.101122 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:04:27.104691 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:04:27.118311 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 16:04:27.142649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:27.149101 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:04:27.332772 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:04:27.351086 amazon-ssm-agent[2114]: Initializing new seelog logger Jan 29 16:04:27.351086 amazon-ssm-agent[2114]: New Seelog Logger Creation Complete Jan 29 16:04:27.351086 amazon-ssm-agent[2114]: 2025/01/29 16:04:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:04:27.351086 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:04:27.351086 amazon-ssm-agent[2114]: 2025/01/29 16:04:27 processing appconfig overrides Jan 29 16:04:27.351086 amazon-ssm-agent[2114]: 2025/01/29 16:04:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:04:27.351086 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 29 16:04:27.351086 amazon-ssm-agent[2114]: 2025/01/29 16:04:27 processing appconfig overrides Jan 29 16:04:27.351086 amazon-ssm-agent[2114]: 2025/01/29 16:04:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:04:27.351086 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:04:27.351086 amazon-ssm-agent[2114]: 2025/01/29 16:04:27 processing appconfig overrides Jan 29 16:04:27.351086 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO Proxy environment variables: Jan 29 16:04:27.357097 amazon-ssm-agent[2114]: 2025/01/29 16:04:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:04:27.357097 amazon-ssm-agent[2114]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 16:04:27.357097 amazon-ssm-agent[2114]: 2025/01/29 16:04:27 processing appconfig overrides Jan 29 16:04:27.450700 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO no_proxy: Jan 29 16:04:27.551341 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO https_proxy: Jan 29 16:04:27.650022 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO http_proxy: Jan 29 16:04:27.749246 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO Checking if agent identity type OnPrem can be assumed Jan 29 16:04:27.849058 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO Checking if agent identity type EC2 can be assumed Jan 29 16:04:27.946657 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO Agent will take identity from EC2 Jan 29 16:04:27.968889 sshd_keygen[1953]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:04:28.045694 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 16:04:28.070479 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:04:28.085541 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:04:28.101571 systemd[1]: Started sshd@0-172.31.23.241:22-139.178.89.65:46028.service - OpenSSH per-connection server daemon (139.178.89.65:46028). Jan 29 16:04:28.143308 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:04:28.145149 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 16:04:28.145771 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:04:28.159635 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:04:28.213976 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:04:28.234633 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:04:28.245066 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 16:04:28.249705 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:04:28.252103 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:04:28.344742 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 16:04:28.357067 tar[1939]: linux-arm64/README.md Jan 29 16:04:28.393108 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:04:28.401206 sshd[2156]: Accepted publickey for core from 139.178.89.65 port 46028 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:04:28.406533 sshd-session[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:28.428241 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
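Aside: the "Accepted publickey for core" entry above works because update-ssh-keys and coreos-metadata-sshkeys@core wrote /home/core/.ssh/authorized_keys earlier (16:04:26). A sketch of that kind of write with the permissions sshd insists on; the key string below is a placeholder, not from the log:

import os
import tempfile

def install_authorized_keys(key: str, home: str = "/home/core") -> None:
    ssh_dir = os.path.join(home, ".ssh")
    os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
    # Write to a temp file in the same directory, then rename, so sshd
    # never observes a partially written authorized_keys.
    fd, tmp = tempfile.mkstemp(dir=ssh_dir)
    with os.fdopen(fd, "w") as f:
        f.write(key.rstrip() + "\n")
    os.chmod(tmp, 0o600)
    os.replace(tmp, os.path.join(ssh_dir, "authorized_keys"))  # atomic

install_authorized_keys("ssh-rsa AAAA... core@example")  # placeholder key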
Jan 29 16:04:28.437706 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:04:28.447819 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 29 16:04:28.460990 systemd-logind[1933]: New session 1 of user core. Jan 29 16:04:28.492139 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:04:28.512433 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:04:28.533298 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 16:04:28.533298 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 29 16:04:28.533298 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO [Registrar] Starting registrar module Jan 29 16:04:28.533504 amazon-ssm-agent[2114]: 2025-01-29 16:04:27 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 16:04:28.533504 amazon-ssm-agent[2114]: 2025-01-29 16:04:28 INFO [EC2Identity] EC2 registration was successful. Jan 29 16:04:28.533504 amazon-ssm-agent[2114]: 2025-01-29 16:04:28 INFO [CredentialRefresher] credentialRefresher has started Jan 29 16:04:28.533504 amazon-ssm-agent[2114]: 2025-01-29 16:04:28 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 16:04:28.533504 amazon-ssm-agent[2114]: 2025-01-29 16:04:28 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 16:04:28.538875 (systemd)[2170]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:04:28.543708 systemd-logind[1933]: New session c1 of user core. Jan 29 16:04:28.549025 amazon-ssm-agent[2114]: 2025-01-29 16:04:28 INFO [CredentialRefresher] Next credential rotation will be in 30.833323081233335 minutes Jan 29 16:04:28.816535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:28.823231 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:04:28.836210 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:04:28.874696 systemd[2170]: Queued start job for default target default.target. Jan 29 16:04:28.881474 systemd[2170]: Created slice app.slice - User Application Slice. Jan 29 16:04:28.881547 systemd[2170]: Reached target paths.target - Paths. Jan 29 16:04:28.881661 systemd[2170]: Reached target timers.target - Timers. Jan 29 16:04:28.884893 systemd[2170]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:04:28.937768 systemd[2170]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:04:28.937921 systemd[2170]: Reached target sockets.target - Sockets. Jan 29 16:04:28.938028 systemd[2170]: Reached target basic.target - Basic System. Jan 29 16:04:28.938203 systemd[2170]: Reached target default.target - Main User Target. Jan 29 16:04:28.938269 systemd[2170]: Startup finished in 379ms. Jan 29 16:04:28.938574 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:04:28.950389 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:04:28.953362 systemd[1]: Startup finished in 1.223s (kernel) + 9.395s (initrd) + 8.815s (userspace) = 19.434s. Jan 29 16:04:29.135011 systemd[1]: Started sshd@1-172.31.23.241:22-139.178.89.65:46040.service - OpenSSH per-connection server daemon (139.178.89.65:46040). 
Jan 29 16:04:29.326874 sshd[2195]: Accepted publickey for core from 139.178.89.65 port 46040 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:04:29.329208 sshd-session[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:29.343153 systemd-logind[1933]: New session 2 of user core. Jan 29 16:04:29.347384 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:04:29.479508 sshd[2197]: Connection closed by 139.178.89.65 port 46040 Jan 29 16:04:29.480347 sshd-session[2195]: pam_unix(sshd:session): session closed for user core Jan 29 16:04:29.486165 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:04:29.490025 systemd[1]: sshd@1-172.31.23.241:22-139.178.89.65:46040.service: Deactivated successfully. Jan 29 16:04:29.495797 systemd-logind[1933]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:04:29.498492 systemd-logind[1933]: Removed session 2. Jan 29 16:04:29.529232 systemd[1]: Started sshd@2-172.31.23.241:22-139.178.89.65:46056.service - OpenSSH per-connection server daemon (139.178.89.65:46056). Jan 29 16:04:29.569598 amazon-ssm-agent[2114]: 2025-01-29 16:04:29 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 16:04:29.669169 amazon-ssm-agent[2114]: 2025-01-29 16:04:29 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2205) started Jan 29 16:04:29.678778 kubelet[2181]: E0129 16:04:29.678694 2181 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:04:29.685596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:04:29.686501 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:04:29.687645 systemd[1]: kubelet.service: Consumed 1.330s CPU time, 250M memory peak. Jan 29 16:04:29.763418 sshd[2203]: Accepted publickey for core from 139.178.89.65 port 46056 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:04:29.766747 sshd-session[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:29.770507 amazon-ssm-agent[2114]: 2025-01-29 16:04:29 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 16:04:29.784017 systemd-logind[1933]: New session 3 of user core. Jan 29 16:04:29.791972 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:04:29.911871 sshd[2220]: Connection closed by 139.178.89.65 port 46056 Jan 29 16:04:29.910798 sshd-session[2203]: pam_unix(sshd:session): session closed for user core Jan 29 16:04:29.917904 systemd[1]: sshd@2-172.31.23.241:22-139.178.89.65:46056.service: Deactivated successfully. Jan 29 16:04:29.922494 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:04:29.924652 systemd-logind[1933]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:04:29.926657 systemd-logind[1933]: Removed session 3. Jan 29 16:04:29.948655 systemd[1]: Started sshd@3-172.31.23.241:22-139.178.89.65:46060.service - OpenSSH per-connection server daemon (139.178.89.65:46060). 
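Aside: every kubelet start attempt in this log dies on the same missing file, /var/lib/kubelet/config.yaml, and systemd then schedules the restarts seen later. A sketch of seeding a minimal config file (illustrative only: the apiVersion/kind header is the genuine KubeletConfiguration GVK, but everything a working node actually needs is omitted; writing under /var/lib requires root):

import pathlib

MINIMAL = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
"""

cfg = pathlib.Path("/var/lib/kubelet/config.yaml")
cfg.parent.mkdir(parents=True, exist_ok=True)
cfg.write_text(MINIMAL)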
Jan 29 16:04:30.075746 ntpd[1926]: Listen normally on 7 eth0 [fe80::4c8:77ff:fee7:677b%2]:123 Jan 29 16:04:30.212240 sshd[2226]: Accepted publickey for core from 139.178.89.65 port 46060 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:04:30.214803 sshd-session[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:30.226486 systemd-logind[1933]: New session 4 of user core. Jan 29 16:04:30.233301 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:04:30.359260 sshd[2228]: Connection closed by 139.178.89.65 port 46060 Jan 29 16:04:30.360410 sshd-session[2226]: pam_unix(sshd:session): session closed for user core Jan 29 16:04:30.365610 systemd[1]: sshd@3-172.31.23.241:22-139.178.89.65:46060.service: Deactivated successfully. Jan 29 16:04:30.369757 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:04:30.373528 systemd-logind[1933]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:04:30.375507 systemd-logind[1933]: Removed session 4. Jan 29 16:04:30.400600 systemd[1]: Started sshd@4-172.31.23.241:22-139.178.89.65:46070.service - OpenSSH per-connection server daemon (139.178.89.65:46070). Jan 29 16:04:30.589097 sshd[2234]: Accepted publickey for core from 139.178.89.65 port 46070 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:04:30.591482 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:30.599309 systemd-logind[1933]: New session 5 of user core. Jan 29 16:04:30.607281 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:04:30.719853 sudo[2237]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:04:30.720690 sudo[2237]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:04:30.741279 sudo[2237]: pam_unix(sudo:session): session closed for user root Jan 29 16:04:30.764736 sshd[2236]: Connection closed by 139.178.89.65 port 46070 Jan 29 16:04:30.764551 sshd-session[2234]: pam_unix(sshd:session): session closed for user core Jan 29 16:04:30.770892 systemd[1]: sshd@4-172.31.23.241:22-139.178.89.65:46070.service: Deactivated successfully. Jan 29 16:04:30.774415 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:04:30.775828 systemd-logind[1933]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:04:30.778270 systemd-logind[1933]: Removed session 5. Jan 29 16:04:30.805532 systemd[1]: Started sshd@5-172.31.23.241:22-139.178.89.65:39252.service - OpenSSH per-connection server daemon (139.178.89.65:39252). Jan 29 16:04:30.987024 sshd[2243]: Accepted publickey for core from 139.178.89.65 port 39252 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:04:30.989696 sshd-session[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:31.000825 systemd-logind[1933]: New session 6 of user core. Jan 29 16:04:31.008391 systemd[1]: Started session-6.scope - Session 6 of User core.
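Aside: ntpd's "Listen normally on 7 eth0 [fe80::...]:123" above closes the loop on the earlier bind(21)/bind(24) failures: a link-local address cannot be bound while it is absent or still tentative (duplicate address detection in progress), which the kernel reports as "Cannot assign requested address"; once networkd logged "eth0: Gained IPv6LL", ntpd's next interface rescan succeeded. A reproduction sketch, with the address and interface taken from the log (port 123 needs root; any high port shows the same errno):

import errno
import socket

def try_bind(addr: str = "fe80::4c8:77ff:fee7:677b%eth0",
             port: int = 123) -> None:
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        # getaddrinfo resolves the %eth0 scope id into the sockaddr tuple.
        sockaddr = socket.getaddrinfo(addr, port, socket.AF_INET6,
                                      socket.SOCK_DGRAM)[0][4]
        s.bind(sockaddr)
        print("bound", sockaddr)
    except OSError as e:
        if e.errno == errno.EADDRNOTAVAIL:
            print("address not assigned yet (tentative or missing)")
        else:
            raise
    finally:
        s.close()

try_bind()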
Jan 29 16:04:31.113932 sudo[2247]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:04:31.115249 sudo[2247]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:04:31.121767 sudo[2247]: pam_unix(sudo:session): session closed for user root Jan 29 16:04:31.132322 sudo[2246]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:04:31.132947 sudo[2246]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:04:31.159816 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:04:31.208786 augenrules[2269]: No rules Jan 29 16:04:31.211330 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:04:31.211794 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:04:31.213806 sudo[2246]: pam_unix(sudo:session): session closed for user root Jan 29 16:04:31.237664 sshd[2245]: Connection closed by 139.178.89.65 port 39252 Jan 29 16:04:31.238921 sshd-session[2243]: pam_unix(sshd:session): session closed for user core Jan 29 16:04:31.243667 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:04:31.245937 systemd[1]: sshd@5-172.31.23.241:22-139.178.89.65:39252.service: Deactivated successfully. Jan 29 16:04:31.251386 systemd-logind[1933]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:04:31.253164 systemd-logind[1933]: Removed session 6. Jan 29 16:04:31.279289 systemd[1]: Started sshd@6-172.31.23.241:22-139.178.89.65:39254.service - OpenSSH per-connection server daemon (139.178.89.65:39254). Jan 29 16:04:31.460137 sshd[2278]: Accepted publickey for core from 139.178.89.65 port 39254 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:04:31.462767 sshd-session[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:04:31.473348 systemd-logind[1933]: New session 7 of user core. Jan 29 16:04:31.482296 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 16:04:31.583753 sudo[2281]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:04:31.585653 sudo[2281]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:04:32.044888 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:04:32.057677 (dockerd)[2299]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:04:32.418168 dockerd[2299]: time="2025-01-29T16:04:32.417362428Z" level=info msg="Starting up" Jan 29 16:04:33.388687 systemd-resolved[1744]: Clock change detected. Flushing caches. Jan 29 16:04:33.564173 dockerd[2299]: time="2025-01-29T16:04:33.564089720Z" level=info msg="Loading containers: start." Jan 29 16:04:33.821846 kernel: Initializing XFRM netlink socket Jan 29 16:04:33.853074 (udev-worker)[2323]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:04:33.944614 systemd-networkd[1786]: docker0: Link UP Jan 29 16:04:33.986094 dockerd[2299]: time="2025-01-29T16:04:33.986045026Z" level=info msg="Loading containers: done." 
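Aside: dockerd is coming up above; once it reports "API listen on /run/docker.sock" just below, the Docker Engine API's /_ping endpoint answers over the Unix socket. A standard-library-only probe using raw HTTP (sketch; needs access to the socket):

import socket

def docker_ping(path: str = "/run/docker.sock") -> bool:
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        reply = s.recv(4096).decode("ascii", "replace")
        # A healthy daemon answers "HTTP/1.x 200 OK" with body "OK".
        return "200" in reply.splitlines()[0]
    except OSError:
        return False
    finally:
        s.close()

print(docker_ping())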
Jan 29 16:04:34.024022 dockerd[2299]: time="2025-01-29T16:04:34.023873851Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:04:34.024270 dockerd[2299]: time="2025-01-29T16:04:34.024097147Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:04:34.024381 dockerd[2299]: time="2025-01-29T16:04:34.024331459Z" level=info msg="Daemon has completed initialization" Jan 29 16:04:34.124321 dockerd[2299]: time="2025-01-29T16:04:34.123962971Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:04:34.124290 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:04:35.144636 containerd[1961]: time="2025-01-29T16:04:35.144559856Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 16:04:35.941544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2849061954.mount: Deactivated successfully. Jan 29 16:04:37.869890 containerd[1961]: time="2025-01-29T16:04:37.869590778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:37.872033 containerd[1961]: time="2025-01-29T16:04:37.871938038Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=26220948" Jan 29 16:04:37.873690 containerd[1961]: time="2025-01-29T16:04:37.873580646Z" level=info msg="ImageCreate event name:\"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:37.879832 containerd[1961]: time="2025-01-29T16:04:37.879675014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:37.883216 containerd[1961]: time="2025-01-29T16:04:37.882442490Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"26217748\" in 2.737805666s" Jan 29 16:04:37.883216 containerd[1961]: time="2025-01-29T16:04:37.882515786Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\"" Jan 29 16:04:37.883571 containerd[1961]: time="2025-01-29T16:04:37.883499378Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 16:04:39.679376 containerd[1961]: time="2025-01-29T16:04:39.679296387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:39.681501 containerd[1961]: time="2025-01-29T16:04:39.681429243Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=22527107" Jan 29 16:04:39.683711 containerd[1961]: time="2025-01-29T16:04:39.683647023Z" level=info msg="ImageCreate event name:\"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:39.690348 containerd[1961]: time="2025-01-29T16:04:39.690266427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:39.692498 containerd[1961]: time="2025-01-29T16:04:39.692450799Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"23968433\" in 1.808880789s" Jan 29 16:04:39.692789 containerd[1961]: time="2025-01-29T16:04:39.692646243Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\"" Jan 29 16:04:39.693861 containerd[1961]: time="2025-01-29T16:04:39.693545643Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 16:04:40.248660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:04:40.260148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:40.857473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:40.867675 (kubelet)[2560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:04:40.957521 kubelet[2560]: E0129 16:04:40.956750 2560 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:04:40.964784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:04:40.965155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:04:40.965746 systemd[1]: kubelet.service: Consumed 302ms CPU time, 104M memory peak. 
Jan 29 16:04:41.402023 containerd[1961]: time="2025-01-29T16:04:41.401922675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:41.404565 containerd[1961]: time="2025-01-29T16:04:41.404449815Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=17481113" Jan 29 16:04:41.407228 containerd[1961]: time="2025-01-29T16:04:41.407100975Z" level=info msg="ImageCreate event name:\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:41.413847 containerd[1961]: time="2025-01-29T16:04:41.413576571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:41.417038 containerd[1961]: time="2025-01-29T16:04:41.416387355Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"18922457\" in 1.72277762s" Jan 29 16:04:41.417038 containerd[1961]: time="2025-01-29T16:04:41.416459871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\"" Jan 29 16:04:41.417453 containerd[1961]: time="2025-01-29T16:04:41.417251751Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 16:04:42.953775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount928100796.mount: Deactivated successfully. 
Jan 29 16:04:43.574209 containerd[1961]: time="2025-01-29T16:04:43.574113978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:43.576676 containerd[1961]: time="2025-01-29T16:04:43.576575490Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364397" Jan 29 16:04:43.578994 containerd[1961]: time="2025-01-29T16:04:43.578898426Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:43.584162 containerd[1961]: time="2025-01-29T16:04:43.584039862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:43.586025 containerd[1961]: time="2025-01-29T16:04:43.585731730Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 2.168422979s" Jan 29 16:04:43.586025 containerd[1961]: time="2025-01-29T16:04:43.585849558Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 29 16:04:43.586901 containerd[1961]: time="2025-01-29T16:04:43.586834806Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 16:04:44.252453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount118265720.mount: Deactivated successfully. 
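Aside: each of the pull records above pairs a byte count with a wall-clock duration, so it converts directly to throughput; taking the kube-proxy numbers from the entry above:

# Numbers copied from the "Pulled image ... kube-proxy:v1.32.1" entry.
size_bytes = 27363416   # size "27363416"
seconds = 2.168422979   # "in 2.168422979s"
print(f"{size_bytes / seconds / 1e6:.1f} MB/s")  # ~12.6 MB/s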
Jan 29 16:04:45.689340 containerd[1961]: time="2025-01-29T16:04:45.689242053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:45.691826 containerd[1961]: time="2025-01-29T16:04:45.691713729Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 29 16:04:45.693091 containerd[1961]: time="2025-01-29T16:04:45.693011985Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:45.711646 containerd[1961]: time="2025-01-29T16:04:45.711038481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:45.714307 containerd[1961]: time="2025-01-29T16:04:45.714228621Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.127328331s" Jan 29 16:04:45.714586 containerd[1961]: time="2025-01-29T16:04:45.714540753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 29 16:04:45.716696 containerd[1961]: time="2025-01-29T16:04:45.716626869Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 16:04:46.294290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1260939835.mount: Deactivated successfully. 
Jan 29 16:04:46.304921 containerd[1961]: time="2025-01-29T16:04:46.304172408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:46.306085 containerd[1961]: time="2025-01-29T16:04:46.305976596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 29 16:04:46.308258 containerd[1961]: time="2025-01-29T16:04:46.308159948Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:46.313054 containerd[1961]: time="2025-01-29T16:04:46.312985964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:46.315609 containerd[1961]: time="2025-01-29T16:04:46.315050228Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 598.353915ms" Jan 29 16:04:46.315609 containerd[1961]: time="2025-01-29T16:04:46.315119288Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 16:04:46.316445 containerd[1961]: time="2025-01-29T16:04:46.316054532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 16:04:46.976005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154655614.mount: Deactivated successfully. Jan 29 16:04:50.753874 containerd[1961]: time="2025-01-29T16:04:50.753782966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:50.756763 containerd[1961]: time="2025-01-29T16:04:50.756701414Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812429" Jan 29 16:04:50.757881 containerd[1961]: time="2025-01-29T16:04:50.757839494Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:50.764139 containerd[1961]: time="2025-01-29T16:04:50.764088194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:04:50.766759 containerd[1961]: time="2025-01-29T16:04:50.766708262Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.450595974s" Jan 29 16:04:50.766968 containerd[1961]: time="2025-01-29T16:04:50.766934570Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 29 16:04:51.013685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 29 16:04:51.025126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:51.533204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:51.541283 (kubelet)[2712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:04:51.621857 kubelet[2712]: E0129 16:04:51.619611 2712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:04:51.624319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:04:51.625420 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:04:51.627919 systemd[1]: kubelet.service: Consumed 286ms CPU time, 104.3M memory peak. Jan 29 16:04:57.309057 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 29 16:04:58.162093 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:58.162791 systemd[1]: kubelet.service: Consumed 286ms CPU time, 104.3M memory peak. Jan 29 16:04:58.171372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:58.244030 systemd[1]: Reload requested from client PID 2731 ('systemctl') (unit session-7.scope)... Jan 29 16:04:58.244070 systemd[1]: Reloading... Jan 29 16:04:58.471858 zram_generator::config[2788]: No configuration found. Jan 29 16:04:58.730369 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:04:58.952759 systemd[1]: Reloading finished in 708 ms. Jan 29 16:04:59.033120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:59.046618 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:04:59.051650 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:59.053100 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:04:59.053594 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:59.053682 systemd[1]: kubelet.service: Consumed 224ms CPU time, 91.3M memory peak. Jan 29 16:04:59.059475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:04:59.565094 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:04:59.577459 (kubelet)[2842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:04:59.654939 kubelet[2842]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:04:59.655382 kubelet[2842]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 29 16:04:59.655496 kubelet[2842]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:04:59.655770 kubelet[2842]: I0129 16:04:59.655719 2842 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:05:01.104841 kubelet[2842]: I0129 16:05:01.104752 2842 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Jan 29 16:05:01.105498 kubelet[2842]: I0129 16:05:01.104896 2842 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:05:01.105498 kubelet[2842]: I0129 16:05:01.105384 2842 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 29 16:05:01.148520 kubelet[2842]: E0129 16:05:01.148455 2842 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.241:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.241:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:05:01.153861 kubelet[2842]: I0129 16:05:01.153045 2842 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:05:01.169185 kubelet[2842]: E0129 16:05:01.169119 2842 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 16:05:01.169453 kubelet[2842]: I0129 16:05:01.169429 2842 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 16:05:01.174588 kubelet[2842]: I0129 16:05:01.174552 2842 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 16:05:01.178403 kubelet[2842]: I0129 16:05:01.178343 2842 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:05:01.178899 kubelet[2842]: I0129 16:05:01.178547 2842 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-241","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 16:05:01.179780 kubelet[2842]: I0129 16:05:01.179204 2842 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:05:01.179780 kubelet[2842]: I0129 16:05:01.179236 2842 container_manager_linux.go:304] "Creating device plugin manager"
Jan 29 16:05:01.179780 kubelet[2842]: I0129 16:05:01.179462 2842 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:05:01.185912 kubelet[2842]: I0129 16:05:01.185875 2842 kubelet.go:446] "Attempting to sync node with API server"
Jan 29 16:05:01.186100 kubelet[2842]: I0129 16:05:01.186078 2842 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:05:01.186222 kubelet[2842]: I0129 16:05:01.186204 2842 kubelet.go:352] "Adding apiserver pod source"
Jan 29 16:05:01.186354 kubelet[2842]: I0129 16:05:01.186334 2842 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:05:01.187852 kubelet[2842]: W0129 16:05:01.187737 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-241&limit=500&resourceVersion=0": dial tcp 172.31.23.241:6443: connect: connection refused
Jan 29 16:05:01.187996 kubelet[2842]: E0129 16:05:01.187880 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-241&limit=500&resourceVersion=0\": dial tcp 172.31.23.241:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:05:01.192860 kubelet[2842]: W0129 16:05:01.191176 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.241:6443: connect: connection refused
Jan 29 16:05:01.192860 kubelet[2842]: E0129 16:05:01.191262 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.241:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:05:01.192860 kubelet[2842]: I0129 16:05:01.191503 2842 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:05:01.192860 kubelet[2842]: I0129 16:05:01.192264 2842 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:05:01.192860 kubelet[2842]: W0129 16:05:01.192368 2842 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 16:05:01.194451 kubelet[2842]: I0129 16:05:01.194416 2842 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 29 16:05:01.194659 kubelet[2842]: I0129 16:05:01.194639 2842 server.go:1287] "Started kubelet"
Jan 29 16:05:01.203355 kubelet[2842]: I0129 16:05:01.203320 2842 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:05:01.204836 kubelet[2842]: E0129 16:05:01.204564 2842 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.241:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.241:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-241.181f3567d11e006e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-241,UID:ip-172-31-23-241,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-241,},FirstTimestamp:2025-01-29 16:05:01.19460875 +0000 UTC m=+1.610466333,LastTimestamp:2025-01-29 16:05:01.19460875 +0000 UTC m=+1.610466333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-241,}"
Jan 29 16:05:01.209650 kubelet[2842]: I0129 16:05:01.209569 2842 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:05:01.211403 kubelet[2842]: I0129 16:05:01.211341 2842 server.go:490] "Adding debug handlers to kubelet server"
Jan 29 16:05:01.213059 kubelet[2842]: I0129 16:05:01.212966 2842 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:05:01.213382 kubelet[2842]: I0129 16:05:01.213342 2842 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:05:01.213773 kubelet[2842]: I0129 16:05:01.213725 2842 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 16:05:01.215169 kubelet[2842]: I0129 16:05:01.215136 2842 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 29 16:05:01.215699 kubelet[2842]: E0129 16:05:01.215668 2842 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-23-241\" not found"
Jan 29 16:05:01.217608 kubelet[2842]: E0129 16:05:01.217552 2842 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 16:05:01.217768 kubelet[2842]: E0129 16:05:01.217740 2842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-241?timeout=10s\": dial tcp 172.31.23.241:6443: connect: connection refused" interval="200ms"
Jan 29 16:05:01.218153 kubelet[2842]: I0129 16:05:01.218099 2842 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:05:01.218317 kubelet[2842]: I0129 16:05:01.218267 2842 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:05:01.221353 kubelet[2842]: I0129 16:05:01.221298 2842 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:05:01.221907 kubelet[2842]: I0129 16:05:01.221321 2842 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 16:05:01.221907 kubelet[2842]: I0129 16:05:01.221597 2842 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:05:01.246084 kubelet[2842]: I0129 16:05:01.246018 2842 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 16:05:01.250714 kubelet[2842]: I0129 16:05:01.250661 2842 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 16:05:01.251259 kubelet[2842]: I0129 16:05:01.250937 2842 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 29 16:05:01.251259 kubelet[2842]: I0129 16:05:01.250984 2842 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
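Nearly every kubelet error in this stretch ends in "dial tcp 172.31.23.241:6443: connect: connection refused": the kubelet is up, but the kube-apiserver it bootstraps against is itself one of the static pods this same kubelet has not started yet, so nothing is listening on port 6443. The symptom is visible at the plain TCP level; a minimal probe against the same endpoint (address taken from the log, shown only for illustration):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The endpoint the kubelet dials for CSRs, reflector lists, and leases.
        conn, err := net.DialTimeout("tcp", "172.31.23.241:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not up yet:", err) // "connect: connection refused"
            return
        }
        defer conn.Close()
        fmt.Println("apiserver is listening")
    }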
Jan 29 16:05:01.251259 kubelet[2842]: I0129 16:05:01.251000 2842 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 29 16:05:01.251259 kubelet[2842]: E0129 16:05:01.251070 2842 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 16:05:01.255909 kubelet[2842]: W0129 16:05:01.255840 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.241:6443: connect: connection refused
Jan 29 16:05:01.260901 kubelet[2842]: E0129 16:05:01.259969 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.241:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:05:01.263299 kubelet[2842]: W0129 16:05:01.263255 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.241:6443: connect: connection refused
Jan 29 16:05:01.264032 kubelet[2842]: E0129 16:05:01.263992 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.241:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:05:01.276316 kubelet[2842]: I0129 16:05:01.276283 2842 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 29 16:05:01.276642 kubelet[2842]: I0129 16:05:01.276548 2842 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 29 16:05:01.276793 kubelet[2842]: I0129 16:05:01.276775 2842 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:05:01.280069 kubelet[2842]: I0129 16:05:01.280038 2842 policy_none.go:49] "None policy: Start"
Jan 29 16:05:01.280246 kubelet[2842]: I0129 16:05:01.280216 2842 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 29 16:05:01.280376 kubelet[2842]: I0129 16:05:01.280359 2842 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:05:01.291189 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 16:05:01.307735 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 16:05:01.315225 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
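The three slices just created are the systemd cgroup parents for the pod QoS classes: BestEffort pods land under kubepods-besteffort.slice, Burstable pods under kubepods-burstable.slice, and Guaranteed pods directly under kubepods.slice. The classification rule itself is simple; a self-contained sketch of it, deliberately simplified to one flat resource map rather than the real per-container k8s.io/api types:

    package main

    import "fmt"

    // qosClass applies the standard Kubernetes rule, simplified: no requests
    // or limits at all -> BestEffort; requests equal to limits for every
    // resource -> Guaranteed; anything in between -> Burstable.
    func qosClass(requests, limits map[string]string) string {
        if len(requests) == 0 && len(limits) == 0 {
            return "BestEffort"
        }
        if len(requests) == len(limits) {
            equal := true
            for name, req := range requests {
                if limits[name] != req {
                    equal = false
                    break
                }
            }
            if equal {
                return "Guaranteed"
            }
        }
        return "Burstable"
    }

    func main() {
        fmt.Println(qosClass(nil, nil))                              // BestEffort
        fmt.Println(qosClass(map[string]string{"cpu": "250m"}, nil)) // Burstable
        fmt.Println(qosClass(
            map[string]string{"cpu": "1", "memory": "512Mi"},
            map[string]string{"cpu": "1", "memory": "512Mi"},
        )) // Guaranteed
    }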
Jan 29 16:05:01.316457 kubelet[2842]: E0129 16:05:01.316411 2842 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-23-241\" not found"
Jan 29 16:05:01.326510 kubelet[2842]: I0129 16:05:01.325777 2842 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:05:01.326510 kubelet[2842]: I0129 16:05:01.326097 2842 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 16:05:01.326510 kubelet[2842]: I0129 16:05:01.326117 2842 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:05:01.326929 kubelet[2842]: I0129 16:05:01.326577 2842 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:05:01.329832 kubelet[2842]: E0129 16:05:01.329686 2842 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 29 16:05:01.329832 kubelet[2842]: E0129 16:05:01.329779 2842 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-241\" not found"
Jan 29 16:05:01.371929 systemd[1]: Created slice kubepods-burstable-pod5ce724182df412367891eabb3a4ab6a6.slice - libcontainer container kubepods-burstable-pod5ce724182df412367891eabb3a4ab6a6.slice.
Jan 29 16:05:01.386885 kubelet[2842]: E0129 16:05:01.386479 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-241\" not found" node="ip-172-31-23-241"
Jan 29 16:05:01.392105 systemd[1]: Created slice kubepods-burstable-podb1951bff192c7465c13da98eb8f7a374.slice - libcontainer container kubepods-burstable-podb1951bff192c7465c13da98eb8f7a374.slice.
Jan 29 16:05:01.399915 kubelet[2842]: E0129 16:05:01.399554 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-241\" not found" node="ip-172-31-23-241"
Jan 29 16:05:01.401258 systemd[1]: Created slice kubepods-burstable-podcbcae26fff010e45e543a451b53c82af.slice - libcontainer container kubepods-burstable-podcbcae26fff010e45e543a451b53c82af.slice.
Jan 29 16:05:01.404942 kubelet[2842]: E0129 16:05:01.404907 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-241\" not found" node="ip-172-31-23-241"
Jan 29 16:05:01.418588 kubelet[2842]: E0129 16:05:01.418539 2842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-241?timeout=10s\": dial tcp 172.31.23.241:6443: connect: connection refused" interval="400ms"
Jan 29 16:05:01.423112 kubelet[2842]: I0129 16:05:01.423071 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbcae26fff010e45e543a451b53c82af-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-241\" (UID: \"cbcae26fff010e45e543a451b53c82af\") " pod="kube-system/kube-scheduler-ip-172-31-23-241"
Jan 29 16:05:01.423320 kubelet[2842]: I0129 16:05:01.423296 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ce724182df412367891eabb3a4ab6a6-ca-certs\") pod \"kube-apiserver-ip-172-31-23-241\" (UID: \"5ce724182df412367891eabb3a4ab6a6\") " pod="kube-system/kube-apiserver-ip-172-31-23-241"
Jan 29 16:05:01.423523 kubelet[2842]: I0129 16:05:01.423467 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ce724182df412367891eabb3a4ab6a6-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-241\" (UID: \"5ce724182df412367891eabb3a4ab6a6\") " pod="kube-system/kube-apiserver-ip-172-31-23-241"
Jan 29 16:05:01.423795 kubelet[2842]: I0129 16:05:01.423653 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ce724182df412367891eabb3a4ab6a6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-241\" (UID: \"5ce724182df412367891eabb3a4ab6a6\") " pod="kube-system/kube-apiserver-ip-172-31-23-241"
Jan 29 16:05:01.423795 kubelet[2842]: I0129 16:05:01.423737 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1951bff192c7465c13da98eb8f7a374-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-241\" (UID: \"b1951bff192c7465c13da98eb8f7a374\") " pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:01.424057 kubelet[2842]: I0129 16:05:01.423965 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b1951bff192c7465c13da98eb8f7a374-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-241\" (UID: \"b1951bff192c7465c13da98eb8f7a374\") " pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:01.424325 kubelet[2842]: I0129 16:05:01.424156 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b1951bff192c7465c13da98eb8f7a374-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-241\" (UID: \"b1951bff192c7465c13da98eb8f7a374\") " pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:01.424325 kubelet[2842]: I0129 16:05:01.424205 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1951bff192c7465c13da98eb8f7a374-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-241\" (UID: \"b1951bff192c7465c13da98eb8f7a374\") " pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:01.424325 kubelet[2842]: I0129 16:05:01.424246 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1951bff192c7465c13da98eb8f7a374-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-241\" (UID: \"b1951bff192c7465c13da98eb8f7a374\") " pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:01.429593 kubelet[2842]: I0129 16:05:01.429169 2842 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-23-241"
Jan 29 16:05:01.429731 kubelet[2842]: E0129 16:05:01.429671 2842 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.23.241:6443/api/v1/nodes\": dial tcp 172.31.23.241:6443: connect: connection refused" node="ip-172-31-23-241"
Jan 29 16:05:01.632248 kubelet[2842]: I0129 16:05:01.632084 2842 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-23-241"
Jan 29 16:05:01.632649 kubelet[2842]: E0129 16:05:01.632560 2842 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.23.241:6443/api/v1/nodes\": dial tcp 172.31.23.241:6443: connect: connection refused" node="ip-172-31-23-241"
Jan 29 16:05:01.688052 containerd[1961]: time="2025-01-29T16:05:01.687952104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-241,Uid:5ce724182df412367891eabb3a4ab6a6,Namespace:kube-system,Attempt:0,}"
Jan 29 16:05:01.701527 containerd[1961]: time="2025-01-29T16:05:01.701458524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-241,Uid:b1951bff192c7465c13da98eb8f7a374,Namespace:kube-system,Attempt:0,}"
Jan 29 16:05:01.706313 containerd[1961]: time="2025-01-29T16:05:01.706227768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-241,Uid:cbcae26fff010e45e543a451b53c82af,Namespace:kube-system,Attempt:0,}"
Jan 29 16:05:01.820223 kubelet[2842]: E0129 16:05:01.820164 2842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-241?timeout=10s\": dial tcp 172.31.23.241:6443: connect: connection refused" interval="800ms"
Jan 29 16:05:02.035460 kubelet[2842]: I0129 16:05:02.034975 2842 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-23-241"
Jan 29 16:05:02.035460 kubelet[2842]: E0129 16:05:02.035391 2842 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.23.241:6443/api/v1/nodes\": dial tcp 172.31.23.241:6443: connect: connection refused" node="ip-172-31-23-241"
Jan 29 16:05:02.085874 kubelet[2842]: W0129 16:05:02.085656 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-241&limit=500&resourceVersion=0": dial tcp 172.31.23.241:6443: connect: connection refused
Jan 29 16:05:02.085874 kubelet[2842]: E0129 16:05:02.085774 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-241&limit=500&resourceVersion=0\": dial tcp 172.31.23.241:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:05:02.103169 kubelet[2842]: W0129 16:05:02.103047 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.241:6443: connect: connection refused
Jan 29 16:05:02.103291 kubelet[2842]: E0129 16:05:02.103228 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.241:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.241:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:05:02.281056 kubelet[2842]: W0129 16:05:02.281004 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.241:6443: connect: connection refused
Jan 29 16:05:02.281643 kubelet[2842]: E0129 16:05:02.281079 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.241:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.241:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:05:02.299883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3469239144.mount: Deactivated successfully.
Jan 29 16:05:02.312270 containerd[1961]: time="2025-01-29T16:05:02.312182951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:05:02.316291 containerd[1961]: time="2025-01-29T16:05:02.316213931Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 29 16:05:02.327009 containerd[1961]: time="2025-01-29T16:05:02.326762507Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:05:02.331008 containerd[1961]: time="2025-01-29T16:05:02.330923927Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:05:02.333485 containerd[1961]: time="2025-01-29T16:05:02.333417707Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 16:05:02.336918 containerd[1961]: time="2025-01-29T16:05:02.336790199Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:05:02.338993 containerd[1961]: time="2025-01-29T16:05:02.338675771Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 16:05:02.340543 containerd[1961]: time="2025-01-29T16:05:02.340495211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:05:02.344551 containerd[1961]: time="2025-01-29T16:05:02.344132867Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 637.779435ms"
Jan 29 16:05:02.347099 containerd[1961]: time="2025-01-29T16:05:02.347023019Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 658.963719ms"
Jan 29 16:05:02.354137 containerd[1961]: time="2025-01-29T16:05:02.354011315Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 652.414299ms"
Jan 29 16:05:02.590135 containerd[1961]: time="2025-01-29T16:05:02.588865261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:05:02.590135 containerd[1961]: time="2025-01-29T16:05:02.589516765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:05:02.590135 containerd[1961]: time="2025-01-29T16:05:02.589955473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:05:02.590827 containerd[1961]: time="2025-01-29T16:05:02.590426401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:05:02.591791 containerd[1961]: time="2025-01-29T16:05:02.591605065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:05:02.594544 containerd[1961]: time="2025-01-29T16:05:02.594106861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:05:02.594544 containerd[1961]: time="2025-01-29T16:05:02.594150325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:05:02.594544 containerd[1961]: time="2025-01-29T16:05:02.594329377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:05:02.601748 containerd[1961]: time="2025-01-29T16:05:02.600475801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:05:02.601748 containerd[1961]: time="2025-01-29T16:05:02.600589549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:05:02.601748 containerd[1961]: time="2025-01-29T16:05:02.600627145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:05:02.601748 containerd[1961]: time="2025-01-29T16:05:02.600780481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:05:02.621597 kubelet[2842]: E0129 16:05:02.621525 2842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-241?timeout=10s\": dial tcp 172.31.23.241:6443: connect: connection refused" interval="1.6s"
Jan 29 16:05:02.654132 systemd[1]: Started cri-containerd-687025ece090e612b98bb1c8ada044144bcc9a29cda3853df29afad88f8935e5.scope - libcontainer container 687025ece090e612b98bb1c8ada044144bcc9a29cda3853df29afad88f8935e5.
Jan 29 16:05:02.665439 systemd[1]: Started cri-containerd-66ab943257f5772f34c433f2b72cb3bbf378ff2741c92c9bb3031a4d38bee6bc.scope - libcontainer container 66ab943257f5772f34c433f2b72cb3bbf378ff2741c92c9bb3031a4d38bee6bc.
Jan 29 16:05:02.670542 systemd[1]: Started cri-containerd-ff6905b3fa70a768f4a1316a540f06555dd9b227c6253f82a5cff5c63893e34c.scope - libcontainer container ff6905b3fa70a768f4a1316a540f06555dd9b227c6253f82a5cff5c63893e34c.
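Note the lease controller's retry interval across these attempts: 200ms, then 400ms, then 800ms, now 1.6s; each failure doubles the wait. That is a plain capped exponential backoff, sketched below (the doubling and the 200ms seed are taken from the log; the 7s ceiling is an assumption for illustration only):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        maxInterval := 7 * time.Second // assumed cap, for illustration only
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
            interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s -> ...
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }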
Jan 29 16:05:02.753286 kubelet[2842]: W0129 16:05:02.753068 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.241:6443: connect: connection refused
Jan 29 16:05:02.753443 kubelet[2842]: E0129 16:05:02.753383 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.241:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.241:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:05:02.765457 containerd[1961]: time="2025-01-29T16:05:02.765268765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-241,Uid:5ce724182df412367891eabb3a4ab6a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"687025ece090e612b98bb1c8ada044144bcc9a29cda3853df29afad88f8935e5\""
Jan 29 16:05:02.778716 containerd[1961]: time="2025-01-29T16:05:02.778562929Z" level=info msg="CreateContainer within sandbox \"687025ece090e612b98bb1c8ada044144bcc9a29cda3853df29afad88f8935e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 29 16:05:02.787718 containerd[1961]: time="2025-01-29T16:05:02.787593350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-241,Uid:b1951bff192c7465c13da98eb8f7a374,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff6905b3fa70a768f4a1316a540f06555dd9b227c6253f82a5cff5c63893e34c\""
Jan 29 16:05:02.807424 containerd[1961]: time="2025-01-29T16:05:02.807372098Z" level=info msg="CreateContainer within sandbox \"ff6905b3fa70a768f4a1316a540f06555dd9b227c6253f82a5cff5c63893e34c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 29 16:05:02.817592 containerd[1961]: time="2025-01-29T16:05:02.817438610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-241,Uid:cbcae26fff010e45e543a451b53c82af,Namespace:kube-system,Attempt:0,} returns sandbox id \"66ab943257f5772f34c433f2b72cb3bbf378ff2741c92c9bb3031a4d38bee6bc\""
Jan 29 16:05:02.822659 containerd[1961]: time="2025-01-29T16:05:02.822602582Z" level=info msg="CreateContainer within sandbox \"66ab943257f5772f34c433f2b72cb3bbf378ff2741c92c9bb3031a4d38bee6bc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 29 16:05:02.833733 containerd[1961]: time="2025-01-29T16:05:02.832869650Z" level=info msg="CreateContainer within sandbox \"687025ece090e612b98bb1c8ada044144bcc9a29cda3853df29afad88f8935e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cad1dbb6466774663760da6d0df22668eb7f90f395386a10f1618e05534f36ca\""
Jan 29 16:05:02.834753 containerd[1961]: time="2025-01-29T16:05:02.834706850Z" level=info msg="StartContainer for \"cad1dbb6466774663760da6d0df22668eb7f90f395386a10f1618e05534f36ca\""
Jan 29 16:05:02.842874 kubelet[2842]: I0129 16:05:02.841874 2842 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-23-241"
Jan 29 16:05:02.843890 kubelet[2842]: E0129 16:05:02.843793 2842 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.23.241:6443/api/v1/nodes\": dial tcp 172.31.23.241:6443: connect: connection refused" node="ip-172-31-23-241"
Jan 29 16:05:02.873855 containerd[1961]: time="2025-01-29T16:05:02.873638750Z" level=info msg="CreateContainer within sandbox \"ff6905b3fa70a768f4a1316a540f06555dd9b227c6253f82a5cff5c63893e34c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8c88e92ece21529f8fb727b75f18844b643994ea6cf0503f07ce2a94a0cfe42c\""
Jan 29 16:05:02.874489 containerd[1961]: time="2025-01-29T16:05:02.874448870Z" level=info msg="StartContainer for \"8c88e92ece21529f8fb727b75f18844b643994ea6cf0503f07ce2a94a0cfe42c\""
Jan 29 16:05:02.883318 containerd[1961]: time="2025-01-29T16:05:02.883245470Z" level=info msg="CreateContainer within sandbox \"66ab943257f5772f34c433f2b72cb3bbf378ff2741c92c9bb3031a4d38bee6bc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"01e8a9abffb7f70a9f13e61a4cf48da2554aa9a5c47a506ed73062eae0b36574\""
Jan 29 16:05:02.885486 containerd[1961]: time="2025-01-29T16:05:02.885385442Z" level=info msg="StartContainer for \"01e8a9abffb7f70a9f13e61a4cf48da2554aa9a5c47a506ed73062eae0b36574\""
Jan 29 16:05:02.904905 systemd[1]: Started cri-containerd-cad1dbb6466774663760da6d0df22668eb7f90f395386a10f1618e05534f36ca.scope - libcontainer container cad1dbb6466774663760da6d0df22668eb7f90f395386a10f1618e05534f36ca.
Jan 29 16:05:02.960102 systemd[1]: Started cri-containerd-01e8a9abffb7f70a9f13e61a4cf48da2554aa9a5c47a506ed73062eae0b36574.scope - libcontainer container 01e8a9abffb7f70a9f13e61a4cf48da2554aa9a5c47a506ed73062eae0b36574.
Jan 29 16:05:02.976554 systemd[1]: Started cri-containerd-8c88e92ece21529f8fb727b75f18844b643994ea6cf0503f07ce2a94a0cfe42c.scope - libcontainer container 8c88e92ece21529f8fb727b75f18844b643994ea6cf0503f07ce2a94a0cfe42c.
Jan 29 16:05:03.038584 containerd[1961]: time="2025-01-29T16:05:03.038459651Z" level=info msg="StartContainer for \"cad1dbb6466774663760da6d0df22668eb7f90f395386a10f1618e05534f36ca\" returns successfully"
Jan 29 16:05:03.106244 containerd[1961]: time="2025-01-29T16:05:03.105963299Z" level=info msg="StartContainer for \"01e8a9abffb7f70a9f13e61a4cf48da2554aa9a5c47a506ed73062eae0b36574\" returns successfully"
Jan 29 16:05:03.129969 containerd[1961]: time="2025-01-29T16:05:03.129866423Z" level=info msg="StartContainer for \"8c88e92ece21529f8fb727b75f18844b643994ea6cf0503f07ce2a94a0cfe42c\" returns successfully"
Jan 29 16:05:03.281849 kubelet[2842]: E0129 16:05:03.280305 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-241\" not found" node="ip-172-31-23-241"
Jan 29 16:05:03.281849 kubelet[2842]: E0129 16:05:03.281065 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-241\" not found" node="ip-172-31-23-241"
Jan 29 16:05:03.296864 kubelet[2842]: E0129 16:05:03.291626 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-241\" not found" node="ip-172-31-23-241"
Jan 29 16:05:04.290826 kubelet[2842]: E0129 16:05:04.290547 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-241\" not found" node="ip-172-31-23-241"
Jan 29 16:05:04.294018 kubelet[2842]: E0129 16:05:04.293308 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-241\" not found" node="ip-172-31-23-241"
Jan 29 16:05:04.447852 kubelet[2842]: I0129 16:05:04.447354 2842 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-23-241"
Jan 29 16:05:05.734563 kubelet[2842]: E0129 16:05:05.734516 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-241\" not found" node="ip-172-31-23-241"
Jan 29 16:05:07.753973 kubelet[2842]: E0129 16:05:07.753912 2842 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-241\" not found" node="ip-172-31-23-241"
Jan 29 16:05:07.785903 kubelet[2842]: E0129 16:05:07.784461 2842 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-241.181f3567d11e006e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-241,UID:ip-172-31-23-241,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-241,},FirstTimestamp:2025-01-29 16:05:01.19460875 +0000 UTC m=+1.610466333,LastTimestamp:2025-01-29 16:05:01.19460875 +0000 UTC m=+1.610466333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-241,}"
Jan 29 16:05:07.843829 kubelet[2842]: I0129 16:05:07.843448 2842 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-23-241"
Jan 29 16:05:07.843829 kubelet[2842]: E0129 16:05:07.843505 2842 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ip-172-31-23-241\": node \"ip-172-31-23-241\" not found"
Jan 29 16:05:07.867278 kubelet[2842]: E0129 16:05:07.866982 2842 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-241.181f3567d27bc0ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-241,UID:ip-172-31-23-241,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-23-241,},FirstTimestamp:2025-01-29 16:05:01.217530058 +0000 UTC m=+1.633387629,LastTimestamp:2025-01-29 16:05:01.217530058 +0000 UTC m=+1.633387629,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-241,}"
Jan 29 16:05:07.916602 kubelet[2842]: I0129 16:05:07.916341 2842 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:07.945566 kubelet[2842]: E0129 16:05:07.945188 2842 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-241.181f3567d5ebcd7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-241,UID:ip-172-31-23-241,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-23-241 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-23-241,},FirstTimestamp:2025-01-29 16:05:01.275204986 +0000 UTC m=+1.691062533,LastTimestamp:2025-01-29 16:05:01.275204986 +0000 UTC m=+1.691062533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-241,}"
Jan 29 16:05:07.946887 kubelet[2842]: E0129 16:05:07.946565 2842 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-23-241\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:07.946887 kubelet[2842]: I0129 16:05:07.946607 2842 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-241"
Jan 29 16:05:07.953176 kubelet[2842]: E0129 16:05:07.952941 2842 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-241\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-23-241"
Jan 29 16:05:07.953176 kubelet[2842]: I0129 16:05:07.952993 2842 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-241"
Jan 29 16:05:07.961256 kubelet[2842]: E0129 16:05:07.961044 2842 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-241\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-23-241"
Jan 29 16:05:08.010098 kubelet[2842]: E0129 16:05:08.009632 2842 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-241.181f3567d5ebebc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-241,UID:ip-172-31-23-241,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-172-31-23-241 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-172-31-23-241,},FirstTimestamp:2025-01-29 16:05:01.275212738 +0000 UTC m=+1.691070285,LastTimestamp:2025-01-29 16:05:01.275212738 +0000 UTC m=+1.691070285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-241,}"
Jan 29 16:05:08.192601 kubelet[2842]: I0129 16:05:08.192546 2842 apiserver.go:52] "Watching apiserver"
Jan 29 16:05:08.222217 kubelet[2842]: I0129 16:05:08.222157 2842 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 16:05:08.489526 kubelet[2842]: I0129 16:05:08.489466 2842 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-241"
Jan 29 16:05:08.752942 kubelet[2842]: I0129 16:05:08.750481 2842 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-241"
Jan 29 16:05:10.053158 systemd[1]: Reload requested from client PID 3118 ('systemctl') (unit session-7.scope)...
Jan 29 16:05:10.053640 systemd[1]: Reloading...
Jan 29 16:05:10.237864 zram_generator::config[3166]: No configuration found.
Jan 29 16:05:10.484640 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:05:10.737849 systemd[1]: Reloading finished in 683 ms.
Jan 29 16:05:10.786713 kubelet[2842]: I0129 16:05:10.786143 2842 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:05:10.786567 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:05:10.807181 systemd[1]: kubelet.service: Deactivated successfully.
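The Event dumps above show the other side of the bootstrap race: the kubelet has recorded events such as Reason:Starting since boot, first failing to post them ("connection refused"), then having them rejected because the default namespace does not exist yet; only node registration ultimately sticks. Event recording goes through client-go's event broadcaster; a minimal sketch that drains to the local log instead of an apiserver sink (the node name and event fields are copied from the log, everything else is assumed):

    package main

    import (
        "log"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/record"
    )

    func main() {
        // An event broadcaster like the kubelet's, drained to the local log
        // rather than posted to the (initially unreachable) apiserver.
        broadcaster := record.NewBroadcaster()
        broadcaster.StartLogging(log.Printf)
        recorder := broadcaster.NewRecorder(scheme.Scheme,
            v1.EventSource{Component: "kubelet", Host: "ip-172-31-23-241"})

        node := &v1.Node{ObjectMeta: metav1.ObjectMeta{Name: "ip-172-31-23-241", UID: "ip-172-31-23-241"}}
        recorder.Event(node, v1.EventTypeNormal, "Starting", "Starting kubelet.")

        time.Sleep(100 * time.Millisecond) // let the async watcher drain
        broadcaster.Shutdown()
    }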
Jan 29 16:05:10.807691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:05:10.807790 systemd[1]: kubelet.service: Consumed 2.356s CPU time, 124.1M memory peak.
Jan 29 16:05:10.815322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:05:11.453070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:05:11.470123 (kubelet)[3223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:05:11.580454 kubelet[3223]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:05:11.580454 kubelet[3223]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:05:11.580454 kubelet[3223]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:05:11.582077 kubelet[3223]: I0129 16:05:11.581107 3223 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:05:11.591612 sudo[3235]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 29 16:05:11.592455 sudo[3235]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 29 16:05:11.601849 kubelet[3223]: I0129 16:05:11.599607 3223 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Jan 29 16:05:11.601849 kubelet[3223]: I0129 16:05:11.599653 3223 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:05:11.602969 kubelet[3223]: I0129 16:05:11.602781 3223 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 29 16:05:11.606206 kubelet[3223]: I0129 16:05:11.606111 3223 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 16:05:11.611593 kubelet[3223]: I0129 16:05:11.611315 3223 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:05:11.619849 kubelet[3223]: E0129 16:05:11.619145 3223 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 16:05:11.619849 kubelet[3223]: I0129 16:05:11.619200 3223 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 16:05:11.629157 kubelet[3223]: I0129 16:05:11.629097 3223 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 16:05:11.629736 kubelet[3223]: I0129 16:05:11.629676 3223 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:05:11.630057 kubelet[3223]: I0129 16:05:11.629734 3223 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-241","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 16:05:11.630196 kubelet[3223]: I0129 16:05:11.630072 3223 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:05:11.630196 kubelet[3223]: I0129 16:05:11.630092 3223 container_manager_linux.go:304] "Creating device plugin manager"
Jan 29 16:05:11.630196 kubelet[3223]: I0129 16:05:11.630176 3223 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:05:11.630858 kubelet[3223]: I0129 16:05:11.630418 3223 kubelet.go:446] "Attempting to sync node with API server"
Jan 29 16:05:11.630858 kubelet[3223]: I0129 16:05:11.630487 3223 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:05:11.630858 kubelet[3223]: I0129 16:05:11.630522 3223 kubelet.go:352] "Adding apiserver pod source"
Jan 29 16:05:11.630858 kubelet[3223]: I0129 16:05:11.630543 3223 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:05:11.633941 kubelet[3223]: I0129 16:05:11.633449 3223 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:05:11.637841 kubelet[3223]: I0129 16:05:11.635365 3223 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:05:11.637841 kubelet[3223]: I0129 16:05:11.637575 3223 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 29 16:05:11.637841 kubelet[3223]: I0129 16:05:11.637626 3223 server.go:1287] "Started kubelet"
Jan 29 16:05:11.648850 kubelet[3223]: I0129 16:05:11.648500 3223 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:05:11.659990 kubelet[3223]: I0129 16:05:11.659419 3223 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:05:11.694924 kubelet[3223]: I0129 16:05:11.694873 3223 server.go:490] "Adding debug handlers to kubelet server"
Jan 29 16:05:11.698480 kubelet[3223]: I0129 16:05:11.678175 3223 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 16:05:11.720355 kubelet[3223]: I0129 16:05:11.661324 3223 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:05:11.727874 kubelet[3223]: I0129 16:05:11.726819 3223 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:05:11.727874 kubelet[3223]: E0129 16:05:11.688723 3223 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-23-241\" not found"
Jan 29 16:05:11.727874 kubelet[3223]: I0129 16:05:11.688446 3223 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 29 16:05:11.727874 kubelet[3223]: I0129 16:05:11.727716 3223 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:05:11.728698 kubelet[3223]: I0129 16:05:11.727936 3223 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:05:11.734286 kubelet[3223]: I0129 16:05:11.688469 3223 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 16:05:11.734824 kubelet[3223]: I0129 16:05:11.734513 3223 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:05:11.769416 kubelet[3223]: I0129 16:05:11.769231 3223 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:05:11.795523 kubelet[3223]: E0129 16:05:11.793012 3223 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 16:05:11.801697 kubelet[3223]: I0129 16:05:11.800935 3223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 16:05:11.819885 kubelet[3223]: I0129 16:05:11.815693 3223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 16:05:11.819885 kubelet[3223]: I0129 16:05:11.815754 3223 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 29 16:05:11.819885 kubelet[3223]: I0129 16:05:11.815785 3223 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 29 16:05:11.819885 kubelet[3223]: I0129 16:05:11.815841 3223 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 29 16:05:11.819885 kubelet[3223]: E0129 16:05:11.815932 3223 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 16:05:11.833975 update_engine[1934]: I20250129 16:05:11.833884 1934 update_attempter.cc:509] Updating boot flags...
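Both kubelet generations log "Systemd watchdog is not enabled" (watchdog_linux.go:99 and :127): at startup the kubelet asks systemd, over the sd_notify protocol, whether its unit sets WatchdogSec=, and only starts health-check pings when a non-zero interval comes back. With the go-systemd library (an assumption here, though the check itself is one call), that probe looks like:

    package main

    import (
        "fmt"

        "github.com/coreos/go-systemd/v22/daemon"
    )

    func main() {
        // Non-zero only when the unit sets WatchdogSec= and systemd exported
        // WATCHDOG_USEC; zero means "not enabled", as the kubelet logs above.
        interval, err := daemon.SdWatchdogEnabled(false)
        if err != nil {
            fmt.Println("watchdog check failed:", err)
            return
        }
        if interval == 0 {
            fmt.Println("systemd watchdog is not enabled")
            return
        }
        fmt.Println("watchdog interval:", interval)
    }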
Jan 29 16:05:11.917775 kubelet[3223]: E0129 16:05:11.916306 3223 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 29 16:05:12.032846 kubelet[3223]: I0129 16:05:12.030590 3223 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 29 16:05:12.032846 kubelet[3223]: I0129 16:05:12.030968 3223 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 29 16:05:12.032846 kubelet[3223]: I0129 16:05:12.031047 3223 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:05:12.032846 kubelet[3223]: I0129 16:05:12.031621 3223 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 16:05:12.032846 kubelet[3223]: I0129 16:05:12.031765 3223 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 16:05:12.032846 kubelet[3223]: I0129 16:05:12.031887 3223 policy_none.go:49] "None policy: Start"
Jan 29 16:05:12.032846 kubelet[3223]: I0129 16:05:12.031940 3223 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 29 16:05:12.032846 kubelet[3223]: I0129 16:05:12.031966 3223 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:05:12.032846 kubelet[3223]: I0129 16:05:12.032166 3223 state_mem.go:75] "Updated machine memory state"
Jan 29 16:05:12.049997 kubelet[3223]: I0129 16:05:12.048672 3223 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:05:12.049997 kubelet[3223]: I0129 16:05:12.049928 3223 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 16:05:12.050186 kubelet[3223]: I0129 16:05:12.049981 3223 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:05:12.051034 kubelet[3223]: I0129 16:05:12.050461 3223 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:05:12.063082 kubelet[3223]: E0129 16:05:12.062229 3223 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 29 16:05:12.119973 kubelet[3223]: I0129 16:05:12.119448 3223 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:12.126949 kubelet[3223]: I0129 16:05:12.126889 3223 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-241"
Jan 29 16:05:12.129713 kubelet[3223]: I0129 16:05:12.129088 3223 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-241"
Jan 29 16:05:12.144621 kubelet[3223]: I0129 16:05:12.136900 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ce724182df412367891eabb3a4ab6a6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-241\" (UID: \"5ce724182df412367891eabb3a4ab6a6\") " pod="kube-system/kube-apiserver-ip-172-31-23-241"
Jan 29 16:05:12.144621 kubelet[3223]: I0129 16:05:12.138545 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1951bff192c7465c13da98eb8f7a374-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-241\" (UID: \"b1951bff192c7465c13da98eb8f7a374\") " pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:12.144621 kubelet[3223]: I0129 16:05:12.138953 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbcae26fff010e45e543a451b53c82af-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-241\" (UID: \"cbcae26fff010e45e543a451b53c82af\") " pod="kube-system/kube-scheduler-ip-172-31-23-241"
Jan 29 16:05:12.144621 kubelet[3223]: I0129 16:05:12.139309 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ce724182df412367891eabb3a4ab6a6-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-241\" (UID: \"5ce724182df412367891eabb3a4ab6a6\") " pod="kube-system/kube-apiserver-ip-172-31-23-241"
Jan 29 16:05:12.144621 kubelet[3223]: I0129 16:05:12.140009 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1951bff192c7465c13da98eb8f7a374-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-241\" (UID: \"b1951bff192c7465c13da98eb8f7a374\") " pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:12.145003 kubelet[3223]: I0129 16:05:12.141394 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b1951bff192c7465c13da98eb8f7a374-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-241\" (UID: \"b1951bff192c7465c13da98eb8f7a374\") " pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:12.145003 kubelet[3223]: I0129 16:05:12.141648 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1951bff192c7465c13da98eb8f7a374-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-241\" (UID: \"b1951bff192c7465c13da98eb8f7a374\") " pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:12.145003 kubelet[3223]: I0129 16:05:12.141976 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b1951bff192c7465c13da98eb8f7a374-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-241\" (UID: \"b1951bff192c7465c13da98eb8f7a374\") " pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:12.145003 kubelet[3223]: I0129 16:05:12.142035 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ce724182df412367891eabb3a4ab6a6-ca-certs\") pod \"kube-apiserver-ip-172-31-23-241\" (UID: \"5ce724182df412367891eabb3a4ab6a6\") " pod="kube-system/kube-apiserver-ip-172-31-23-241"
Jan 29 16:05:12.157432 kubelet[3223]: E0129 16:05:12.157370 3223 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-241\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-241"
Jan 29 16:05:12.163939 kubelet[3223]: E0129 16:05:12.163574 3223 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-241\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-241"
Jan 29 16:05:12.198697 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3274)
Jan 29 16:05:12.218992 kubelet[3223]: I0129 16:05:12.209642 3223 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-23-241"
Jan 29 16:05:12.232126 kubelet[3223]: I0129 16:05:12.231236 3223 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-23-241"
Jan 29 16:05:12.232126 kubelet[3223]: I0129 16:05:12.231391 3223 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-23-241"
Jan 29 16:05:12.648648 kubelet[3223]: I0129 16:05:12.648223 3223 apiserver.go:52] "Watching apiserver"
Jan 29 16:05:12.735008 kubelet[3223]: I0129 16:05:12.734770 3223 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 16:05:12.750728 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3273)
Jan 29 16:05:12.885297 kubelet[3223]: I0129 16:05:12.884144 3223 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:12.886507 kubelet[3223]: I0129 16:05:12.885995 3223 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-241"
Jan 29 16:05:12.911668 kubelet[3223]: E0129 16:05:12.911169 3223 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-23-241\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-241"
Jan 29 16:05:12.911668 kubelet[3223]: E0129 16:05:12.911538 3223 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-241\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-241"
Jan 29 16:05:12.948584 sudo[3235]: pam_unix(sudo:session): session closed for user root
Jan 29 16:05:13.072495 kubelet[3223]: I0129 16:05:13.072395 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-241" podStartSLOduration=5.072370545 podStartE2EDuration="5.072370545s" podCreationTimestamp="2025-01-29 16:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:05:13.029415812 +0000 UTC m=+1.549001156"
watchObservedRunningTime="2025-01-29 16:05:13.072370545 +0000 UTC m=+1.591955889" Jan 29 16:05:13.121232 kubelet[3223]: I0129 16:05:13.119322 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-241" podStartSLOduration=5.119294613 podStartE2EDuration="5.119294613s" podCreationTimestamp="2025-01-29 16:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:05:13.073999461 +0000 UTC m=+1.593584805" watchObservedRunningTime="2025-01-29 16:05:13.119294613 +0000 UTC m=+1.638879945" Jan 29 16:05:13.264138 kubelet[3223]: I0129 16:05:13.260245 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-241" podStartSLOduration=1.260221858 podStartE2EDuration="1.260221858s" podCreationTimestamp="2025-01-29 16:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:05:13.123107109 +0000 UTC m=+1.642692465" watchObservedRunningTime="2025-01-29 16:05:13.260221858 +0000 UTC m=+1.779807178" Jan 29 16:05:13.362646 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3273) Jan 29 16:05:15.690433 kubelet[3223]: I0129 16:05:15.690370 3223 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:05:15.692635 containerd[1961]: time="2025-01-29T16:05:15.691332566Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:05:15.693259 kubelet[3223]: I0129 16:05:15.692999 3223 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:05:15.852505 systemd[1]: Created slice kubepods-besteffort-pod4329181b_2776_4c84_ab7f_29fd525811e7.slice - libcontainer container kubepods-besteffort-pod4329181b_2776_4c84_ab7f_29fd525811e7.slice. 
Jan 29 16:05:15.895460 kubelet[3223]: I0129 16:05:15.894969 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4329181b-2776-4c84-ab7f-29fd525811e7-xtables-lock\") pod \"kube-proxy-qn7js\" (UID: \"4329181b-2776-4c84-ab7f-29fd525811e7\") " pod="kube-system/kube-proxy-qn7js" Jan 29 16:05:15.895460 kubelet[3223]: I0129 16:05:15.895067 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4329181b-2776-4c84-ab7f-29fd525811e7-kube-proxy\") pod \"kube-proxy-qn7js\" (UID: \"4329181b-2776-4c84-ab7f-29fd525811e7\") " pod="kube-system/kube-proxy-qn7js" Jan 29 16:05:15.895460 kubelet[3223]: I0129 16:05:15.895134 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4329181b-2776-4c84-ab7f-29fd525811e7-lib-modules\") pod \"kube-proxy-qn7js\" (UID: \"4329181b-2776-4c84-ab7f-29fd525811e7\") " pod="kube-system/kube-proxy-qn7js" Jan 29 16:05:15.895864 kubelet[3223]: I0129 16:05:15.895666 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbcwt\" (UniqueName: \"kubernetes.io/projected/4329181b-2776-4c84-ab7f-29fd525811e7-kube-api-access-vbcwt\") pod \"kube-proxy-qn7js\" (UID: \"4329181b-2776-4c84-ab7f-29fd525811e7\") " pod="kube-system/kube-proxy-qn7js" Jan 29 16:05:15.917525 systemd[1]: Created slice kubepods-burstable-pod56162a04_30c2_4a10_8c9d_bf059cd76252.slice - libcontainer container kubepods-burstable-pod56162a04_30c2_4a10_8c9d_bf059cd76252.slice. Jan 29 16:05:15.921250 kubelet[3223]: W0129 16:05:15.921181 3223 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-23-241" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-241' and this object Jan 29 16:05:15.921400 kubelet[3223]: E0129 16:05:15.921283 3223 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-23-241\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-23-241' and this object" logger="UnhandledError" Jan 29 16:05:15.996783 kubelet[3223]: I0129 16:05:15.996624 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-hostproc\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 16:05:15.996977 kubelet[3223]: I0129 16:05:15.996857 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-host-proc-sys-net\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 16:05:15.997122 kubelet[3223]: I0129 16:05:15.997083 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-bpf-maps\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 16:05:16.001852 kubelet[3223]: I0129 16:05:15.998043 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56162a04-30c2-4a10-8c9d-bf059cd76252-hubble-tls\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 16:05:16.001852 kubelet[3223]: I0129 16:05:15.998114 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8z64\" (UniqueName: \"kubernetes.io/projected/56162a04-30c2-4a10-8c9d-bf059cd76252-kube-api-access-r8z64\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 16:05:16.001852 kubelet[3223]: I0129 16:05:15.998162 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-cilium-run\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 16:05:16.001852 kubelet[3223]: I0129 16:05:15.998202 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-host-proc-sys-kernel\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 16:05:16.001852 kubelet[3223]: I0129 16:05:15.998285 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-cni-path\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 16:05:16.001852 kubelet[3223]: I0129 16:05:15.998331 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-cilium-cgroup\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 16:05:16.002289 kubelet[3223]: I0129 16:05:15.998390 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-lib-modules\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 16:05:16.002289 kubelet[3223]: I0129 16:05:15.998431 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56162a04-30c2-4a10-8c9d-bf059cd76252-clustermesh-secrets\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 16:05:16.002289 kubelet[3223]: I0129 16:05:15.998469 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-etc-cni-netd\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 
16:05:16.002289 kubelet[3223]: I0129 16:05:15.998507 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-xtables-lock\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 16:05:16.002289 kubelet[3223]: I0129 16:05:15.998549 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56162a04-30c2-4a10-8c9d-bf059cd76252-cilium-config-path\") pod \"cilium-vvn98\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " pod="kube-system/cilium-vvn98" Jan 29 16:05:16.042178 kubelet[3223]: E0129 16:05:16.042118 3223 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 16:05:16.042178 kubelet[3223]: E0129 16:05:16.042172 3223 projected.go:194] Error preparing data for projected volume kube-api-access-vbcwt for pod kube-system/kube-proxy-qn7js: configmap "kube-root-ca.crt" not found Jan 29 16:05:16.042426 kubelet[3223]: E0129 16:05:16.042279 3223 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4329181b-2776-4c84-ab7f-29fd525811e7-kube-api-access-vbcwt podName:4329181b-2776-4c84-ab7f-29fd525811e7 nodeName:}" failed. No retries permitted until 2025-01-29 16:05:16.542229583 +0000 UTC m=+5.061814915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vbcwt" (UniqueName: "kubernetes.io/projected/4329181b-2776-4c84-ab7f-29fd525811e7-kube-api-access-vbcwt") pod "kube-proxy-qn7js" (UID: "4329181b-2776-4c84-ab7f-29fd525811e7") : configmap "kube-root-ca.crt" not found Jan 29 16:05:16.313006 sudo[2281]: pam_unix(sudo:session): session closed for user root Jan 29 16:05:16.336682 sshd[2280]: Connection closed by 139.178.89.65 port 39254 Jan 29 16:05:16.336488 sshd-session[2278]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:16.346001 systemd[1]: sshd@6-172.31.23.241:22-139.178.89.65:39254.service: Deactivated successfully. Jan 29 16:05:16.357001 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:05:16.358917 systemd[1]: session-7.scope: Consumed 11.116s CPU time, 265.5M memory peak. Jan 29 16:05:16.363463 systemd-logind[1933]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:05:16.366408 systemd-logind[1933]: Removed session 7. Jan 29 16:05:16.718880 systemd[1]: Created slice kubepods-besteffort-podb18610d7_2a21_4355_823a_2848ce68a094.slice - libcontainer container kubepods-besteffort-podb18610d7_2a21_4355_823a_2848ce68a094.slice. 
Jan 29 16:05:16.767750 containerd[1961]: time="2025-01-29T16:05:16.767694663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qn7js,Uid:4329181b-2776-4c84-ab7f-29fd525811e7,Namespace:kube-system,Attempt:0,}" Jan 29 16:05:16.806053 kubelet[3223]: I0129 16:05:16.805001 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b18610d7-2a21-4355-823a-2848ce68a094-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bc4dn\" (UID: \"b18610d7-2a21-4355-823a-2848ce68a094\") " pod="kube-system/cilium-operator-6c4d7847fc-bc4dn" Jan 29 16:05:16.806053 kubelet[3223]: I0129 16:05:16.805787 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjszq\" (UniqueName: \"kubernetes.io/projected/b18610d7-2a21-4355-823a-2848ce68a094-kube-api-access-pjszq\") pod \"cilium-operator-6c4d7847fc-bc4dn\" (UID: \"b18610d7-2a21-4355-823a-2848ce68a094\") " pod="kube-system/cilium-operator-6c4d7847fc-bc4dn" Jan 29 16:05:16.819442 containerd[1961]: time="2025-01-29T16:05:16.818694975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:05:16.819442 containerd[1961]: time="2025-01-29T16:05:16.818835759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:05:16.819442 containerd[1961]: time="2025-01-29T16:05:16.818876715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:16.819442 containerd[1961]: time="2025-01-29T16:05:16.819025911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:16.852134 systemd[1]: Started cri-containerd-4621f0a350db622d75c51ed075318788e3fdd34ae0b204435ca0ac87dbecbee0.scope - libcontainer container 4621f0a350db622d75c51ed075318788e3fdd34ae0b204435ca0ac87dbecbee0. Jan 29 16:05:16.893161 containerd[1961]: time="2025-01-29T16:05:16.893106244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qn7js,Uid:4329181b-2776-4c84-ab7f-29fd525811e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4621f0a350db622d75c51ed075318788e3fdd34ae0b204435ca0ac87dbecbee0\"" Jan 29 16:05:16.902621 containerd[1961]: time="2025-01-29T16:05:16.901321180Z" level=info msg="CreateContainer within sandbox \"4621f0a350db622d75c51ed075318788e3fdd34ae0b204435ca0ac87dbecbee0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:05:16.943886 containerd[1961]: time="2025-01-29T16:05:16.943729096Z" level=info msg="CreateContainer within sandbox \"4621f0a350db622d75c51ed075318788e3fdd34ae0b204435ca0ac87dbecbee0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fbf8dd69c63dcf93a71c9f850835ad43ea3e36387671bd56ad35437cf88c84cb\"" Jan 29 16:05:16.944651 containerd[1961]: time="2025-01-29T16:05:16.944592688Z" level=info msg="StartContainer for \"fbf8dd69c63dcf93a71c9f850835ad43ea3e36387671bd56ad35437cf88c84cb\"" Jan 29 16:05:16.992119 systemd[1]: Started cri-containerd-fbf8dd69c63dcf93a71c9f850835ad43ea3e36387671bd56ad35437cf88c84cb.scope - libcontainer container fbf8dd69c63dcf93a71c9f850835ad43ea3e36387671bd56ad35437cf88c84cb. 
Jan 29 16:05:17.030094 containerd[1961]: time="2025-01-29T16:05:17.030023688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bc4dn,Uid:b18610d7-2a21-4355-823a-2848ce68a094,Namespace:kube-system,Attempt:0,}" Jan 29 16:05:17.061495 containerd[1961]: time="2025-01-29T16:05:17.061249584Z" level=info msg="StartContainer for \"fbf8dd69c63dcf93a71c9f850835ad43ea3e36387671bd56ad35437cf88c84cb\" returns successfully" Jan 29 16:05:17.100285 containerd[1961]: time="2025-01-29T16:05:17.099286801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:05:17.101324 containerd[1961]: time="2025-01-29T16:05:17.100233205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:05:17.102222 containerd[1961]: time="2025-01-29T16:05:17.101598481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:17.102222 containerd[1961]: time="2025-01-29T16:05:17.101825869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:17.138864 containerd[1961]: time="2025-01-29T16:05:17.138689017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvn98,Uid:56162a04-30c2-4a10-8c9d-bf059cd76252,Namespace:kube-system,Attempt:0,}" Jan 29 16:05:17.171122 systemd[1]: Started cri-containerd-66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827.scope - libcontainer container 66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827. Jan 29 16:05:17.243267 containerd[1961]: time="2025-01-29T16:05:17.242998429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:05:17.243730 containerd[1961]: time="2025-01-29T16:05:17.243116005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:05:17.244551 containerd[1961]: time="2025-01-29T16:05:17.244173169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:17.245629 containerd[1961]: time="2025-01-29T16:05:17.245506057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:17.308010 systemd[1]: Started cri-containerd-d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5.scope - libcontainer container d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5. 
Jan 29 16:05:17.319924 containerd[1961]: time="2025-01-29T16:05:17.318784022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bc4dn,Uid:b18610d7-2a21-4355-823a-2848ce68a094,Namespace:kube-system,Attempt:0,} returns sandbox id \"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\"" Jan 29 16:05:17.324402 containerd[1961]: time="2025-01-29T16:05:17.324355070Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 16:05:17.373860 containerd[1961]: time="2025-01-29T16:05:17.373761626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vvn98,Uid:56162a04-30c2-4a10-8c9d-bf059cd76252,Namespace:kube-system,Attempt:0,} returns sandbox id \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\"" Jan 29 16:05:17.939729 kubelet[3223]: I0129 16:05:17.939557 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qn7js" podStartSLOduration=2.939506729 podStartE2EDuration="2.939506729s" podCreationTimestamp="2025-01-29 16:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:05:17.938860373 +0000 UTC m=+6.458445729" watchObservedRunningTime="2025-01-29 16:05:17.939506729 +0000 UTC m=+6.459092049" Jan 29 16:05:18.824742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325032673.mount: Deactivated successfully. Jan 29 16:05:19.626482 containerd[1961]: time="2025-01-29T16:05:19.625709885Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:05:19.627753 containerd[1961]: time="2025-01-29T16:05:19.627455129Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 16:05:19.629896 containerd[1961]: time="2025-01-29T16:05:19.629775941Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:05:19.634248 containerd[1961]: time="2025-01-29T16:05:19.634031021Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.309467607s" Jan 29 16:05:19.634248 containerd[1961]: time="2025-01-29T16:05:19.634093661Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 16:05:19.637770 containerd[1961]: time="2025-01-29T16:05:19.636420821Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:05:19.639861 containerd[1961]: time="2025-01-29T16:05:19.638340449Z" level=info msg="CreateContainer within sandbox 
\"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:05:19.671248 containerd[1961]: time="2025-01-29T16:05:19.671185277Z" level=info msg="CreateContainer within sandbox \"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\"" Jan 29 16:05:19.672852 containerd[1961]: time="2025-01-29T16:05:19.672385601Z" level=info msg="StartContainer for \"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\"" Jan 29 16:05:19.721632 systemd[1]: Started cri-containerd-059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f.scope - libcontainer container 059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f. Jan 29 16:05:19.769594 containerd[1961]: time="2025-01-29T16:05:19.769391238Z" level=info msg="StartContainer for \"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\" returns successfully" Jan 29 16:05:21.164611 kubelet[3223]: I0129 16:05:21.163607 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bc4dn" podStartSLOduration=2.850832394 podStartE2EDuration="5.163583309s" podCreationTimestamp="2025-01-29 16:05:16 +0000 UTC" firstStartedPulling="2025-01-29 16:05:17.322513634 +0000 UTC m=+5.842098954" lastFinishedPulling="2025-01-29 16:05:19.635264549 +0000 UTC m=+8.154849869" observedRunningTime="2025-01-29 16:05:19.966009259 +0000 UTC m=+8.485594675" watchObservedRunningTime="2025-01-29 16:05:21.163583309 +0000 UTC m=+9.683168653" Jan 29 16:05:25.694551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1401184057.mount: Deactivated successfully. 
Jan 29 16:05:28.408417 containerd[1961]: time="2025-01-29T16:05:28.408335785Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:05:28.410205 containerd[1961]: time="2025-01-29T16:05:28.410115517Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 16:05:28.413040 containerd[1961]: time="2025-01-29T16:05:28.412356997Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:05:28.423049 containerd[1961]: time="2025-01-29T16:05:28.422973325Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.786477876s" Jan 29 16:05:28.423049 containerd[1961]: time="2025-01-29T16:05:28.423046285Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 16:05:28.426679 containerd[1961]: time="2025-01-29T16:05:28.426622549Z" level=info msg="CreateContainer within sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:05:28.454063 containerd[1961]: time="2025-01-29T16:05:28.453917617Z" level=info msg="CreateContainer within sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9\"" Jan 29 16:05:28.455687 containerd[1961]: time="2025-01-29T16:05:28.455523877Z" level=info msg="StartContainer for \"e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9\"" Jan 29 16:05:28.511143 systemd[1]: Started cri-containerd-e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9.scope - libcontainer container e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9. Jan 29 16:05:28.567111 containerd[1961]: time="2025-01-29T16:05:28.567008066Z" level=info msg="StartContainer for \"e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9\" returns successfully" Jan 29 16:05:28.590479 systemd[1]: cri-containerd-e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9.scope: Deactivated successfully. Jan 29 16:05:29.441311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9-rootfs.mount: Deactivated successfully. 
Jan 29 16:05:29.958271 containerd[1961]: time="2025-01-29T16:05:29.958114432Z" level=info msg="shim disconnected" id=e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9 namespace=k8s.io Jan 29 16:05:29.958271 containerd[1961]: time="2025-01-29T16:05:29.958187416Z" level=warning msg="cleaning up after shim disconnected" id=e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9 namespace=k8s.io Jan 29 16:05:29.958271 containerd[1961]: time="2025-01-29T16:05:29.958207084Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:05:30.982855 containerd[1961]: time="2025-01-29T16:05:30.981036234Z" level=info msg="CreateContainer within sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:05:31.026465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3116729443.mount: Deactivated successfully. Jan 29 16:05:31.027834 containerd[1961]: time="2025-01-29T16:05:31.027441386Z" level=info msg="CreateContainer within sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460\"" Jan 29 16:05:31.029625 containerd[1961]: time="2025-01-29T16:05:31.029562722Z" level=info msg="StartContainer for \"bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460\"" Jan 29 16:05:31.087102 systemd[1]: Started cri-containerd-bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460.scope - libcontainer container bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460. Jan 29 16:05:31.148872 containerd[1961]: time="2025-01-29T16:05:31.148053362Z" level=info msg="StartContainer for \"bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460\" returns successfully" Jan 29 16:05:31.172680 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:05:31.173747 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:05:31.174247 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:05:31.186969 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:05:31.187440 systemd[1]: cri-containerd-bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460.scope: Deactivated successfully. Jan 29 16:05:31.223131 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:05:31.251222 containerd[1961]: time="2025-01-29T16:05:31.250966407Z" level=info msg="shim disconnected" id=bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460 namespace=k8s.io Jan 29 16:05:31.251222 containerd[1961]: time="2025-01-29T16:05:31.251053515Z" level=warning msg="cleaning up after shim disconnected" id=bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460 namespace=k8s.io Jan 29 16:05:31.251222 containerd[1961]: time="2025-01-29T16:05:31.251074887Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:05:31.987323 containerd[1961]: time="2025-01-29T16:05:31.987239455Z" level=info msg="CreateContainer within sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:05:32.010713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460-rootfs.mount: Deactivated successfully. 
Jan 29 16:05:32.032686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3485272031.mount: Deactivated successfully. Jan 29 16:05:32.039761 containerd[1961]: time="2025-01-29T16:05:32.039686643Z" level=info msg="CreateContainer within sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737\"" Jan 29 16:05:32.041028 containerd[1961]: time="2025-01-29T16:05:32.040932987Z" level=info msg="StartContainer for \"3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737\"" Jan 29 16:05:32.107127 systemd[1]: Started cri-containerd-3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737.scope - libcontainer container 3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737. Jan 29 16:05:32.166886 containerd[1961]: time="2025-01-29T16:05:32.166688871Z" level=info msg="StartContainer for \"3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737\" returns successfully" Jan 29 16:05:32.171640 systemd[1]: cri-containerd-3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737.scope: Deactivated successfully. Jan 29 16:05:32.233343 containerd[1961]: time="2025-01-29T16:05:32.233253724Z" level=info msg="shim disconnected" id=3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737 namespace=k8s.io Jan 29 16:05:32.233343 containerd[1961]: time="2025-01-29T16:05:32.233339152Z" level=warning msg="cleaning up after shim disconnected" id=3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737 namespace=k8s.io Jan 29 16:05:32.234112 containerd[1961]: time="2025-01-29T16:05:32.233360476Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:05:32.994756 containerd[1961]: time="2025-01-29T16:05:32.994685864Z" level=info msg="CreateContainer within sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:05:33.007816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737-rootfs.mount: Deactivated successfully. Jan 29 16:05:33.031274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1301059641.mount: Deactivated successfully. Jan 29 16:05:33.044306 containerd[1961]: time="2025-01-29T16:05:33.042443944Z" level=info msg="CreateContainer within sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9\"" Jan 29 16:05:33.044782 containerd[1961]: time="2025-01-29T16:05:33.044267860Z" level=info msg="StartContainer for \"b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9\"" Jan 29 16:05:33.103132 systemd[1]: Started cri-containerd-b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9.scope - libcontainer container b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9. Jan 29 16:05:33.149403 systemd[1]: cri-containerd-b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9.scope: Deactivated successfully. 
Jan 29 16:05:33.153307 containerd[1961]: time="2025-01-29T16:05:33.152362960Z" level=info msg="StartContainer for \"b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9\" returns successfully" Jan 29 16:05:33.202295 containerd[1961]: time="2025-01-29T16:05:33.202184645Z" level=info msg="shim disconnected" id=b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9 namespace=k8s.io Jan 29 16:05:33.202295 containerd[1961]: time="2025-01-29T16:05:33.202275965Z" level=warning msg="cleaning up after shim disconnected" id=b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9 namespace=k8s.io Jan 29 16:05:33.202683 containerd[1961]: time="2025-01-29T16:05:33.202300289Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:05:34.012125 containerd[1961]: time="2025-01-29T16:05:34.012041273Z" level=info msg="CreateContainer within sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:05:34.012978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9-rootfs.mount: Deactivated successfully. Jan 29 16:05:34.055102 containerd[1961]: time="2025-01-29T16:05:34.054925001Z" level=info msg="CreateContainer within sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\"" Jan 29 16:05:34.057743 containerd[1961]: time="2025-01-29T16:05:34.056040641Z" level=info msg="StartContainer for \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\"" Jan 29 16:05:34.118101 systemd[1]: Started cri-containerd-68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123.scope - libcontainer container 68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123. Jan 29 16:05:34.179560 containerd[1961]: time="2025-01-29T16:05:34.179014013Z" level=info msg="StartContainer for \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\" returns successfully" Jan 29 16:05:34.383505 kubelet[3223]: I0129 16:05:34.381086 3223 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 16:05:34.444618 systemd[1]: Created slice kubepods-burstable-pod7dad2964_6d0d_40d5_9217_1bf54733a303.slice - libcontainer container kubepods-burstable-pod7dad2964_6d0d_40d5_9217_1bf54733a303.slice. Jan 29 16:05:34.464672 systemd[1]: Created slice kubepods-burstable-pode3196858_1cc2_44fc_941b_f85f10f65898.slice - libcontainer container kubepods-burstable-pode3196858_1cc2_44fc_941b_f85f10f65898.slice. 
Jan 29 16:05:34.551862 kubelet[3223]: I0129 16:05:34.551679 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3196858-1cc2-44fc-941b-f85f10f65898-config-volume\") pod \"coredns-668d6bf9bc-n7mff\" (UID: \"e3196858-1cc2-44fc-941b-f85f10f65898\") " pod="kube-system/coredns-668d6bf9bc-n7mff" Jan 29 16:05:34.552388 kubelet[3223]: I0129 16:05:34.552261 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dad2964-6d0d-40d5-9217-1bf54733a303-config-volume\") pod \"coredns-668d6bf9bc-ljtvc\" (UID: \"7dad2964-6d0d-40d5-9217-1bf54733a303\") " pod="kube-system/coredns-668d6bf9bc-ljtvc" Jan 29 16:05:34.552793 kubelet[3223]: I0129 16:05:34.552668 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kgz6\" (UniqueName: \"kubernetes.io/projected/7dad2964-6d0d-40d5-9217-1bf54733a303-kube-api-access-2kgz6\") pod \"coredns-668d6bf9bc-ljtvc\" (UID: \"7dad2964-6d0d-40d5-9217-1bf54733a303\") " pod="kube-system/coredns-668d6bf9bc-ljtvc" Jan 29 16:05:34.553447 kubelet[3223]: I0129 16:05:34.553399 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjzxq\" (UniqueName: \"kubernetes.io/projected/e3196858-1cc2-44fc-941b-f85f10f65898-kube-api-access-wjzxq\") pod \"coredns-668d6bf9bc-n7mff\" (UID: \"e3196858-1cc2-44fc-941b-f85f10f65898\") " pod="kube-system/coredns-668d6bf9bc-n7mff" Jan 29 16:05:34.756721 containerd[1961]: time="2025-01-29T16:05:34.756207728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ljtvc,Uid:7dad2964-6d0d-40d5-9217-1bf54733a303,Namespace:kube-system,Attempt:0,}" Jan 29 16:05:34.774475 containerd[1961]: time="2025-01-29T16:05:34.774312476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n7mff,Uid:e3196858-1cc2-44fc-941b-f85f10f65898,Namespace:kube-system,Attempt:0,}" Jan 29 16:05:37.032706 (udev-worker)[4297]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:05:37.033719 systemd-networkd[1786]: cilium_host: Link UP Jan 29 16:05:37.037100 systemd-networkd[1786]: cilium_net: Link UP Jan 29 16:05:37.038952 systemd-networkd[1786]: cilium_net: Gained carrier Jan 29 16:05:37.039346 systemd-networkd[1786]: cilium_host: Gained carrier Jan 29 16:05:37.040469 (udev-worker)[4329]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:05:37.136153 systemd-networkd[1786]: cilium_net: Gained IPv6LL Jan 29 16:05:37.168120 systemd-networkd[1786]: cilium_host: Gained IPv6LL Jan 29 16:05:37.217698 (udev-worker)[4340]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:05:37.227709 systemd-networkd[1786]: cilium_vxlan: Link UP Jan 29 16:05:37.227724 systemd-networkd[1786]: cilium_vxlan: Gained carrier Jan 29 16:05:37.718224 kernel: NET: Registered PF_ALG protocol family Jan 29 16:05:39.002900 (udev-worker)[4298]: Network interface NamePolicy= disabled on kernel command line. 
Jan 29 16:05:39.032759 systemd-networkd[1786]: lxc_health: Link UP Jan 29 16:05:39.034363 systemd-networkd[1786]: cilium_vxlan: Gained IPv6LL Jan 29 16:05:39.043322 systemd-networkd[1786]: lxc_health: Gained carrier Jan 29 16:05:39.191253 kubelet[3223]: I0129 16:05:39.188792 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vvn98" podStartSLOduration=13.141734651 podStartE2EDuration="24.188768686s" podCreationTimestamp="2025-01-29 16:05:15 +0000 UTC" firstStartedPulling="2025-01-29 16:05:17.376945298 +0000 UTC m=+5.896530630" lastFinishedPulling="2025-01-29 16:05:28.423979333 +0000 UTC m=+16.943564665" observedRunningTime="2025-01-29 16:05:35.062335026 +0000 UTC m=+23.581920382" watchObservedRunningTime="2025-01-29 16:05:39.188768686 +0000 UTC m=+27.708354018" Jan 29 16:05:39.865861 kernel: eth0: renamed from tmp54d6c Jan 29 16:05:39.874778 systemd-networkd[1786]: lxc7f5394dcb3c4: Link UP Jan 29 16:05:39.877601 systemd-networkd[1786]: lxc7f5394dcb3c4: Gained carrier Jan 29 16:05:39.900013 systemd-networkd[1786]: lxc2c2bdeb3cf06: Link UP Jan 29 16:05:39.931386 kernel: eth0: renamed from tmp37048 Jan 29 16:05:39.935261 systemd-networkd[1786]: lxc2c2bdeb3cf06: Gained carrier Jan 29 16:05:39.945638 (udev-worker)[4341]: Network interface NamePolicy= disabled on kernel command line. Jan 29 16:05:40.937919 systemd-networkd[1786]: lxc7f5394dcb3c4: Gained IPv6LL Jan 29 16:05:41.065061 systemd-networkd[1786]: lxc_health: Gained IPv6LL Jan 29 16:05:41.576075 systemd-networkd[1786]: lxc2c2bdeb3cf06: Gained IPv6LL Jan 29 16:05:44.388515 ntpd[1926]: Listen normally on 8 cilium_host 192.168.0.206:123 Jan 29 16:05:44.389599 ntpd[1926]: 29 Jan 16:05:44 ntpd[1926]: Listen normally on 8 cilium_host 192.168.0.206:123 Jan 29 16:05:44.389599 ntpd[1926]: 29 Jan 16:05:44 ntpd[1926]: Listen normally on 9 cilium_net [fe80::e475:a0ff:fe5f:7e87%4]:123 Jan 29 16:05:44.389599 ntpd[1926]: 29 Jan 16:05:44 ntpd[1926]: Listen normally on 10 cilium_host [fe80::39:d4ff:feb2:8404%5]:123 Jan 29 16:05:44.389599 ntpd[1926]: 29 Jan 16:05:44 ntpd[1926]: Listen normally on 11 cilium_vxlan [fe80::cc60:58ff:fe9a:40cf%6]:123 Jan 29 16:05:44.389599 ntpd[1926]: 29 Jan 16:05:44 ntpd[1926]: Listen normally on 12 lxc_health [fe80::8451:54ff:fe78:1f3f%8]:123 Jan 29 16:05:44.389599 ntpd[1926]: 29 Jan 16:05:44 ntpd[1926]: Listen normally on 13 lxc7f5394dcb3c4 [fe80::fc1e:e3ff:fe83:1985%10]:123 Jan 29 16:05:44.389599 ntpd[1926]: 29 Jan 16:05:44 ntpd[1926]: Listen normally on 14 lxc2c2bdeb3cf06 [fe80::508d:b6ff:fe6e:20ef%12]:123 Jan 29 16:05:44.388649 ntpd[1926]: Listen normally on 9 cilium_net [fe80::e475:a0ff:fe5f:7e87%4]:123 Jan 29 16:05:44.388731 ntpd[1926]: Listen normally on 10 cilium_host [fe80::39:d4ff:feb2:8404%5]:123 Jan 29 16:05:44.388848 ntpd[1926]: Listen normally on 11 cilium_vxlan [fe80::cc60:58ff:fe9a:40cf%6]:123 Jan 29 16:05:44.388924 ntpd[1926]: Listen normally on 12 lxc_health [fe80::8451:54ff:fe78:1f3f%8]:123 Jan 29 16:05:44.388990 ntpd[1926]: Listen normally on 13 lxc7f5394dcb3c4 [fe80::fc1e:e3ff:fe83:1985%10]:123 Jan 29 16:05:44.389056 ntpd[1926]: Listen normally on 14 lxc2c2bdeb3cf06 [fe80::508d:b6ff:fe6e:20ef%12]:123 Jan 29 16:05:48.134664 containerd[1961]: time="2025-01-29T16:05:48.124670839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:05:48.134664 containerd[1961]: time="2025-01-29T16:05:48.125678839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:05:48.134664 containerd[1961]: time="2025-01-29T16:05:48.126564295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:48.134664 containerd[1961]: time="2025-01-29T16:05:48.126772291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:48.181840 containerd[1961]: time="2025-01-29T16:05:48.179473783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:05:48.182030 containerd[1961]: time="2025-01-29T16:05:48.181943743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:05:48.182147 containerd[1961]: time="2025-01-29T16:05:48.182031475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:48.182366 containerd[1961]: time="2025-01-29T16:05:48.182294947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:05:48.214158 systemd[1]: Started cri-containerd-370488f67a50e66567dbe36641c6a1840514d92394407ec26fd9171f949e21ae.scope - libcontainer container 370488f67a50e66567dbe36641c6a1840514d92394407ec26fd9171f949e21ae. Jan 29 16:05:48.265147 systemd[1]: Started cri-containerd-54d6cde4b4055ed870c4cad1a4a940dfafed0c3fcc3cc9c931a6ce923eb0477b.scope - libcontainer container 54d6cde4b4055ed870c4cad1a4a940dfafed0c3fcc3cc9c931a6ce923eb0477b. Jan 29 16:05:48.358624 containerd[1961]: time="2025-01-29T16:05:48.358446488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n7mff,Uid:e3196858-1cc2-44fc-941b-f85f10f65898,Namespace:kube-system,Attempt:0,} returns sandbox id \"370488f67a50e66567dbe36641c6a1840514d92394407ec26fd9171f949e21ae\"" Jan 29 16:05:48.369292 containerd[1961]: time="2025-01-29T16:05:48.368929760Z" level=info msg="CreateContainer within sandbox \"370488f67a50e66567dbe36641c6a1840514d92394407ec26fd9171f949e21ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:05:48.397528 containerd[1961]: time="2025-01-29T16:05:48.396392624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ljtvc,Uid:7dad2964-6d0d-40d5-9217-1bf54733a303,Namespace:kube-system,Attempt:0,} returns sandbox id \"54d6cde4b4055ed870c4cad1a4a940dfafed0c3fcc3cc9c931a6ce923eb0477b\"" Jan 29 16:05:48.407520 containerd[1961]: time="2025-01-29T16:05:48.407437016Z" level=info msg="CreateContainer within sandbox \"54d6cde4b4055ed870c4cad1a4a940dfafed0c3fcc3cc9c931a6ce923eb0477b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:05:48.426649 containerd[1961]: time="2025-01-29T16:05:48.426564788Z" level=info msg="CreateContainer within sandbox \"370488f67a50e66567dbe36641c6a1840514d92394407ec26fd9171f949e21ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"631d1f643b9fc4f2d9cb5730b1a83c5dfac5de93360d056ba1fd1f3b1f081824\"" Jan 29 16:05:48.428180 containerd[1961]: time="2025-01-29T16:05:48.428005928Z" level=info msg="StartContainer for \"631d1f643b9fc4f2d9cb5730b1a83c5dfac5de93360d056ba1fd1f3b1f081824\"" Jan 29 16:05:48.445752 containerd[1961]: time="2025-01-29T16:05:48.445584512Z" level=info msg="CreateContainer within sandbox 
\"54d6cde4b4055ed870c4cad1a4a940dfafed0c3fcc3cc9c931a6ce923eb0477b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90a0538828e7eabb8b97d2756ca5780ffc7d89cfd35f292c9ae8172c37fd8bee\"" Jan 29 16:05:48.449831 containerd[1961]: time="2025-01-29T16:05:48.448893200Z" level=info msg="StartContainer for \"90a0538828e7eabb8b97d2756ca5780ffc7d89cfd35f292c9ae8172c37fd8bee\"" Jan 29 16:05:48.519107 systemd[1]: Started cri-containerd-631d1f643b9fc4f2d9cb5730b1a83c5dfac5de93360d056ba1fd1f3b1f081824.scope - libcontainer container 631d1f643b9fc4f2d9cb5730b1a83c5dfac5de93360d056ba1fd1f3b1f081824. Jan 29 16:05:48.541103 systemd[1]: Started cri-containerd-90a0538828e7eabb8b97d2756ca5780ffc7d89cfd35f292c9ae8172c37fd8bee.scope - libcontainer container 90a0538828e7eabb8b97d2756ca5780ffc7d89cfd35f292c9ae8172c37fd8bee. Jan 29 16:05:48.639407 containerd[1961]: time="2025-01-29T16:05:48.638789433Z" level=info msg="StartContainer for \"631d1f643b9fc4f2d9cb5730b1a83c5dfac5de93360d056ba1fd1f3b1f081824\" returns successfully" Jan 29 16:05:48.661080 containerd[1961]: time="2025-01-29T16:05:48.660530217Z" level=info msg="StartContainer for \"90a0538828e7eabb8b97d2756ca5780ffc7d89cfd35f292c9ae8172c37fd8bee\" returns successfully" Jan 29 16:05:49.101857 kubelet[3223]: I0129 16:05:49.100995 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n7mff" podStartSLOduration=33.100951244 podStartE2EDuration="33.100951244s" podCreationTimestamp="2025-01-29 16:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:05:49.097976936 +0000 UTC m=+37.617562364" watchObservedRunningTime="2025-01-29 16:05:49.100951244 +0000 UTC m=+37.620536612" Jan 29 16:05:49.660295 systemd[1]: Started sshd@7-172.31.23.241:22-139.178.89.65:35346.service - OpenSSH per-connection server daemon (139.178.89.65:35346). Jan 29 16:05:49.845563 sshd[4865]: Accepted publickey for core from 139.178.89.65 port 35346 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:05:49.848227 sshd-session[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:49.857918 systemd-logind[1933]: New session 8 of user core. Jan 29 16:05:49.861263 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:05:50.128892 sshd[4867]: Connection closed by 139.178.89.65 port 35346 Jan 29 16:05:50.130131 sshd-session[4865]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:50.136968 systemd[1]: sshd@7-172.31.23.241:22-139.178.89.65:35346.service: Deactivated successfully. Jan 29 16:05:50.143789 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:05:50.147160 systemd-logind[1933]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:05:50.149575 systemd-logind[1933]: Removed session 8. Jan 29 16:05:55.171646 systemd[1]: Started sshd@8-172.31.23.241:22-139.178.89.65:50182.service - OpenSSH per-connection server daemon (139.178.89.65:50182). Jan 29 16:05:55.357019 sshd[4880]: Accepted publickey for core from 139.178.89.65 port 50182 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:05:55.359633 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:05:55.369202 systemd-logind[1933]: New session 9 of user core. Jan 29 16:05:55.377079 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 29 16:05:55.630277 sshd[4882]: Connection closed by 139.178.89.65 port 50182 Jan 29 16:05:55.631192 sshd-session[4880]: pam_unix(sshd:session): session closed for user core Jan 29 16:05:55.637661 systemd[1]: sshd@8-172.31.23.241:22-139.178.89.65:50182.service: Deactivated successfully. Jan 29 16:05:55.642047 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:05:55.644736 systemd-logind[1933]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:05:55.646882 systemd-logind[1933]: Removed session 9. Jan 29 16:06:00.674370 systemd[1]: Started sshd@9-172.31.23.241:22-139.178.89.65:50188.service - OpenSSH per-connection server daemon (139.178.89.65:50188). Jan 29 16:06:00.860478 sshd[4895]: Accepted publickey for core from 139.178.89.65 port 50188 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:00.863420 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:00.874189 systemd-logind[1933]: New session 10 of user core. Jan 29 16:06:00.881166 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 16:06:01.134503 sshd[4897]: Connection closed by 139.178.89.65 port 50188 Jan 29 16:06:01.133467 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:01.139629 systemd-logind[1933]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:06:01.141739 systemd[1]: sshd@9-172.31.23.241:22-139.178.89.65:50188.service: Deactivated successfully. Jan 29 16:06:01.147175 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:06:01.151594 systemd-logind[1933]: Removed session 10. Jan 29 16:06:06.171371 systemd[1]: Started sshd@10-172.31.23.241:22-139.178.89.65:35458.service - OpenSSH per-connection server daemon (139.178.89.65:35458). Jan 29 16:06:06.355992 sshd[4911]: Accepted publickey for core from 139.178.89.65 port 35458 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:06.358527 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:06.366912 systemd-logind[1933]: New session 11 of user core. Jan 29 16:06:06.372071 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 16:06:06.637170 sshd[4913]: Connection closed by 139.178.89.65 port 35458 Jan 29 16:06:06.637468 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:06.642509 systemd[1]: sshd@10-172.31.23.241:22-139.178.89.65:35458.service: Deactivated successfully. Jan 29 16:06:06.645572 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:06:06.648962 systemd-logind[1933]: Session 11 logged out. Waiting for processes to exit. Jan 29 16:06:06.652026 systemd-logind[1933]: Removed session 11. Jan 29 16:06:11.683324 systemd[1]: Started sshd@11-172.31.23.241:22-139.178.89.65:48430.service - OpenSSH per-connection server daemon (139.178.89.65:48430). Jan 29 16:06:11.865467 sshd[4927]: Accepted publickey for core from 139.178.89.65 port 48430 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:11.867970 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:11.877537 systemd-logind[1933]: New session 12 of user core. Jan 29 16:06:11.885110 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 16:06:12.143011 sshd[4931]: Connection closed by 139.178.89.65 port 48430 Jan 29 16:06:12.144121 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:12.152380 systemd[1]: sshd@11-172.31.23.241:22-139.178.89.65:48430.service: Deactivated successfully. Jan 29 16:06:12.157315 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:06:12.160203 systemd-logind[1933]: Session 12 logged out. Waiting for processes to exit. Jan 29 16:06:12.162544 systemd-logind[1933]: Removed session 12. Jan 29 16:06:12.184350 systemd[1]: Started sshd@12-172.31.23.241:22-139.178.89.65:48440.service - OpenSSH per-connection server daemon (139.178.89.65:48440). Jan 29 16:06:12.377542 sshd[4944]: Accepted publickey for core from 139.178.89.65 port 48440 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:12.380182 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:12.389438 systemd-logind[1933]: New session 13 of user core. Jan 29 16:06:12.395102 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 16:06:12.716291 sshd[4946]: Connection closed by 139.178.89.65 port 48440 Jan 29 16:06:12.717086 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:12.728859 systemd-logind[1933]: Session 13 logged out. Waiting for processes to exit. Jan 29 16:06:12.730228 systemd[1]: sshd@12-172.31.23.241:22-139.178.89.65:48440.service: Deactivated successfully. Jan 29 16:06:12.739785 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 16:06:12.760772 systemd-logind[1933]: Removed session 13. Jan 29 16:06:12.770516 systemd[1]: Started sshd@13-172.31.23.241:22-139.178.89.65:48450.service - OpenSSH per-connection server daemon (139.178.89.65:48450). Jan 29 16:06:12.961836 sshd[4955]: Accepted publickey for core from 139.178.89.65 port 48450 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:12.964850 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:12.973216 systemd-logind[1933]: New session 14 of user core. Jan 29 16:06:12.979049 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 16:06:13.235625 sshd[4958]: Connection closed by 139.178.89.65 port 48450 Jan 29 16:06:13.236737 sshd-session[4955]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:13.244634 systemd[1]: sshd@13-172.31.23.241:22-139.178.89.65:48450.service: Deactivated successfully. Jan 29 16:06:13.248940 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 16:06:13.251439 systemd-logind[1933]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:06:13.253452 systemd-logind[1933]: Removed session 14. Jan 29 16:06:18.280325 systemd[1]: Started sshd@14-172.31.23.241:22-139.178.89.65:48454.service - OpenSSH per-connection server daemon (139.178.89.65:48454). Jan 29 16:06:18.472557 sshd[4977]: Accepted publickey for core from 139.178.89.65 port 48454 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:18.475068 sshd-session[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:18.484241 systemd-logind[1933]: New session 15 of user core. Jan 29 16:06:18.492081 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 29 16:06:18.728055 sshd[4979]: Connection closed by 139.178.89.65 port 48454 Jan 29 16:06:18.728905 sshd-session[4977]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:18.734661 systemd[1]: sshd@14-172.31.23.241:22-139.178.89.65:48454.service: Deactivated successfully. Jan 29 16:06:18.739600 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 16:06:18.742472 systemd-logind[1933]: Session 15 logged out. Waiting for processes to exit. Jan 29 16:06:18.745192 systemd-logind[1933]: Removed session 15. Jan 29 16:06:23.771313 systemd[1]: Started sshd@15-172.31.23.241:22-139.178.89.65:60070.service - OpenSSH per-connection server daemon (139.178.89.65:60070). Jan 29 16:06:23.955567 sshd[4990]: Accepted publickey for core from 139.178.89.65 port 60070 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:23.958088 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:23.966749 systemd-logind[1933]: New session 16 of user core. Jan 29 16:06:23.973896 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 16:06:24.222880 sshd[4992]: Connection closed by 139.178.89.65 port 60070 Jan 29 16:06:24.223974 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:24.230623 systemd[1]: sshd@15-172.31.23.241:22-139.178.89.65:60070.service: Deactivated successfully. Jan 29 16:06:24.234354 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:06:24.236450 systemd-logind[1933]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:06:24.238544 systemd-logind[1933]: Removed session 16. Jan 29 16:06:29.266317 systemd[1]: Started sshd@16-172.31.23.241:22-139.178.89.65:60072.service - OpenSSH per-connection server daemon (139.178.89.65:60072). Jan 29 16:06:29.458876 sshd[5004]: Accepted publickey for core from 139.178.89.65 port 60072 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:29.461300 sshd-session[5004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:29.469122 systemd-logind[1933]: New session 17 of user core. Jan 29 16:06:29.477051 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 16:06:29.725439 sshd[5006]: Connection closed by 139.178.89.65 port 60072 Jan 29 16:06:29.726270 sshd-session[5004]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:29.732555 systemd[1]: sshd@16-172.31.23.241:22-139.178.89.65:60072.service: Deactivated successfully. Jan 29 16:06:29.736897 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:06:29.738611 systemd-logind[1933]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:06:29.740592 systemd-logind[1933]: Removed session 17. Jan 29 16:06:29.768327 systemd[1]: Started sshd@17-172.31.23.241:22-139.178.89.65:60086.service - OpenSSH per-connection server daemon (139.178.89.65:60086). Jan 29 16:06:29.960332 sshd[5018]: Accepted publickey for core from 139.178.89.65 port 60086 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:29.963043 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:29.971765 systemd-logind[1933]: New session 18 of user core. Jan 29 16:06:29.977108 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 29 16:06:30.273058 sshd[5020]: Connection closed by 139.178.89.65 port 60086 Jan 29 16:06:30.273908 sshd-session[5018]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:30.280533 systemd[1]: sshd@17-172.31.23.241:22-139.178.89.65:60086.service: Deactivated successfully. Jan 29 16:06:30.285868 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:06:30.288127 systemd-logind[1933]: Session 18 logged out. Waiting for processes to exit. Jan 29 16:06:30.290499 systemd-logind[1933]: Removed session 18. Jan 29 16:06:30.316305 systemd[1]: Started sshd@18-172.31.23.241:22-139.178.89.65:60100.service - OpenSSH per-connection server daemon (139.178.89.65:60100). Jan 29 16:06:30.510648 sshd[5029]: Accepted publickey for core from 139.178.89.65 port 60100 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:30.513747 sshd-session[5029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:30.522749 systemd-logind[1933]: New session 19 of user core. Jan 29 16:06:30.531092 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:06:31.697766 sshd[5031]: Connection closed by 139.178.89.65 port 60100 Jan 29 16:06:31.698828 sshd-session[5029]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:31.707398 systemd[1]: sshd@18-172.31.23.241:22-139.178.89.65:60100.service: Deactivated successfully. Jan 29 16:06:31.722230 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:06:31.724260 systemd-logind[1933]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:06:31.754339 systemd[1]: Started sshd@19-172.31.23.241:22-139.178.89.65:33322.service - OpenSSH per-connection server daemon (139.178.89.65:33322). Jan 29 16:06:31.756945 systemd-logind[1933]: Removed session 19. Jan 29 16:06:31.943363 sshd[5047]: Accepted publickey for core from 139.178.89.65 port 33322 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:31.945837 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:31.955289 systemd-logind[1933]: New session 20 of user core. Jan 29 16:06:31.961094 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 16:06:32.453296 sshd[5052]: Connection closed by 139.178.89.65 port 33322 Jan 29 16:06:32.453779 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:32.461018 systemd[1]: sshd@19-172.31.23.241:22-139.178.89.65:33322.service: Deactivated successfully. Jan 29 16:06:32.466144 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 16:06:32.469681 systemd-logind[1933]: Session 20 logged out. Waiting for processes to exit. Jan 29 16:06:32.473416 systemd-logind[1933]: Removed session 20. Jan 29 16:06:32.505224 systemd[1]: Started sshd@20-172.31.23.241:22-139.178.89.65:33328.service - OpenSSH per-connection server daemon (139.178.89.65:33328). Jan 29 16:06:32.706612 sshd[5062]: Accepted publickey for core from 139.178.89.65 port 33328 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:32.708752 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:32.717641 systemd-logind[1933]: New session 21 of user core. Jan 29 16:06:32.728160 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 29 16:06:32.969066 sshd[5065]: Connection closed by 139.178.89.65 port 33328 Jan 29 16:06:32.970138 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:32.977019 systemd[1]: sshd@20-172.31.23.241:22-139.178.89.65:33328.service: Deactivated successfully. Jan 29 16:06:32.981228 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 16:06:32.983037 systemd-logind[1933]: Session 21 logged out. Waiting for processes to exit. Jan 29 16:06:32.984973 systemd-logind[1933]: Removed session 21. Jan 29 16:06:38.011280 systemd[1]: Started sshd@21-172.31.23.241:22-139.178.89.65:33338.service - OpenSSH per-connection server daemon (139.178.89.65:33338). Jan 29 16:06:38.188298 sshd[5076]: Accepted publickey for core from 139.178.89.65 port 33338 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:38.190895 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:38.199293 systemd-logind[1933]: New session 22 of user core. Jan 29 16:06:38.209086 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 16:06:38.440387 sshd[5078]: Connection closed by 139.178.89.65 port 33338 Jan 29 16:06:38.442363 sshd-session[5076]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:38.447409 systemd[1]: sshd@21-172.31.23.241:22-139.178.89.65:33338.service: Deactivated successfully. Jan 29 16:06:38.451785 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 16:06:38.455234 systemd-logind[1933]: Session 22 logged out. Waiting for processes to exit. Jan 29 16:06:38.458069 systemd-logind[1933]: Removed session 22. Jan 29 16:06:43.483318 systemd[1]: Started sshd@22-172.31.23.241:22-139.178.89.65:56860.service - OpenSSH per-connection server daemon (139.178.89.65:56860). Jan 29 16:06:43.674521 sshd[5092]: Accepted publickey for core from 139.178.89.65 port 56860 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:43.677082 sshd-session[5092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:43.685475 systemd-logind[1933]: New session 23 of user core. Jan 29 16:06:43.694116 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 16:06:43.942205 sshd[5094]: Connection closed by 139.178.89.65 port 56860 Jan 29 16:06:43.943075 sshd-session[5092]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:43.951443 systemd[1]: sshd@22-172.31.23.241:22-139.178.89.65:56860.service: Deactivated successfully. Jan 29 16:06:43.957005 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 16:06:43.958457 systemd-logind[1933]: Session 23 logged out. Waiting for processes to exit. Jan 29 16:06:43.960718 systemd-logind[1933]: Removed session 23. Jan 29 16:06:48.984295 systemd[1]: Started sshd@23-172.31.23.241:22-139.178.89.65:56874.service - OpenSSH per-connection server daemon (139.178.89.65:56874). Jan 29 16:06:49.167256 sshd[5109]: Accepted publickey for core from 139.178.89.65 port 56874 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:49.170455 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:49.180239 systemd-logind[1933]: New session 24 of user core. Jan 29 16:06:49.191060 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 29 16:06:49.444063 sshd[5111]: Connection closed by 139.178.89.65 port 56874 Jan 29 16:06:49.445015 sshd-session[5109]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:49.451667 systemd-logind[1933]: Session 24 logged out. Waiting for processes to exit. Jan 29 16:06:49.452473 systemd[1]: sshd@23-172.31.23.241:22-139.178.89.65:56874.service: Deactivated successfully. Jan 29 16:06:49.456717 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 16:06:49.458579 systemd-logind[1933]: Removed session 24. Jan 29 16:06:54.485316 systemd[1]: Started sshd@24-172.31.23.241:22-139.178.89.65:51110.service - OpenSSH per-connection server daemon (139.178.89.65:51110). Jan 29 16:06:54.674536 sshd[5123]: Accepted publickey for core from 139.178.89.65 port 51110 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:54.677060 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:54.686699 systemd-logind[1933]: New session 25 of user core. Jan 29 16:06:54.692085 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 16:06:54.927361 sshd[5125]: Connection closed by 139.178.89.65 port 51110 Jan 29 16:06:54.927240 sshd-session[5123]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:54.933727 systemd[1]: sshd@24-172.31.23.241:22-139.178.89.65:51110.service: Deactivated successfully. Jan 29 16:06:54.938172 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 16:06:54.940506 systemd-logind[1933]: Session 25 logged out. Waiting for processes to exit. Jan 29 16:06:54.942616 systemd-logind[1933]: Removed session 25. Jan 29 16:06:54.975303 systemd[1]: Started sshd@25-172.31.23.241:22-139.178.89.65:51116.service - OpenSSH per-connection server daemon (139.178.89.65:51116). Jan 29 16:06:55.165985 sshd[5137]: Accepted publickey for core from 139.178.89.65 port 51116 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo Jan 29 16:06:55.168402 sshd-session[5137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:06:55.176495 systemd-logind[1933]: New session 26 of user core. Jan 29 16:06:55.184144 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 16:06:57.713950 kubelet[3223]: I0129 16:06:57.713756 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ljtvc" podStartSLOduration=101.712399864 podStartE2EDuration="1m41.712399864s" podCreationTimestamp="2025-01-29 16:05:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:05:49.1681583 +0000 UTC m=+37.687743668" watchObservedRunningTime="2025-01-29 16:06:57.712399864 +0000 UTC m=+106.231985220" Jan 29 16:06:57.762165 systemd[1]: run-containerd-runc-k8s.io-68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123-runc.FZQ0Qf.mount: Deactivated successfully. Jan 29 16:06:57.766976 containerd[1961]: time="2025-01-29T16:06:57.766831241Z" level=info msg="StopContainer for \"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\" with timeout 30 (s)" Jan 29 16:06:57.772191 containerd[1961]: time="2025-01-29T16:06:57.768744917Z" level=info msg="Stop container \"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\" with signal terminated" Jan 29 16:06:57.796771 systemd[1]: cri-containerd-059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f.scope: Deactivated successfully. 
Jan 29 16:06:57.800757 containerd[1961]: time="2025-01-29T16:06:57.800687297Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:06:57.819782 containerd[1961]: time="2025-01-29T16:06:57.819089309Z" level=info msg="StopContainer for \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\" with timeout 2 (s)" Jan 29 16:06:57.820364 containerd[1961]: time="2025-01-29T16:06:57.820253969Z" level=info msg="Stop container \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\" with signal terminated" Jan 29 16:06:57.844931 systemd-networkd[1786]: lxc_health: Link DOWN Jan 29 16:06:57.846863 systemd-networkd[1786]: lxc_health: Lost carrier Jan 29 16:06:57.872605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f-rootfs.mount: Deactivated successfully. Jan 29 16:06:57.876070 systemd[1]: cri-containerd-68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123.scope: Deactivated successfully. Jan 29 16:06:57.876638 systemd[1]: cri-containerd-68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123.scope: Consumed 14.158s CPU time, 124.8M memory peak, 144K read from disk, 12.9M written to disk. Jan 29 16:06:57.900894 containerd[1961]: time="2025-01-29T16:06:57.900731093Z" level=info msg="shim disconnected" id=059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f namespace=k8s.io Jan 29 16:06:57.901530 containerd[1961]: time="2025-01-29T16:06:57.901276649Z" level=warning msg="cleaning up after shim disconnected" id=059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f namespace=k8s.io Jan 29 16:06:57.901530 containerd[1961]: time="2025-01-29T16:06:57.901349513Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:06:57.929225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123-rootfs.mount: Deactivated successfully. 
Jan 29 16:06:57.938859 containerd[1961]: time="2025-01-29T16:06:57.938734649Z" level=info msg="shim disconnected" id=68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123 namespace=k8s.io Jan 29 16:06:57.939109 containerd[1961]: time="2025-01-29T16:06:57.938878193Z" level=warning msg="cleaning up after shim disconnected" id=68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123 namespace=k8s.io Jan 29 16:06:57.939109 containerd[1961]: time="2025-01-29T16:06:57.938925317Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:06:57.940823 containerd[1961]: time="2025-01-29T16:06:57.940587053Z" level=info msg="StopContainer for \"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\" returns successfully" Jan 29 16:06:57.941979 containerd[1961]: time="2025-01-29T16:06:57.941741694Z" level=info msg="StopPodSandbox for \"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\"" Jan 29 16:06:57.942159 containerd[1961]: time="2025-01-29T16:06:57.942117870Z" level=info msg="Container to stop \"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:06:57.950494 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827-shm.mount: Deactivated successfully. Jan 29 16:06:57.964279 systemd[1]: cri-containerd-66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827.scope: Deactivated successfully. Jan 29 16:06:57.993075 containerd[1961]: time="2025-01-29T16:06:57.993020586Z" level=info msg="StopContainer for \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\" returns successfully" Jan 29 16:06:57.994005 containerd[1961]: time="2025-01-29T16:06:57.993944286Z" level=info msg="StopPodSandbox for \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\"" Jan 29 16:06:57.994220 containerd[1961]: time="2025-01-29T16:06:57.994188342Z" level=info msg="Container to stop \"b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:06:57.994358 containerd[1961]: time="2025-01-29T16:06:57.994328010Z" level=info msg="Container to stop \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:06:57.994598 containerd[1961]: time="2025-01-29T16:06:57.994443906Z" level=info msg="Container to stop \"e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:06:57.994773 containerd[1961]: time="2025-01-29T16:06:57.994739154Z" level=info msg="Container to stop \"bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:06:57.994949 containerd[1961]: time="2025-01-29T16:06:57.994913670Z" level=info msg="Container to stop \"3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 16:06:58.010455 systemd[1]: cri-containerd-d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5.scope: Deactivated successfully. 
Jan 29 16:06:58.031292 containerd[1961]: time="2025-01-29T16:06:58.031173050Z" level=info msg="shim disconnected" id=66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827 namespace=k8s.io Jan 29 16:06:58.031705 containerd[1961]: time="2025-01-29T16:06:58.031568114Z" level=warning msg="cleaning up after shim disconnected" id=66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827 namespace=k8s.io Jan 29 16:06:58.032051 containerd[1961]: time="2025-01-29T16:06:58.031893290Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:06:58.067708 containerd[1961]: time="2025-01-29T16:06:58.067510370Z" level=info msg="shim disconnected" id=d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5 namespace=k8s.io Jan 29 16:06:58.067708 containerd[1961]: time="2025-01-29T16:06:58.067588910Z" level=warning msg="cleaning up after shim disconnected" id=d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5 namespace=k8s.io Jan 29 16:06:58.067708 containerd[1961]: time="2025-01-29T16:06:58.067610870Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:06:58.069323 containerd[1961]: time="2025-01-29T16:06:58.068940674Z" level=info msg="TearDown network for sandbox \"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\" successfully" Jan 29 16:06:58.069323 containerd[1961]: time="2025-01-29T16:06:58.069245198Z" level=info msg="StopPodSandbox for \"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\" returns successfully" Jan 29 16:06:58.099039 containerd[1961]: time="2025-01-29T16:06:58.098518682Z" level=info msg="TearDown network for sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" successfully" Jan 29 16:06:58.099039 containerd[1961]: time="2025-01-29T16:06:58.098584934Z" level=info msg="StopPodSandbox for \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" returns successfully" Jan 29 16:06:58.193521 kubelet[3223]: I0129 16:06:58.192339 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-xtables-lock\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.193521 kubelet[3223]: I0129 16:06:58.192420 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56162a04-30c2-4a10-8c9d-bf059cd76252-cilium-config-path\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.193521 kubelet[3223]: I0129 16:06:58.192457 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-cilium-run\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.193521 kubelet[3223]: I0129 16:06:58.192458 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:06:58.193521 kubelet[3223]: I0129 16:06:58.192521 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:06:58.193521 kubelet[3223]: I0129 16:06:58.192491 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-etc-cni-netd\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.194134 kubelet[3223]: I0129 16:06:58.192580 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b18610d7-2a21-4355-823a-2848ce68a094-cilium-config-path\") pod \"b18610d7-2a21-4355-823a-2848ce68a094\" (UID: \"b18610d7-2a21-4355-823a-2848ce68a094\") " Jan 29 16:06:58.194134 kubelet[3223]: I0129 16:06:58.192620 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-host-proc-sys-net\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.194134 kubelet[3223]: I0129 16:06:58.192659 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56162a04-30c2-4a10-8c9d-bf059cd76252-clustermesh-secrets\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.194134 kubelet[3223]: I0129 16:06:58.192697 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjszq\" (UniqueName: \"kubernetes.io/projected/b18610d7-2a21-4355-823a-2848ce68a094-kube-api-access-pjszq\") pod \"b18610d7-2a21-4355-823a-2848ce68a094\" (UID: \"b18610d7-2a21-4355-823a-2848ce68a094\") " Jan 29 16:06:58.194134 kubelet[3223]: I0129 16:06:58.192732 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-bpf-maps\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.194134 kubelet[3223]: I0129 16:06:58.192765 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-hostproc\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.194441 kubelet[3223]: I0129 16:06:58.192832 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8z64\" (UniqueName: \"kubernetes.io/projected/56162a04-30c2-4a10-8c9d-bf059cd76252-kube-api-access-r8z64\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.194441 kubelet[3223]: I0129 16:06:58.192874 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-host-proc-sys-kernel\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.194441 kubelet[3223]: I0129 16:06:58.192913 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-cilium-cgroup\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.194441 kubelet[3223]: I0129 16:06:58.192954 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56162a04-30c2-4a10-8c9d-bf059cd76252-hubble-tls\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.194441 kubelet[3223]: I0129 16:06:58.192987 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-cni-path\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.194441 kubelet[3223]: I0129 16:06:58.193021 3223 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-lib-modules\") pod \"56162a04-30c2-4a10-8c9d-bf059cd76252\" (UID: \"56162a04-30c2-4a10-8c9d-bf059cd76252\") " Jan 29 16:06:58.194743 kubelet[3223]: I0129 16:06:58.193095 3223 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-xtables-lock\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.194743 kubelet[3223]: I0129 16:06:58.193119 3223 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-etc-cni-netd\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.194743 kubelet[3223]: I0129 16:06:58.193156 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:06:58.199727 kubelet[3223]: I0129 16:06:58.199171 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18610d7-2a21-4355-823a-2848ce68a094-kube-api-access-pjszq" (OuterVolumeSpecName: "kube-api-access-pjszq") pod "b18610d7-2a21-4355-823a-2848ce68a094" (UID: "b18610d7-2a21-4355-823a-2848ce68a094"). InnerVolumeSpecName "kube-api-access-pjszq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 16:06:58.199727 kubelet[3223]: I0129 16:06:58.199267 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:06:58.199727 kubelet[3223]: I0129 16:06:58.199308 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-hostproc" (OuterVolumeSpecName: "hostproc") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:06:58.200177 kubelet[3223]: I0129 16:06:58.200139 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:06:58.202178 kubelet[3223]: I0129 16:06:58.202129 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:06:58.205047 kubelet[3223]: I0129 16:06:58.202559 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:06:58.205047 kubelet[3223]: I0129 16:06:58.203074 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:06:58.208565 kubelet[3223]: I0129 16:06:58.208505 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56162a04-30c2-4a10-8c9d-bf059cd76252-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 29 16:06:58.211486 kubelet[3223]: I0129 16:06:58.208778 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-cni-path" (OuterVolumeSpecName: "cni-path") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 29 16:06:58.211675 kubelet[3223]: I0129 16:06:58.209668 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56162a04-30c2-4a10-8c9d-bf059cd76252-kube-api-access-r8z64" (OuterVolumeSpecName: "kube-api-access-r8z64") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "kube-api-access-r8z64". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 16:06:58.212678 kubelet[3223]: I0129 16:06:58.212589 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56162a04-30c2-4a10-8c9d-bf059cd76252-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 16:06:58.216453 kubelet[3223]: I0129 16:06:58.216394 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b18610d7-2a21-4355-823a-2848ce68a094-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b18610d7-2a21-4355-823a-2848ce68a094" (UID: "b18610d7-2a21-4355-823a-2848ce68a094"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 29 16:06:58.218237 kubelet[3223]: I0129 16:06:58.218178 3223 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56162a04-30c2-4a10-8c9d-bf059cd76252-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "56162a04-30c2-4a10-8c9d-bf059cd76252" (UID: "56162a04-30c2-4a10-8c9d-bf059cd76252"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 29 16:06:58.272950 kubelet[3223]: I0129 16:06:58.272899 3223 scope.go:117] "RemoveContainer" containerID="68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123" Jan 29 16:06:58.276954 containerd[1961]: time="2025-01-29T16:06:58.276454671Z" level=info msg="RemoveContainer for \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\"" Jan 29 16:06:58.288851 containerd[1961]: time="2025-01-29T16:06:58.288219735Z" level=info msg="RemoveContainer for \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\" returns successfully" Jan 29 16:06:58.290520 kubelet[3223]: I0129 16:06:58.290469 3223 scope.go:117] "RemoveContainer" containerID="b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9" Jan 29 16:06:58.293881 kubelet[3223]: I0129 16:06:58.293846 3223 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56162a04-30c2-4a10-8c9d-bf059cd76252-cilium-config-path\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.294635 kubelet[3223]: I0129 16:06:58.294551 3223 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-cilium-run\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.294751 kubelet[3223]: I0129 16:06:58.294596 3223 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b18610d7-2a21-4355-823a-2848ce68a094-cilium-config-path\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.294751 kubelet[3223]: I0129 16:06:58.294732 3223 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-host-proc-sys-net\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.294913 kubelet[3223]: I0129 16:06:58.294756 3223 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56162a04-30c2-4a10-8c9d-bf059cd76252-clustermesh-secrets\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 
16:06:58.294913 kubelet[3223]: I0129 16:06:58.294846 3223 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-bpf-maps\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.294913 kubelet[3223]: I0129 16:06:58.294871 3223 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-hostproc\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.295075 kubelet[3223]: I0129 16:06:58.294917 3223 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r8z64\" (UniqueName: \"kubernetes.io/projected/56162a04-30c2-4a10-8c9d-bf059cd76252-kube-api-access-r8z64\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.295075 kubelet[3223]: I0129 16:06:58.294944 3223 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-host-proc-sys-kernel\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.295075 kubelet[3223]: I0129 16:06:58.294967 3223 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pjszq\" (UniqueName: \"kubernetes.io/projected/b18610d7-2a21-4355-823a-2848ce68a094-kube-api-access-pjszq\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.295075 kubelet[3223]: I0129 16:06:58.295018 3223 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-cilium-cgroup\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.295075 kubelet[3223]: I0129 16:06:58.295047 3223 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56162a04-30c2-4a10-8c9d-bf059cd76252-hubble-tls\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.295330 kubelet[3223]: I0129 16:06:58.295093 3223 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-cni-path\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.295330 kubelet[3223]: I0129 16:06:58.295117 3223 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56162a04-30c2-4a10-8c9d-bf059cd76252-lib-modules\") on node \"ip-172-31-23-241\" DevicePath \"\"" Jan 29 16:06:58.299356 containerd[1961]: time="2025-01-29T16:06:58.296936967Z" level=info msg="RemoveContainer for \"b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9\"" Jan 29 16:06:58.298025 systemd[1]: Removed slice kubepods-besteffort-podb18610d7_2a21_4355_823a_2848ce68a094.slice - libcontainer container kubepods-besteffort-podb18610d7_2a21_4355_823a_2848ce68a094.slice. 
Jan 29 16:06:58.306920 containerd[1961]: time="2025-01-29T16:06:58.305482671Z" level=info msg="RemoveContainer for \"b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9\" returns successfully" Jan 29 16:06:58.307201 kubelet[3223]: I0129 16:06:58.305865 3223 scope.go:117] "RemoveContainer" containerID="3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737" Jan 29 16:06:58.308852 containerd[1961]: time="2025-01-29T16:06:58.308208591Z" level=info msg="RemoveContainer for \"3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737\"" Jan 29 16:06:58.311371 systemd[1]: Removed slice kubepods-burstable-pod56162a04_30c2_4a10_8c9d_bf059cd76252.slice - libcontainer container kubepods-burstable-pod56162a04_30c2_4a10_8c9d_bf059cd76252.slice. Jan 29 16:06:58.311602 systemd[1]: kubepods-burstable-pod56162a04_30c2_4a10_8c9d_bf059cd76252.slice: Consumed 14.307s CPU time, 125.2M memory peak, 144K read from disk, 12.9M written to disk. Jan 29 16:06:58.318186 containerd[1961]: time="2025-01-29T16:06:58.318040623Z" level=info msg="RemoveContainer for \"3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737\" returns successfully" Jan 29 16:06:58.322573 kubelet[3223]: I0129 16:06:58.322424 3223 scope.go:117] "RemoveContainer" containerID="bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460" Jan 29 16:06:58.331385 containerd[1961]: time="2025-01-29T16:06:58.330720147Z" level=info msg="RemoveContainer for \"bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460\"" Jan 29 16:06:58.338294 containerd[1961]: time="2025-01-29T16:06:58.338160207Z" level=info msg="RemoveContainer for \"bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460\" returns successfully" Jan 29 16:06:58.339138 kubelet[3223]: I0129 16:06:58.338629 3223 scope.go:117] "RemoveContainer" containerID="e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9" Jan 29 16:06:58.342443 containerd[1961]: time="2025-01-29T16:06:58.342143787Z" level=info msg="RemoveContainer for \"e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9\"" Jan 29 16:06:58.348933 containerd[1961]: time="2025-01-29T16:06:58.348015280Z" level=info msg="RemoveContainer for \"e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9\" returns successfully" Jan 29 16:06:58.350960 kubelet[3223]: I0129 16:06:58.349396 3223 scope.go:117] "RemoveContainer" containerID="68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123" Jan 29 16:06:58.351610 containerd[1961]: time="2025-01-29T16:06:58.351464164Z" level=error msg="ContainerStatus for \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\": not found" Jan 29 16:06:58.352208 kubelet[3223]: E0129 16:06:58.352161 3223 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\": not found" containerID="68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123" Jan 29 16:06:58.353908 kubelet[3223]: I0129 16:06:58.352340 3223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123"} err="failed to get container status \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\": rpc 
error: code = NotFound desc = an error occurred when try to find container \"68151d523f0e1d8fdde1f57340852d379946029a312fcbfd669d91f7ca919123\": not found" Jan 29 16:06:58.354678 kubelet[3223]: I0129 16:06:58.354640 3223 scope.go:117] "RemoveContainer" containerID="b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9" Jan 29 16:06:58.355555 containerd[1961]: time="2025-01-29T16:06:58.355492504Z" level=error msg="ContainerStatus for \"b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9\": not found" Jan 29 16:06:58.356115 kubelet[3223]: E0129 16:06:58.356072 3223 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9\": not found" containerID="b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9" Jan 29 16:06:58.356441 kubelet[3223]: I0129 16:06:58.356299 3223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9"} err="failed to get container status \"b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"b78442e9d6d51e44537775b66b3f769a08ba34d8670bab60f3a85d02aa2ab7b9\": not found" Jan 29 16:06:58.356441 kubelet[3223]: I0129 16:06:58.356346 3223 scope.go:117] "RemoveContainer" containerID="3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737" Jan 29 16:06:58.357392 containerd[1961]: time="2025-01-29T16:06:58.356911696Z" level=error msg="ContainerStatus for \"3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737\": not found" Jan 29 16:06:58.357538 kubelet[3223]: E0129 16:06:58.357176 3223 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737\": not found" containerID="3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737" Jan 29 16:06:58.357538 kubelet[3223]: I0129 16:06:58.357241 3223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737"} err="failed to get container status \"3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b09fb00294c0d4b2b78a8b7fed9b1dbb5fbf4f91b9394ad761cddd470a70737\": not found" Jan 29 16:06:58.357538 kubelet[3223]: I0129 16:06:58.357277 3223 scope.go:117] "RemoveContainer" containerID="bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460" Jan 29 16:06:58.357713 containerd[1961]: time="2025-01-29T16:06:58.357599428Z" level=error msg="ContainerStatus for \"bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460\": not found" Jan 29 16:06:58.358160 kubelet[3223]: E0129 16:06:58.357863 
3223 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460\": not found" containerID="bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460" Jan 29 16:06:58.358160 kubelet[3223]: I0129 16:06:58.357929 3223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460"} err="failed to get container status \"bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc3fce2570b489d70e2aa48655eb952763bf650268fda982884a59b4b68ad460\": not found" Jan 29 16:06:58.358160 kubelet[3223]: I0129 16:06:58.357978 3223 scope.go:117] "RemoveContainer" containerID="e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9" Jan 29 16:06:58.358415 containerd[1961]: time="2025-01-29T16:06:58.358340572Z" level=error msg="ContainerStatus for \"e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9\": not found" Jan 29 16:06:58.358607 kubelet[3223]: E0129 16:06:58.358550 3223 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9\": not found" containerID="e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9" Jan 29 16:06:58.358607 kubelet[3223]: I0129 16:06:58.358590 3223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9"} err="failed to get container status \"e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8ce4850e9089361af251908d2dd98aa9a9fef075fed0233489fbd8abe4027d9\": not found" Jan 29 16:06:58.358883 kubelet[3223]: I0129 16:06:58.358622 3223 scope.go:117] "RemoveContainer" containerID="059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f" Jan 29 16:06:58.360737 containerd[1961]: time="2025-01-29T16:06:58.360675436Z" level=info msg="RemoveContainer for \"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\"" Jan 29 16:06:58.368581 containerd[1961]: time="2025-01-29T16:06:58.368495476Z" level=info msg="RemoveContainer for \"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\" returns successfully" Jan 29 16:06:58.369054 kubelet[3223]: I0129 16:06:58.368871 3223 scope.go:117] "RemoveContainer" containerID="059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f" Jan 29 16:06:58.369312 containerd[1961]: time="2025-01-29T16:06:58.369238324Z" level=error msg="ContainerStatus for \"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\": not found" Jan 29 16:06:58.369612 kubelet[3223]: E0129 16:06:58.369468 3223 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\": not found" containerID="059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f" Jan 29 16:06:58.369612 kubelet[3223]: I0129 16:06:58.369512 3223 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f"} err="failed to get container status \"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\": rpc error: code = NotFound desc = an error occurred when try to find container \"059103da835776d564dcc631f1d86027eafac73b66d9562db008ff97f1e6a12f\": not found" Jan 29 16:06:58.749545 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5-rootfs.mount: Deactivated successfully. Jan 29 16:06:58.749730 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5-shm.mount: Deactivated successfully. Jan 29 16:06:58.749915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827-rootfs.mount: Deactivated successfully. Jan 29 16:06:58.750085 systemd[1]: var-lib-kubelet-pods-56162a04\x2d30c2\x2d4a10\x2d8c9d\x2dbf059cd76252-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 16:06:58.750228 systemd[1]: var-lib-kubelet-pods-b18610d7\x2d2a21\x2d4355\x2d823a\x2d2848ce68a094-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpjszq.mount: Deactivated successfully. Jan 29 16:06:58.750367 systemd[1]: var-lib-kubelet-pods-56162a04\x2d30c2\x2d4a10\x2d8c9d\x2dbf059cd76252-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr8z64.mount: Deactivated successfully. Jan 29 16:06:58.750503 systemd[1]: var-lib-kubelet-pods-56162a04\x2d30c2\x2d4a10\x2d8c9d\x2dbf059cd76252-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 16:06:59.661830 sshd[5139]: Connection closed by 139.178.89.65 port 51116 Jan 29 16:06:59.662754 sshd-session[5137]: pam_unix(sshd:session): session closed for user core Jan 29 16:06:59.668593 systemd[1]: sshd@25-172.31.23.241:22-139.178.89.65:51116.service: Deactivated successfully. Jan 29 16:06:59.673861 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 16:06:59.674874 systemd[1]: session-26.scope: Consumed 1.788s CPU time, 23.4M memory peak. Jan 29 16:06:59.677655 systemd-logind[1933]: Session 26 logged out. Waiting for processes to exit. Jan 29 16:06:59.680306 systemd-logind[1933]: Removed session 26. Jan 29 16:06:59.711277 systemd[1]: Started sshd@26-172.31.23.241:22-139.178.89.65:51118.service - OpenSSH per-connection server daemon (139.178.89.65:51118). 
Jan 29 16:06:59.821843 kubelet[3223]: I0129 16:06:59.820391 3223 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56162a04-30c2-4a10-8c9d-bf059cd76252" path="/var/lib/kubelet/pods/56162a04-30c2-4a10-8c9d-bf059cd76252/volumes"
Jan 29 16:06:59.821843 kubelet[3223]: I0129 16:06:59.821733 3223 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b18610d7-2a21-4355-823a-2848ce68a094" path="/var/lib/kubelet/pods/b18610d7-2a21-4355-823a-2848ce68a094/volumes"
Jan 29 16:06:59.893201 sshd[5299]: Accepted publickey for core from 139.178.89.65 port 51118 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo
Jan 29 16:06:59.895661 sshd-session[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:06:59.904782 systemd-logind[1933]: New session 27 of user core.
Jan 29 16:06:59.910089 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 16:07:00.388500 ntpd[1926]: Deleting interface #12 lxc_health, fe80::8451:54ff:fe78:1f3f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs
Jan 29 16:07:00.389027 ntpd[1926]: 29 Jan 16:07:00 ntpd[1926]: Deleting interface #12 lxc_health, fe80::8451:54ff:fe78:1f3f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs
Jan 29 16:07:01.164986 sshd[5301]: Connection closed by 139.178.89.65 port 51118
Jan 29 16:07:01.166116 sshd-session[5299]: pam_unix(sshd:session): session closed for user core
Jan 29 16:07:01.177476 systemd[1]: sshd@26-172.31.23.241:22-139.178.89.65:51118.service: Deactivated successfully.
Jan 29 16:07:01.185873 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 16:07:01.192191 kubelet[3223]: I0129 16:07:01.189078 3223 memory_manager.go:355] "RemoveStaleState removing state" podUID="b18610d7-2a21-4355-823a-2848ce68a094" containerName="cilium-operator"
Jan 29 16:07:01.192191 kubelet[3223]: I0129 16:07:01.189121 3223 memory_manager.go:355] "RemoveStaleState removing state" podUID="56162a04-30c2-4a10-8c9d-bf059cd76252" containerName="cilium-agent"
Jan 29 16:07:01.190115 systemd[1]: session-27.scope: Consumed 1.065s CPU time, 23.5M memory peak.
Jan 29 16:07:01.196702 systemd-logind[1933]: Session 27 logged out. Waiting for processes to exit.
Jan 29 16:07:01.236941 systemd[1]: Started sshd@27-172.31.23.241:22-139.178.89.65:40732.service - OpenSSH per-connection server daemon (139.178.89.65:40732).
Jan 29 16:07:01.243633 systemd-logind[1933]: Removed session 27.
Jan 29 16:07:01.276348 systemd[1]: Created slice kubepods-burstable-pod79875698_860d_4e83_83c6_042aa8b70c95.slice - libcontainer container kubepods-burstable-pod79875698_860d_4e83_83c6_042aa8b70c95.slice.
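The .mount unit names deactivated at 16:06:58 above are systemd-escaped filesystem paths: the leading '/' is dropped, the remaining '/' separators become '-', and reserved bytes are hex-escaped, so '-' shows up as \x2d and '~' as \x7e (the same output systemd-escape --path produces). A simplified sketch of that encoding in Go, sufficient to reproduce the unit names in this log but not the complete systemd algorithm:

package main

import (
	"fmt"
	"strings"
)

// escapePath mimics systemd's path escaping closely enough to reproduce the
// .mount unit names above: strip the leading '/', turn the remaining '/'
// separators into '-', and hex-escape bytes outside [a-zA-Z0-9:_.].
// This is a simplified sketch, not the full systemd implementation.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Volume path from the orphaned-pod cleanup above.
	p := "/var/lib/kubelet/pods/56162a04-30c2-4a10-8c9d-bf059cd76252/volumes/kubernetes.io~secret/clustermesh-secrets"
	fmt.Println(escapePath(p) + ".mount")
}

Running it on the clustermesh-secrets volume path yields the var-lib-kubelet-pods-56162a04\x2d...clustermesh\x2dsecrets.mount name that systemd deactivates above.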
Jan 29 16:07:01.315666 kubelet[3223]: I0129 16:07:01.315424 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/79875698-860d-4e83-83c6-042aa8b70c95-cilium-ipsec-secrets\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.315666 kubelet[3223]: I0129 16:07:01.315604 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79875698-860d-4e83-83c6-042aa8b70c95-cilium-run\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.317426 kubelet[3223]: I0129 16:07:01.315879 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79875698-860d-4e83-83c6-042aa8b70c95-cni-path\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.317426 kubelet[3223]: I0129 16:07:01.315939 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79875698-860d-4e83-83c6-042aa8b70c95-cilium-cgroup\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.317426 kubelet[3223]: I0129 16:07:01.315979 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79875698-860d-4e83-83c6-042aa8b70c95-etc-cni-netd\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.317426 kubelet[3223]: I0129 16:07:01.316018 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79875698-860d-4e83-83c6-042aa8b70c95-cilium-config-path\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.317426 kubelet[3223]: I0129 16:07:01.316054 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79875698-860d-4e83-83c6-042aa8b70c95-hubble-tls\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.317426 kubelet[3223]: I0129 16:07:01.316089 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slgqx\" (UniqueName: \"kubernetes.io/projected/79875698-860d-4e83-83c6-042aa8b70c95-kube-api-access-slgqx\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.317758 kubelet[3223]: I0129 16:07:01.316156 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79875698-860d-4e83-83c6-042aa8b70c95-hostproc\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.317758 kubelet[3223]: I0129 16:07:01.316196 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79875698-860d-4e83-83c6-042aa8b70c95-xtables-lock\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.317758 kubelet[3223]: I0129 16:07:01.316231 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79875698-860d-4e83-83c6-042aa8b70c95-clustermesh-secrets\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.317758 kubelet[3223]: I0129 16:07:01.316272 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79875698-860d-4e83-83c6-042aa8b70c95-bpf-maps\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.317758 kubelet[3223]: I0129 16:07:01.316308 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79875698-860d-4e83-83c6-042aa8b70c95-lib-modules\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.317758 kubelet[3223]: I0129 16:07:01.316356 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79875698-860d-4e83-83c6-042aa8b70c95-host-proc-sys-net\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.318701 kubelet[3223]: I0129 16:07:01.316392 3223 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79875698-860d-4e83-83c6-042aa8b70c95-host-proc-sys-kernel\") pod \"cilium-q2kgs\" (UID: \"79875698-860d-4e83-83c6-042aa8b70c95\") " pod="kube-system/cilium-q2kgs"
Jan 29 16:07:01.477054 sshd[5311]: Accepted publickey for core from 139.178.89.65 port 40732 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo
Jan 29 16:07:01.484181 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:07:01.501538 systemd-logind[1933]: New session 28 of user core.
Jan 29 16:07:01.510110 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 16:07:01.589455 containerd[1961]: time="2025-01-29T16:07:01.589386032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2kgs,Uid:79875698-860d-4e83-83c6-042aa8b70c95,Namespace:kube-system,Attempt:0,}"
Jan 29 16:07:01.636326 containerd[1961]: time="2025-01-29T16:07:01.635972216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:07:01.636326 containerd[1961]: time="2025-01-29T16:07:01.636171056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:07:01.636326 containerd[1961]: time="2025-01-29T16:07:01.636231200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:07:01.637001 containerd[1961]: time="2025-01-29T16:07:01.636840584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:07:01.645596 sshd[5321]: Connection closed by 139.178.89.65 port 40732
Jan 29 16:07:01.648241 sshd-session[5311]: pam_unix(sshd:session): session closed for user core
Jan 29 16:07:01.654757 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 16:07:01.657427 systemd[1]: sshd@27-172.31.23.241:22-139.178.89.65:40732.service: Deactivated successfully.
Jan 29 16:07:01.668049 systemd-logind[1933]: Session 28 logged out. Waiting for processes to exit.
Jan 29 16:07:01.693115 systemd[1]: Started cri-containerd-bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772.scope - libcontainer container bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772.
Jan 29 16:07:01.696569 systemd[1]: Started sshd@28-172.31.23.241:22-139.178.89.65:40734.service - OpenSSH per-connection server daemon (139.178.89.65:40734).
Jan 29 16:07:01.700571 systemd-logind[1933]: Removed session 28.
Jan 29 16:07:01.748937 containerd[1961]: time="2025-01-29T16:07:01.748341068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q2kgs,Uid:79875698-860d-4e83-83c6-042aa8b70c95,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772\""
Jan 29 16:07:01.757223 containerd[1961]: time="2025-01-29T16:07:01.757140752Z" level=info msg="CreateContainer within sandbox \"bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 16:07:01.782701 containerd[1961]: time="2025-01-29T16:07:01.782607057Z" level=info msg="CreateContainer within sandbox \"bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7b1617de9e70c8ab311309fb3e1abf37ed18e1d6b6726223de02d22086a83a4e\""
Jan 29 16:07:01.783794 containerd[1961]: time="2025-01-29T16:07:01.783733701Z" level=info msg="StartContainer for \"7b1617de9e70c8ab311309fb3e1abf37ed18e1d6b6726223de02d22086a83a4e\""
Jan 29 16:07:01.837126 systemd[1]: Started cri-containerd-7b1617de9e70c8ab311309fb3e1abf37ed18e1d6b6726223de02d22086a83a4e.scope - libcontainer container 7b1617de9e70c8ab311309fb3e1abf37ed18e1d6b6726223de02d22086a83a4e.
Jan 29 16:07:01.890585 containerd[1961]: time="2025-01-29T16:07:01.890487381Z" level=info msg="StartContainer for \"7b1617de9e70c8ab311309fb3e1abf37ed18e1d6b6726223de02d22086a83a4e\" returns successfully"
Jan 29 16:07:01.911032 systemd[1]: cri-containerd-7b1617de9e70c8ab311309fb3e1abf37ed18e1d6b6726223de02d22086a83a4e.scope: Deactivated successfully.
Jan 29 16:07:01.926139 sshd[5356]: Accepted publickey for core from 139.178.89.65 port 40734 ssh2: RSA SHA256:p0zN5Ay/t+n+pcpkWsttHCw95i2kqVoS6Ap9zWCihDo
Jan 29 16:07:01.930654 sshd-session[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:07:01.942995 systemd-logind[1933]: New session 29 of user core.
Jan 29 16:07:01.949080 systemd[1]: Started session-29.scope - Session 29 of User core.
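The containerd lines above trace the standard CRI sequence the kubelet drives for every new pod: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox, and StartContainer runs the result, with systemd .scope units tracking each container's cgroup. A condensed sketch of the same three calls, reusing ctx and the rt client from the ContainerStatus sketch earlier; the image reference and log directory are assumptions, since the log does not record them:

// startPod sketches the RunPodSandbox -> CreateContainer -> StartContainer
// chain visible above. Real kubelet requests carry far more configuration
// (mounts, env, labels, linux settings); this is pared to the minimum.
func startPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-q2kgs",
			Namespace: "kube-system",
			Uid:       "79875698-860d-4e83-83c6-042aa8b70c95",
			Attempt:   0,
		},
		// Conventional kubelet layout; assumed, not taken from the log.
		LogDirectory: "/var/log/pods/kube-system_cilium-q2kgs_79875698-860d-4e83-83c6-042aa8b70c95",
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.16"}, // hypothetical tag
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return err
}

Each call returns the ID consumed by the next one, which is exactly the chain of IDs (bfc674..., then 7b1617...) threading through the log lines above.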
Jan 29 16:07:01.985765 containerd[1961]: time="2025-01-29T16:07:01.985529266Z" level=info msg="shim disconnected" id=7b1617de9e70c8ab311309fb3e1abf37ed18e1d6b6726223de02d22086a83a4e namespace=k8s.io
Jan 29 16:07:01.985765 containerd[1961]: time="2025-01-29T16:07:01.985618414Z" level=warning msg="cleaning up after shim disconnected" id=7b1617de9e70c8ab311309fb3e1abf37ed18e1d6b6726223de02d22086a83a4e namespace=k8s.io
Jan 29 16:07:01.985765 containerd[1961]: time="2025-01-29T16:07:01.985638814Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:07:02.102351 kubelet[3223]: E0129 16:07:02.102229 3223 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 16:07:02.305851 containerd[1961]: time="2025-01-29T16:07:02.305483479Z" level=info msg="CreateContainer within sandbox \"bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 16:07:02.326893 containerd[1961]: time="2025-01-29T16:07:02.326831047Z" level=info msg="CreateContainer within sandbox \"bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5df40bdc874d8e366a221219f52b09a6cbf00aa23a313f383dc7d62ccf75d66b\""
Jan 29 16:07:02.329199 containerd[1961]: time="2025-01-29T16:07:02.329019307Z" level=info msg="StartContainer for \"5df40bdc874d8e366a221219f52b09a6cbf00aa23a313f383dc7d62ccf75d66b\""
Jan 29 16:07:02.378173 systemd[1]: Started cri-containerd-5df40bdc874d8e366a221219f52b09a6cbf00aa23a313f383dc7d62ccf75d66b.scope - libcontainer container 5df40bdc874d8e366a221219f52b09a6cbf00aa23a313f383dc7d62ccf75d66b.
Jan 29 16:07:02.430002 containerd[1961]: time="2025-01-29T16:07:02.427931084Z" level=info msg="StartContainer for \"5df40bdc874d8e366a221219f52b09a6cbf00aa23a313f383dc7d62ccf75d66b\" returns successfully"
Jan 29 16:07:02.451630 systemd[1]: cri-containerd-5df40bdc874d8e366a221219f52b09a6cbf00aa23a313f383dc7d62ccf75d66b.scope: Deactivated successfully.
Jan 29 16:07:02.487417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5df40bdc874d8e366a221219f52b09a6cbf00aa23a313f383dc7d62ccf75d66b-rootfs.mount: Deactivated successfully.
Jan 29 16:07:02.501563 containerd[1961]: time="2025-01-29T16:07:02.501466604Z" level=info msg="shim disconnected" id=5df40bdc874d8e366a221219f52b09a6cbf00aa23a313f383dc7d62ccf75d66b namespace=k8s.io
Jan 29 16:07:02.501563 containerd[1961]: time="2025-01-29T16:07:02.501544280Z" level=warning msg="cleaning up after shim disconnected" id=5df40bdc874d8e366a221219f52b09a6cbf00aa23a313f383dc7d62ccf75d66b namespace=k8s.io
Jan 29 16:07:02.501563 containerd[1961]: time="2025-01-29T16:07:02.501564536Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:07:02.816841 kubelet[3223]: E0129 16:07:02.816759 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-ljtvc" podUID="7dad2964-6d0d-40d5-9217-1bf54733a303"
Jan 29 16:07:03.307961 containerd[1961]: time="2025-01-29T16:07:03.307532552Z" level=info msg="CreateContainer within sandbox \"bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 16:07:03.356406 containerd[1961]: time="2025-01-29T16:07:03.356325248Z" level=info msg="CreateContainer within sandbox \"bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6aebe0dae6176a11d0cd0de47ba0745e905c82be1569390dc07e013a2c10209c\""
Jan 29 16:07:03.358337 containerd[1961]: time="2025-01-29T16:07:03.358261220Z" level=info msg="StartContainer for \"6aebe0dae6176a11d0cd0de47ba0745e905c82be1569390dc07e013a2c10209c\""
Jan 29 16:07:03.410124 systemd[1]: Started cri-containerd-6aebe0dae6176a11d0cd0de47ba0745e905c82be1569390dc07e013a2c10209c.scope - libcontainer container 6aebe0dae6176a11d0cd0de47ba0745e905c82be1569390dc07e013a2c10209c.
Jan 29 16:07:03.472304 containerd[1961]: time="2025-01-29T16:07:03.472224081Z" level=info msg="StartContainer for \"6aebe0dae6176a11d0cd0de47ba0745e905c82be1569390dc07e013a2c10209c\" returns successfully"
Jan 29 16:07:03.477323 systemd[1]: cri-containerd-6aebe0dae6176a11d0cd0de47ba0745e905c82be1569390dc07e013a2c10209c.scope: Deactivated successfully.
Jan 29 16:07:03.518462 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aebe0dae6176a11d0cd0de47ba0745e905c82be1569390dc07e013a2c10209c-rootfs.mount: Deactivated successfully.
Jan 29 16:07:03.531541 containerd[1961]: time="2025-01-29T16:07:03.531462945Z" level=info msg="shim disconnected" id=6aebe0dae6176a11d0cd0de47ba0745e905c82be1569390dc07e013a2c10209c namespace=k8s.io
Jan 29 16:07:03.531541 containerd[1961]: time="2025-01-29T16:07:03.531537645Z" level=warning msg="cleaning up after shim disconnected" id=6aebe0dae6176a11d0cd0de47ba0745e905c82be1569390dc07e013a2c10209c namespace=k8s.io
Jan 29 16:07:03.532141 containerd[1961]: time="2025-01-29T16:07:03.531559725Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:07:04.315718 containerd[1961]: time="2025-01-29T16:07:04.315438489Z" level=info msg="CreateContainer within sandbox \"bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 16:07:04.347983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767563737.mount: Deactivated successfully.
Jan 29 16:07:04.359672 containerd[1961]: time="2025-01-29T16:07:04.359617317Z" level=info msg="CreateContainer within sandbox \"bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8735ca96e7778e0947aee8b709c812692fbbe15da8dcf78b5df84bdf2d95f4ff\""
Jan 29 16:07:04.363746 containerd[1961]: time="2025-01-29T16:07:04.363699081Z" level=info msg="StartContainer for \"8735ca96e7778e0947aee8b709c812692fbbe15da8dcf78b5df84bdf2d95f4ff\""
Jan 29 16:07:04.414128 systemd[1]: Started cri-containerd-8735ca96e7778e0947aee8b709c812692fbbe15da8dcf78b5df84bdf2d95f4ff.scope - libcontainer container 8735ca96e7778e0947aee8b709c812692fbbe15da8dcf78b5df84bdf2d95f4ff.
Jan 29 16:07:04.460672 systemd[1]: cri-containerd-8735ca96e7778e0947aee8b709c812692fbbe15da8dcf78b5df84bdf2d95f4ff.scope: Deactivated successfully.
Jan 29 16:07:04.466240 containerd[1961]: time="2025-01-29T16:07:04.466049950Z" level=info msg="StartContainer for \"8735ca96e7778e0947aee8b709c812692fbbe15da8dcf78b5df84bdf2d95f4ff\" returns successfully"
Jan 29 16:07:04.504206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8735ca96e7778e0947aee8b709c812692fbbe15da8dcf78b5df84bdf2d95f4ff-rootfs.mount: Deactivated successfully.
Jan 29 16:07:04.513981 containerd[1961]: time="2025-01-29T16:07:04.513889786Z" level=info msg="shim disconnected" id=8735ca96e7778e0947aee8b709c812692fbbe15da8dcf78b5df84bdf2d95f4ff namespace=k8s.io
Jan 29 16:07:04.513981 containerd[1961]: time="2025-01-29T16:07:04.513977470Z" level=warning msg="cleaning up after shim disconnected" id=8735ca96e7778e0947aee8b709c812692fbbe15da8dcf78b5df84bdf2d95f4ff namespace=k8s.io
Jan 29 16:07:04.514289 containerd[1961]: time="2025-01-29T16:07:04.514001218Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:07:04.684668 kubelet[3223]: I0129 16:07:04.683649 3223 setters.go:602] "Node became not ready" node="ip-172-31-23-241" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:07:04Z","lastTransitionTime":"2025-01-29T16:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 16:07:04.817001 kubelet[3223]: E0129 16:07:04.816905 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-ljtvc" podUID="7dad2964-6d0d-40d5-9217-1bf54733a303"
Jan 29 16:07:05.322328 containerd[1961]: time="2025-01-29T16:07:05.322130650Z" level=info msg="CreateContainer within sandbox \"bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 16:07:05.357752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount179034001.mount: Deactivated successfully.
Jan 29 16:07:05.364109 containerd[1961]: time="2025-01-29T16:07:05.364042798Z" level=info msg="CreateContainer within sandbox \"bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1a170e3cfdde163bd8d3c9d9b714f40016db705191caa8aa8ba80d5f00329e7e\""
Jan 29 16:07:05.365329 containerd[1961]: time="2025-01-29T16:07:05.365256298Z" level=info msg="StartContainer for \"1a170e3cfdde163bd8d3c9d9b714f40016db705191caa8aa8ba80d5f00329e7e\""
Jan 29 16:07:05.416099 systemd[1]: Started cri-containerd-1a170e3cfdde163bd8d3c9d9b714f40016db705191caa8aa8ba80d5f00329e7e.scope - libcontainer container 1a170e3cfdde163bd8d3c9d9b714f40016db705191caa8aa8ba80d5f00329e7e.
Jan 29 16:07:05.474399 containerd[1961]: time="2025-01-29T16:07:05.474324671Z" level=info msg="StartContainer for \"1a170e3cfdde163bd8d3c9d9b714f40016db705191caa8aa8ba80d5f00329e7e\" returns successfully"
Jan 29 16:07:06.300852 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 16:07:06.816646 kubelet[3223]: E0129 16:07:06.816548 3223 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-ljtvc" podUID="7dad2964-6d0d-40d5-9217-1bf54733a303"
Jan 29 16:07:10.455865 systemd-networkd[1786]: lxc_health: Link UP
Jan 29 16:07:10.469880 (udev-worker)[6143]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 16:07:10.471554 systemd-networkd[1786]: lxc_health: Gained carrier
Jan 29 16:07:10.695404 systemd[1]: run-containerd-runc-k8s.io-1a170e3cfdde163bd8d3c9d9b714f40016db705191caa8aa8ba80d5f00329e7e-runc.ccG33o.mount: Deactivated successfully.
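mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state are cilium's short-lived init containers, which is why each .scope deactivates moments after it starts; cilium-agent at 16:07:05 is the long-running container, and the lxc_health interface coming up at 16:07:10 belongs to the agent's health-check endpoint. The sequence can be reconstructed after the fact by listing every container in the sandbox, exited ones included (they remain visible until garbage-collected). A sketch, reusing ctx, rt, and the imports from the first sketch, with the sandbox ID copied from the log:

// listSandboxContainers prints every container recorded in the cilium-q2kgs
// sandbox, including the exited init containers.
func listSandboxContainers(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			// Sandbox ID from the RunPodSandbox line above.
			PodSandboxId: "bfc674e86aae8923833f2867567e823f886a055c7aa8133b52ba616b91c6d772",
		},
	})
	if err != nil {
		return err
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-24s attempt=%d state=%s\n",
			c.GetMetadata().GetName(), c.GetMetadata().GetAttempt(), c.GetState())
	}
	return nil
}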
Jan 29 16:07:11.633529 kubelet[3223]: I0129 16:07:11.633415 3223 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q2kgs" podStartSLOduration=10.63339111 podStartE2EDuration="10.63339111s" podCreationTimestamp="2025-01-29 16:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:07:06.361004291 +0000 UTC m=+114.880589647" watchObservedRunningTime="2025-01-29 16:07:11.63339111 +0000 UTC m=+120.152976442"
Jan 29 16:07:11.778044 containerd[1961]: time="2025-01-29T16:07:11.777979986Z" level=info msg="StopPodSandbox for \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\""
Jan 29 16:07:11.778637 containerd[1961]: time="2025-01-29T16:07:11.778127202Z" level=info msg="TearDown network for sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" successfully"
Jan 29 16:07:11.778637 containerd[1961]: time="2025-01-29T16:07:11.778150710Z" level=info msg="StopPodSandbox for \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" returns successfully"
Jan 29 16:07:11.781025 containerd[1961]: time="2025-01-29T16:07:11.780219030Z" level=info msg="RemovePodSandbox for \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\""
Jan 29 16:07:11.781025 containerd[1961]: time="2025-01-29T16:07:11.780273846Z" level=info msg="Forcibly stopping sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\""
Jan 29 16:07:11.781025 containerd[1961]: time="2025-01-29T16:07:11.780387234Z" level=info msg="TearDown network for sandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" successfully"
Jan 29 16:07:11.789887 containerd[1961]: time="2025-01-29T16:07:11.789598314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:07:11.789887 containerd[1961]: time="2025-01-29T16:07:11.789695562Z" level=info msg="RemovePodSandbox \"d46e5ddca43a5cecdef7135189df3500f4d2a6247fab6262f1b402de6bef60b5\" returns successfully"
Jan 29 16:07:11.790466 containerd[1961]: time="2025-01-29T16:07:11.790418790Z" level=info msg="StopPodSandbox for \"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\""
Jan 29 16:07:11.790584 containerd[1961]: time="2025-01-29T16:07:11.790543590Z" level=info msg="TearDown network for sandbox \"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\" successfully"
Jan 29 16:07:11.790652 containerd[1961]: time="2025-01-29T16:07:11.790575822Z" level=info msg="StopPodSandbox for \"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\" returns successfully"
Jan 29 16:07:11.793733 containerd[1961]: time="2025-01-29T16:07:11.791769150Z" level=info msg="RemovePodSandbox for \"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\""
Jan 29 16:07:11.793733 containerd[1961]: time="2025-01-29T16:07:11.791864406Z" level=info msg="Forcibly stopping sandbox \"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\""
Jan 29 16:07:11.793733 containerd[1961]: time="2025-01-29T16:07:11.791970198Z" level=info msg="TearDown network for sandbox \"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\" successfully"
Jan 29 16:07:11.800698 containerd[1961]: time="2025-01-29T16:07:11.800637882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:07:11.800969 containerd[1961]: time="2025-01-29T16:07:11.800937318Z" level=info msg="RemovePodSandbox \"66d6f9ea8d222fb64accf9cec4992817846b23e800cd02dcd8baa5c37d6e0827\" returns successfully"
Jan 29 16:07:11.816128 systemd-networkd[1786]: lxc_health: Gained IPv6LL
Jan 29 16:07:14.388569 ntpd[1926]: Listen normally on 15 lxc_health [fe80::a8e5:afff:fe11:bc34%14]:123
Jan 29 16:07:14.389202 ntpd[1926]: 29 Jan 16:07:14 ntpd[1926]: Listen normally on 15 lxc_health [fe80::a8e5:afff:fe11:bc34%14]:123
Jan 29 16:07:15.349525 systemd[1]: run-containerd-runc-k8s.io-1a170e3cfdde163bd8d3c9d9b714f40016db705191caa8aa8ba80d5f00329e7e-runc.u5iT2I.mount: Deactivated successfully.
Jan 29 16:07:15.460168 kubelet[3223]: E0129 16:07:15.460096 3223 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43122->127.0.0.1:37059: write tcp 127.0.0.1:43122->127.0.0.1:37059: write: broken pipe
Jan 29 16:07:17.720483 systemd[1]: run-containerd-runc-k8s.io-1a170e3cfdde163bd8d3c9d9b714f40016db705191caa8aa8ba80d5f00329e7e-runc.4IRgqN.mount: Deactivated successfully.
Jan 29 16:07:17.853364 sshd[5421]: Connection closed by 139.178.89.65 port 40734
Jan 29 16:07:17.853239 sshd-session[5356]: pam_unix(sshd:session): session closed for user core
Jan 29 16:07:17.861426 systemd-logind[1933]: Session 29 logged out. Waiting for processes to exit.
Jan 29 16:07:17.865688 systemd[1]: sshd@28-172.31.23.241:22-139.178.89.65:40734.service: Deactivated successfully.
Jan 29 16:07:17.872224 systemd[1]: session-29.scope: Deactivated successfully.
Jan 29 16:07:17.875640 systemd-logind[1933]: Removed session 29.
Jan 29 16:07:31.477633 systemd[1]: cri-containerd-8c88e92ece21529f8fb727b75f18844b643994ea6cf0503f07ce2a94a0cfe42c.scope: Deactivated successfully.
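Note the pattern in the sandbox teardown above: the status lookup for each sandbox fails with not found, containerd logs a warning and emits the event with a nil status, yet RemovePodSandbox still returns successfully. Stop and remove are effectively idempotent, so cleanup can be replayed safely. A sketch of a caller leaning on that behavior, reusing ctx, rt, and the status/codes imports from the first sketch:

// removeSandbox stops and removes a sandbox, treating NotFound as success,
// mirroring the forcible-removal path logged above.
func removeSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	_, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id})
	if err != nil && status.Code(err) != codes.NotFound {
		return err
	}
	_, err = rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id})
	if status.Code(err) == codes.NotFound {
		return nil // already gone: the same outcome the log reports as success
	}
	return err
}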
Jan 29 16:07:31.478821 systemd[1]: cri-containerd-8c88e92ece21529f8fb727b75f18844b643994ea6cf0503f07ce2a94a0cfe42c.scope: Consumed 5.168s CPU time, 57.9M memory peak.
Jan 29 16:07:31.523337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c88e92ece21529f8fb727b75f18844b643994ea6cf0503f07ce2a94a0cfe42c-rootfs.mount: Deactivated successfully.
Jan 29 16:07:31.543265 containerd[1961]: time="2025-01-29T16:07:31.543166440Z" level=info msg="shim disconnected" id=8c88e92ece21529f8fb727b75f18844b643994ea6cf0503f07ce2a94a0cfe42c namespace=k8s.io
Jan 29 16:07:31.543265 containerd[1961]: time="2025-01-29T16:07:31.543260076Z" level=warning msg="cleaning up after shim disconnected" id=8c88e92ece21529f8fb727b75f18844b643994ea6cf0503f07ce2a94a0cfe42c namespace=k8s.io
Jan 29 16:07:31.544180 containerd[1961]: time="2025-01-29T16:07:31.543281028Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:07:32.402622 kubelet[3223]: I0129 16:07:32.402360 3223 scope.go:117] "RemoveContainer" containerID="8c88e92ece21529f8fb727b75f18844b643994ea6cf0503f07ce2a94a0cfe42c"
Jan 29 16:07:32.407136 containerd[1961]: time="2025-01-29T16:07:32.407062765Z" level=info msg="CreateContainer within sandbox \"ff6905b3fa70a768f4a1316a540f06555dd9b227c6253f82a5cff5c63893e34c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 29 16:07:32.439336 containerd[1961]: time="2025-01-29T16:07:32.439204645Z" level=info msg="CreateContainer within sandbox \"ff6905b3fa70a768f4a1316a540f06555dd9b227c6253f82a5cff5c63893e34c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"09aa001ba24c3631d20b35d582cd55483885aac2130beb08670b7194e04d330f\""
Jan 29 16:07:32.440909 containerd[1961]: time="2025-01-29T16:07:32.440031925Z" level=info msg="StartContainer for \"09aa001ba24c3631d20b35d582cd55483885aac2130beb08670b7194e04d330f\""
Jan 29 16:07:32.502342 systemd[1]: Started cri-containerd-09aa001ba24c3631d20b35d582cd55483885aac2130beb08670b7194e04d330f.scope - libcontainer container 09aa001ba24c3631d20b35d582cd55483885aac2130beb08670b7194e04d330f.
Jan 29 16:07:32.589518 containerd[1961]: time="2025-01-29T16:07:32.588476126Z" level=info msg="StartContainer for \"09aa001ba24c3631d20b35d582cd55483885aac2130beb08670b7194e04d330f\" returns successfully"
Jan 29 16:07:34.574339 kubelet[3223]: E0129 16:07:34.573434 3223 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.241:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-241?timeout=10s\": context deadline exceeded"
Jan 29 16:07:36.706047 systemd[1]: cri-containerd-01e8a9abffb7f70a9f13e61a4cf48da2554aa9a5c47a506ed73062eae0b36574.scope: Deactivated successfully.
Jan 29 16:07:36.706575 systemd[1]: cri-containerd-01e8a9abffb7f70a9f13e61a4cf48da2554aa9a5c47a506ed73062eae0b36574.scope: Consumed 4.633s CPU time, 22.5M memory peak.
Jan 29 16:07:36.749883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01e8a9abffb7f70a9f13e61a4cf48da2554aa9a5c47a506ed73062eae0b36574-rootfs.mount: Deactivated successfully.
Jan 29 16:07:36.762953 containerd[1961]: time="2025-01-29T16:07:36.762877398Z" level=info msg="shim disconnected" id=01e8a9abffb7f70a9f13e61a4cf48da2554aa9a5c47a506ed73062eae0b36574 namespace=k8s.io
Jan 29 16:07:36.764046 containerd[1961]: time="2025-01-29T16:07:36.763394646Z" level=warning msg="cleaning up after shim disconnected" id=01e8a9abffb7f70a9f13e61a4cf48da2554aa9a5c47a506ed73062eae0b36574 namespace=k8s.io
Jan 29 16:07:36.764046 containerd[1961]: time="2025-01-29T16:07:36.763423434Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:07:37.420355 kubelet[3223]: I0129 16:07:37.420313 3223 scope.go:117] "RemoveContainer" containerID="01e8a9abffb7f70a9f13e61a4cf48da2554aa9a5c47a506ed73062eae0b36574"
Jan 29 16:07:37.423352 containerd[1961]: time="2025-01-29T16:07:37.423145506Z" level=info msg="CreateContainer within sandbox \"66ab943257f5772f34c433f2b72cb3bbf378ff2741c92c9bb3031a4d38bee6bc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 29 16:07:37.455131 containerd[1961]: time="2025-01-29T16:07:37.455012550Z" level=info msg="CreateContainer within sandbox \"66ab943257f5772f34c433f2b72cb3bbf378ff2741c92c9bb3031a4d38bee6bc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ff8162666af09b4b1340c4c6831255f8f33725054df261af22b7079fb963199d\""
Jan 29 16:07:37.456712 containerd[1961]: time="2025-01-29T16:07:37.456291846Z" level=info msg="StartContainer for \"ff8162666af09b4b1340c4c6831255f8f33725054df261af22b7079fb963199d\""
Jan 29 16:07:37.513122 systemd[1]: Started cri-containerd-ff8162666af09b4b1340c4c6831255f8f33725054df261af22b7079fb963199d.scope - libcontainer container ff8162666af09b4b1340c4c6831255f8f33725054df261af22b7079fb963199d.
Jan 29 16:07:37.577732 containerd[1961]: time="2025-01-29T16:07:37.577576518Z" level=info msg="StartContainer for \"ff8162666af09b4b1340c4c6831255f8f33725054df261af22b7079fb963199d\" returns successfully"
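The final entries show the kubelet recovering the kube-controller-manager and kube-scheduler static pods after their containers exited: it removes each dead container's record, then creates a replacement in the same long-lived sandbox with the attempt counter bumped from 0 to 1, which is the Attempt:1 in the ContainerMetadata above. A sketch of that recreate path, reusing ctx and rt from the earlier sketches; the image tag is a placeholder, not taken from the log:

// restartStaticPodContainer sketches the remove-then-recreate step above.
// The attempt counter distinguishes the replacement from the dead container
// while both records still exist in the runtime.
func restartStaticPodContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandboxID, deadID string, sandboxCfg *runtimeapi.PodSandboxConfig) error {
	// Drop the exited container's record first, as the kubelet does above.
	if _, err := rt.RemoveContainer(ctx, &runtimeapi.RemoveContainerRequest{ContainerId: deadID}); err != nil {
		return err
	}
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			// Attempt 1: second incarnation of kube-scheduler in this sandbox.
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler", Attempt: 1},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.32.0"}, // hypothetical tag
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return err
}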