Jan 30 13:11:24.186337 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 30 13:11:24.186385 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 30 13:11:24.186411 kernel: KASLR disabled due to lack of seed
Jan 30 13:11:24.186428 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:11:24.186444 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Jan 30 13:11:24.186460 kernel: secureboot: Secure boot disabled
Jan 30 13:11:24.186477 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:11:24.186494 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 30 13:11:24.186509 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 30 13:11:24.186525 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 30 13:11:24.186546 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 30 13:11:24.186563 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 30 13:11:24.186578 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 30 13:11:24.186594 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 30 13:11:24.186613 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 30 13:11:24.186634 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 30 13:11:24.186651 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 30 13:11:24.186668 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 30 13:11:24.186684 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 30 13:11:24.186701 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 30 13:11:24.186717 kernel: printk: bootconsole [uart0] enabled
Jan 30 13:11:24.186733 kernel: NUMA: Failed to initialise from firmware
Jan 30 13:11:24.186750 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 30 13:11:24.186767 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 30 13:11:24.186783 kernel: Zone ranges:
Jan 30 13:11:24.186800 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 30 13:11:24.186821 kernel: DMA32 empty
Jan 30 13:11:24.186838 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 30 13:11:24.186854 kernel: Movable zone start for each node
Jan 30 13:11:24.186870 kernel: Early memory node ranges
Jan 30 13:11:24.186887 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 30 13:11:24.186906 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 30 13:11:24.186923 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 30 13:11:24.186940 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 30 13:11:24.186957 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 30 13:11:24.186975 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 30 13:11:24.186993 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 30 13:11:24.187011 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 30 13:11:24.187034 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 30 13:11:24.187053 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 30 13:11:24.187078 kernel: psci: probing for conduit method from ACPI.
Jan 30 13:11:24.187097 kernel: psci: PSCIv1.0 detected in firmware.
Jan 30 13:11:24.187115 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 13:11:24.187138 kernel: psci: Trusted OS migration not required
Jan 30 13:11:24.187235 kernel: psci: SMC Calling Convention v1.1
Jan 30 13:11:24.187260 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 13:11:24.187278 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 13:11:24.187296 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 13:11:24.187313 kernel: Detected PIPT I-cache on CPU0
Jan 30 13:11:24.187330 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 13:11:24.187347 kernel: CPU features: detected: Spectre-v2
Jan 30 13:11:24.187364 kernel: CPU features: detected: Spectre-v3a
Jan 30 13:11:24.187381 kernel: CPU features: detected: Spectre-BHB
Jan 30 13:11:24.187398 kernel: CPU features: detected: ARM erratum 1742098
Jan 30 13:11:24.187415 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 30 13:11:24.187440 kernel: alternatives: applying boot alternatives
Jan 30 13:11:24.187459 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:11:24.187478 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:11:24.187496 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:11:24.187513 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:11:24.187530 kernel: Fallback order for Node 0: 0
Jan 30 13:11:24.187547 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 30 13:11:24.187564 kernel: Policy zone: Normal
Jan 30 13:11:24.187581 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:11:24.187598 kernel: software IO TLB: area num 2.
Jan 30 13:11:24.187620 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 30 13:11:24.187638 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved)
Jan 30 13:11:24.187655 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:11:24.187672 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:11:24.187690 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:11:24.187708 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:11:24.187725 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:11:24.187743 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:11:24.187760 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:11:24.187777 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:11:24.187794 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 13:11:24.187816 kernel: GICv3: 96 SPIs implemented
Jan 30 13:11:24.187833 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 13:11:24.187850 kernel: Root IRQ handler: gic_handle_irq
Jan 30 13:11:24.187867 kernel: GICv3: GICv3 features: 16 PPIs
Jan 30 13:11:24.187884 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 30 13:11:24.187901 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 30 13:11:24.187918 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 13:11:24.187936 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 13:11:24.187954 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 30 13:11:24.187971 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 30 13:11:24.187988 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 30 13:11:24.188006 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:11:24.188028 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 30 13:11:24.188046 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 30 13:11:24.188063 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 30 13:11:24.188081 kernel: Console: colour dummy device 80x25
Jan 30 13:11:24.188099 kernel: printk: console [tty1] enabled
Jan 30 13:11:24.188116 kernel: ACPI: Core revision 20230628
Jan 30 13:11:24.188134 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 30 13:11:24.188152 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:11:24.188193 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:11:24.188212 kernel: landlock: Up and running.
Jan 30 13:11:24.188236 kernel: SELinux: Initializing.
Jan 30 13:11:24.188254 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:11:24.188272 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:11:24.188290 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:11:24.188307 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:11:24.188325 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:11:24.188342 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:11:24.188360 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 30 13:11:24.188381 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 30 13:11:24.188399 kernel: Remapping and enabling EFI services.
Jan 30 13:11:24.188417 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:11:24.188434 kernel: Detected PIPT I-cache on CPU1
Jan 30 13:11:24.188451 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 30 13:11:24.188469 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 30 13:11:24.188486 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 30 13:11:24.188503 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:11:24.188520 kernel: SMP: Total of 2 processors activated.
Jan 30 13:11:24.188537 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 13:11:24.188560 kernel: CPU features: detected: 32-bit EL1 Support
Jan 30 13:11:24.188578 kernel: CPU features: detected: CRC32 instructions
Jan 30 13:11:24.188606 kernel: CPU: All CPU(s) started at EL1
Jan 30 13:11:24.188629 kernel: alternatives: applying system-wide alternatives
Jan 30 13:11:24.188646 kernel: devtmpfs: initialized
Jan 30 13:11:24.188665 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:11:24.188683 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:11:24.188701 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:11:24.188719 kernel: SMBIOS 3.0.0 present.
Jan 30 13:11:24.188742 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 30 13:11:24.188782 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:11:24.188804 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:11:24.188823 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:11:24.188842 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:11:24.188860 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:11:24.188878 kernel: audit: type=2000 audit(0.222:1): state=initialized audit_enabled=0 res=1
Jan 30 13:11:24.188902 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:11:24.188921 kernel: cpuidle: using governor menu
Jan 30 13:11:24.188939 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:11:24.188957 kernel: ASID allocator initialised with 65536 entries
Jan 30 13:11:24.188975 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:11:24.188994 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:11:24.189012 kernel: Modules: 17360 pages in range for non-PLT usage
Jan 30 13:11:24.189030 kernel: Modules: 508880 pages in range for PLT usage
Jan 30 13:11:24.189048 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:11:24.189071 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:11:24.189090 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:11:24.189109 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:11:24.189127 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:11:24.189146 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:11:24.189193 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:11:24.189216 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:11:24.189235 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:11:24.189254 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:11:24.189282 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:11:24.189301 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:11:24.189320 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:11:24.189339 kernel: ACPI: Interpreter enabled
Jan 30 13:11:24.189357 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:11:24.189375 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 13:11:24.189394 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 30 13:11:24.189724 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:11:24.189944 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 13:11:24.190185 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 13:11:24.190415 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 30 13:11:24.190616 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 30 13:11:24.190641 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 30 13:11:24.190660 kernel: acpiphp: Slot [1] registered
Jan 30 13:11:24.190679 kernel: acpiphp: Slot [2] registered
Jan 30 13:11:24.190697 kernel: acpiphp: Slot [3] registered
Jan 30 13:11:24.190723 kernel: acpiphp: Slot [4] registered
Jan 30 13:11:24.190741 kernel: acpiphp: Slot [5] registered
Jan 30 13:11:24.190760 kernel: acpiphp: Slot [6] registered
Jan 30 13:11:24.190778 kernel: acpiphp: Slot [7] registered
Jan 30 13:11:24.190795 kernel: acpiphp: Slot [8] registered
Jan 30 13:11:24.190814 kernel: acpiphp: Slot [9] registered
Jan 30 13:11:24.190832 kernel: acpiphp: Slot [10] registered
Jan 30 13:11:24.190852 kernel: acpiphp: Slot [11] registered
Jan 30 13:11:24.190871 kernel: acpiphp: Slot [12] registered
Jan 30 13:11:24.190891 kernel: acpiphp: Slot [13] registered
Jan 30 13:11:24.190917 kernel: acpiphp: Slot [14] registered
Jan 30 13:11:24.190936 kernel: acpiphp: Slot [15] registered
Jan 30 13:11:24.190955 kernel: acpiphp: Slot [16] registered
Jan 30 13:11:24.190973 kernel: acpiphp: Slot [17] registered
Jan 30 13:11:24.190992 kernel: acpiphp: Slot [18] registered
Jan 30 13:11:24.191010 kernel: acpiphp: Slot [19] registered
Jan 30 13:11:24.191029 kernel: acpiphp: Slot [20] registered
Jan 30 13:11:24.191047 kernel: acpiphp: Slot [21] registered
Jan 30 13:11:24.191065 kernel: acpiphp: Slot [22] registered
Jan 30 13:11:24.191089 kernel: acpiphp: Slot [23] registered
Jan 30 13:11:24.191108 kernel: acpiphp: Slot [24] registered
Jan 30 13:11:24.191127 kernel: acpiphp: Slot [25] registered
Jan 30 13:11:24.191145 kernel: acpiphp: Slot [26] registered
Jan 30 13:11:24.191559 kernel: acpiphp: Slot [27] registered
Jan 30 13:11:24.191584 kernel: acpiphp: Slot [28] registered
Jan 30 13:11:24.191602 kernel: acpiphp: Slot [29] registered
Jan 30 13:11:24.191621 kernel: acpiphp: Slot [30] registered
Jan 30 13:11:24.191639 kernel: acpiphp: Slot [31] registered
Jan 30 13:11:24.191657 kernel: PCI host bridge to bus 0000:00
Jan 30 13:11:24.191921 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 30 13:11:24.192115 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 13:11:24.192336 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 30 13:11:24.192529 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 30 13:11:24.192794 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 30 13:11:24.193027 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 30 13:11:24.194145 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 30 13:11:24.194441 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 30 13:11:24.194659 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 30 13:11:24.194877 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 13:11:24.195280 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 30 13:11:24.195566 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 30 13:11:24.195776 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 30 13:11:24.195995 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 30 13:11:24.196238 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 13:11:24.196450 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 30 13:11:24.196664 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 30 13:11:24.196899 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 30 13:11:24.197111 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 30 13:11:24.197355 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 30 13:11:24.197564 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 30 13:11:24.197752 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 13:11:24.197939 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 30 13:11:24.197966 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 13:11:24.197986 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 13:11:24.198020 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 13:11:24.198041 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 13:11:24.198059 kernel: iommu: Default domain type: Translated
Jan 30 13:11:24.198085 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:11:24.198103 kernel: efivars: Registered efivars operations
Jan 30 13:11:24.198122 kernel: vgaarb: loaded
Jan 30 13:11:24.198140 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:11:24.198282 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:11:24.198308 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:11:24.198327 kernel: pnp: PnP ACPI init
Jan 30 13:11:24.198562 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 30 13:11:24.198599 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 13:11:24.198618 kernel: NET: Registered PF_INET protocol family
Jan 30 13:11:24.198637 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:11:24.198656 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:11:24.198675 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:11:24.198694 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:11:24.198712 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:11:24.198730 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:11:24.198749 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:11:24.198773 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:11:24.198793 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:11:24.198811 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:11:24.198830 kernel: kvm [1]: HYP mode not available
Jan 30 13:11:24.198849 kernel: Initialise system trusted keyrings
Jan 30 13:11:24.198868 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:11:24.198886 kernel: Key type asymmetric registered
Jan 30 13:11:24.198904 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:11:24.198922 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:11:24.198945 kernel: io scheduler mq-deadline registered
Jan 30 13:11:24.198964 kernel: io scheduler kyber registered
Jan 30 13:11:24.198982 kernel: io scheduler bfq registered
Jan 30 13:11:24.201338 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 30 13:11:24.201389 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 13:11:24.201409 kernel: ACPI: button: Power Button [PWRB]
Jan 30 13:11:24.201429 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 30 13:11:24.201447 kernel: ACPI: button: Sleep Button [SLPB]
Jan 30 13:11:24.201476 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:11:24.201496 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 30 13:11:24.201738 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 30 13:11:24.201766 kernel: printk: console [ttyS0] disabled
Jan 30 13:11:24.201786 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 30 13:11:24.201805 kernel: printk: console [ttyS0] enabled
Jan 30 13:11:24.201823 kernel: printk: bootconsole [uart0] disabled
Jan 30 13:11:24.201842 kernel: thunder_xcv, ver 1.0
Jan 30 13:11:24.201860 kernel: thunder_bgx, ver 1.0
Jan 30 13:11:24.201878 kernel: nicpf, ver 1.0
Jan 30 13:11:24.201902 kernel: nicvf, ver 1.0
Jan 30 13:11:24.202115 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 13:11:24.202343 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:11:23 UTC (1738242683)
Jan 30 13:11:24.202550 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:11:24.202573 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 30 13:11:24.202592 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 13:11:24.202610 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 13:11:24.202637 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:11:24.202656 kernel: Segment Routing with IPv6
Jan 30 13:11:24.202674 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:11:24.202692 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:11:24.202710 kernel: Key type dns_resolver registered
Jan 30 13:11:24.202729 kernel: registered taskstats version 1
Jan 30 13:11:24.202747 kernel: Loading compiled-in X.509 certificates
Jan 30 13:11:24.202765 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 30 13:11:24.202783 kernel: Key type .fscrypt registered
Jan 30 13:11:24.202801 kernel: Key type fscrypt-provisioning registered
Jan 30 13:11:24.202825 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:11:24.202843 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:11:24.202861 kernel: ima: No architecture policies found
Jan 30 13:11:24.202879 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 13:11:24.202897 kernel: clk: Disabling unused clocks
Jan 30 13:11:24.202915 kernel: Freeing unused kernel memory: 39936K
Jan 30 13:11:24.202933 kernel: Run /init as init process
Jan 30 13:11:24.202952 kernel: with arguments:
Jan 30 13:11:24.202970 kernel: /init
Jan 30 13:11:24.202992 kernel: with environment:
Jan 30 13:11:24.203011 kernel: HOME=/
Jan 30 13:11:24.203030 kernel: TERM=linux
Jan 30 13:11:24.203048 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:11:24.203070 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:11:24.203094 systemd[1]: Detected virtualization amazon.
Jan 30 13:11:24.203114 systemd[1]: Detected architecture arm64.
Jan 30 13:11:24.203138 systemd[1]: Running in initrd.
Jan 30 13:11:24.205219 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:11:24.205271 systemd[1]: Hostname set to .
Jan 30 13:11:24.205294 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:11:24.205315 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:11:24.205335 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:11:24.205356 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:11:24.205377 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:11:24.205411 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:11:24.205432 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:11:24.205453 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:11:24.205477 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:11:24.205499 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:11:24.205519 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:11:24.205539 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:11:24.205564 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:11:24.205585 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:11:24.205605 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:11:24.205625 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:11:24.205646 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:11:24.205666 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:11:24.205686 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:11:24.205706 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:11:24.205727 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:11:24.205754 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:11:24.205774 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:11:24.205794 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:11:24.205814 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:11:24.205833 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:11:24.205853 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:11:24.205873 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:11:24.205892 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:11:24.205917 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:11:24.205937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:11:24.205957 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:11:24.205977 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:11:24.205997 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:11:24.206069 systemd-journald[251]: Collecting audit messages is disabled.
Jan 30 13:11:24.206120 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:11:24.206141 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:11:24.206187 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:11:24.206263 systemd-journald[251]: Journal started
Jan 30 13:11:24.206313 systemd-journald[251]: Runtime Journal (/run/log/journal/ec26f725fafb4d001b50451360351225) is 8.0M, max 75.3M, 67.3M free.
Jan 30 13:11:24.206390 kernel: Bridge firewalling registered
Jan 30 13:11:24.154182 systemd-modules-load[252]: Inserted module 'overlay'
Jan 30 13:11:24.208333 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 30 13:11:24.220820 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:11:24.223195 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:11:24.232580 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:11:24.234084 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:11:24.242458 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:11:24.256626 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:11:24.261426 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:11:24.299649 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:11:24.307855 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:11:24.314258 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:11:24.317228 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:11:24.332477 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:11:24.346395 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:11:24.377898 dracut-cmdline[287]: dracut-dracut-053
Jan 30 13:11:24.384412 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:11:24.425707 systemd-resolved[288]: Positive Trust Anchors:
Jan 30 13:11:24.425743 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:11:24.425805 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:11:24.518199 kernel: SCSI subsystem initialized
Jan 30 13:11:24.525268 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:11:24.538367 kernel: iscsi: registered transport (tcp)
Jan 30 13:11:24.560200 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:11:24.560280 kernel: QLogic iSCSI HBA Driver
Jan 30 13:11:24.663557 kernel: random: crng init done
Jan 30 13:11:24.663409 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jan 30 13:11:24.666817 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:11:24.685027 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:11:24.694078 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:11:24.703487 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:11:24.744278 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:11:24.744354 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:11:24.744381 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:11:24.811220 kernel: raid6: neonx8 gen() 6440 MB/s
Jan 30 13:11:24.828195 kernel: raid6: neonx4 gen() 6446 MB/s
Jan 30 13:11:24.845190 kernel: raid6: neonx2 gen() 5383 MB/s
Jan 30 13:11:24.862192 kernel: raid6: neonx1 gen() 3919 MB/s
Jan 30 13:11:24.879190 kernel: raid6: int64x8 gen() 3584 MB/s
Jan 30 13:11:24.896192 kernel: raid6: int64x4 gen() 3675 MB/s
Jan 30 13:11:24.913190 kernel: raid6: int64x2 gen() 3552 MB/s
Jan 30 13:11:24.930952 kernel: raid6: int64x1 gen() 2746 MB/s
Jan 30 13:11:24.930993 kernel: raid6: using algorithm neonx4 gen() 6446 MB/s
Jan 30 13:11:24.948951 kernel: raid6: .... xor() 4978 MB/s, rmw enabled
Jan 30 13:11:24.948990 kernel: raid6: using neon recovery algorithm
Jan 30 13:11:24.956198 kernel: xor: measuring software checksum speed
Jan 30 13:11:24.957191 kernel: 8regs : 11915 MB/sec
Jan 30 13:11:24.958191 kernel: 32regs : 11844 MB/sec
Jan 30 13:11:24.960193 kernel: arm64_neon : 8810 MB/sec
Jan 30 13:11:24.960227 kernel: xor: using function: 8regs (11915 MB/sec)
Jan 30 13:11:25.043203 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:11:25.064234 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:11:25.073490 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:11:25.113792 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Jan 30 13:11:25.122185 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:11:25.130667 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:11:25.167836 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Jan 30 13:11:25.223058 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:11:25.237398 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:11:25.347557 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:11:25.361431 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:11:25.405677 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:11:25.413678 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:11:25.419149 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:11:25.435673 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:11:25.446447 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:11:25.494888 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:11:25.528007 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 13:11:25.528072 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 30 13:11:25.561443 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 30 13:11:25.561731 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 30 13:11:25.561980 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:eb:bd:11:a4:b9
Jan 30 13:11:25.564795 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:11:25.565056 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:11:25.569347 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:11:25.571630 (udev-worker)[539]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:11:25.587288 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:11:25.587580 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:11:25.604347 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:11:25.617243 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 30 13:11:25.617695 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:11:25.620818 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 30 13:11:25.630202 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 30 13:11:25.643184 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:11:25.643261 kernel: GPT:9289727 != 16777215
Jan 30 13:11:25.643287 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:11:25.643312 kernel: GPT:9289727 != 16777215
Jan 30 13:11:25.643335 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:11:25.644574 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:11:25.650222 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:11:25.663602 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:11:25.717964 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:11:25.734212 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by (udev-worker) (515)
Jan 30 13:11:25.773387 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (520)
Jan 30 13:11:25.843711 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 30 13:11:25.863074 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 30 13:11:25.884296 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 30 13:11:25.919001 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 30 13:11:25.921672 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 30 13:11:25.951512 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:11:25.964622 disk-uuid[661]: Primary Header is updated.
Jan 30 13:11:25.964622 disk-uuid[661]: Secondary Entries is updated.
Jan 30 13:11:25.964622 disk-uuid[661]: Secondary Header is updated.
Jan 30 13:11:25.972216 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:11:26.992377 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 13:11:26.992445 disk-uuid[662]: The operation has completed successfully.
Jan 30 13:11:27.176998 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:11:27.177245 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:11:27.231398 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:11:27.240057 sh[922]: Success
Jan 30 13:11:27.258356 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 13:11:27.398969 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:11:27.405345 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:11:27.415613 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:11:27.445584 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 30 13:11:27.445655 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:11:27.445682 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:11:27.447269 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:11:27.448462 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:11:27.568200 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 13:11:27.605410 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:11:27.609329 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:11:27.617468 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:11:27.630637 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:11:27.663043 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:11:27.663128 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:11:27.664663 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:11:27.676596 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:11:27.694140 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:11:27.696469 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:11:27.705991 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:11:27.717601 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:11:27.797889 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:11:27.811531 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:11:27.865108 systemd-networkd[1115]: lo: Link UP
Jan 30 13:11:27.865128 systemd-networkd[1115]: lo: Gained carrier
Jan 30 13:11:27.870201 systemd-networkd[1115]: Enumeration completed
Jan 30 13:11:27.870358 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:11:27.873842 systemd[1]: Reached target network.target - Network.
Jan 30 13:11:27.879288 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:11:27.879306 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:11:27.887696 systemd-networkd[1115]: eth0: Link UP
Jan 30 13:11:27.887710 systemd-networkd[1115]: eth0: Gained carrier
Jan 30 13:11:27.887726 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:11:27.904255 systemd-networkd[1115]: eth0: DHCPv4 address 172.31.25.221/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 30 13:11:28.074527 ignition[1045]: Ignition 2.20.0
Jan 30 13:11:28.074557 ignition[1045]: Stage: fetch-offline
Jan 30 13:11:28.075024 ignition[1045]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:11:28.075049 ignition[1045]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:11:28.077728 ignition[1045]: Ignition finished successfully
Jan 30 13:11:28.084485 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:11:28.094446 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:11:28.126136 ignition[1125]: Ignition 2.20.0
Jan 30 13:11:28.126188 ignition[1125]: Stage: fetch
Jan 30 13:11:28.126955 ignition[1125]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:11:28.126990 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:11:28.127613 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:11:28.137097 ignition[1125]: PUT result: OK
Jan 30 13:11:28.140302 ignition[1125]: parsed url from cmdline: ""
Jan 30 13:11:28.140324 ignition[1125]: no config URL provided
Jan 30 13:11:28.140340 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:11:28.140366 ignition[1125]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:11:28.140401 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:11:28.142036 ignition[1125]: PUT result: OK
Jan 30 13:11:28.142109 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 30 13:11:28.146261 ignition[1125]: GET result: OK
Jan 30 13:11:28.146544 ignition[1125]: parsing config with SHA512: 1d871f4050c9b5d8fd912bc6c8fbe6e4f1c54c1893184454e87855a2fc180732686a4617ba5e21c4abb6a9be76024b165c6e7832fef5483255a0f473f33e27c8
Jan 30 13:11:28.153527 unknown[1125]: fetched base config from "system"
Jan 30 13:11:28.154178 unknown[1125]: fetched base config from "system"
Jan 30 13:11:28.154603 ignition[1125]: fetch: fetch complete
Jan 30 13:11:28.154196 unknown[1125]: fetched user config from "aws"
Jan 30 13:11:28.154614 ignition[1125]: fetch: fetch passed
Jan 30 13:11:28.154696 ignition[1125]: Ignition finished successfully
Jan 30 13:11:28.166244 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:11:28.174523 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:11:28.201609 ignition[1131]: Ignition 2.20.0
Jan 30 13:11:28.201638 ignition[1131]: Stage: kargs
Jan 30 13:11:28.203195 ignition[1131]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:11:28.203224 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:11:28.203454 ignition[1131]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:11:28.210363 ignition[1131]: PUT result: OK
Jan 30 13:11:28.214776 ignition[1131]: kargs: kargs passed
Jan 30 13:11:28.214879 ignition[1131]: Ignition finished successfully
Jan 30 13:11:28.219405 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:11:28.231464 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:11:28.256580 ignition[1137]: Ignition 2.20.0
Jan 30 13:11:28.256601 ignition[1137]: Stage: disks
Jan 30 13:11:28.257780 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:11:28.257805 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:11:28.257968 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:11:28.261619 ignition[1137]: PUT result: OK
Jan 30 13:11:28.271056 ignition[1137]: disks: disks passed
Jan 30 13:11:28.271150 ignition[1137]: Ignition finished successfully
Jan 30 13:11:28.275850 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:11:28.278655 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:11:28.280828 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:11:28.283082 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:11:28.291662 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:11:28.293521 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:11:28.300479 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:11:28.348417 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:11:28.352869 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:11:28.364377 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:11:28.444216 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 30 13:11:28.445041 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:11:28.448619 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:11:28.461356 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:11:28.468406 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:11:28.471945 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:11:28.472058 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:11:28.472109 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:11:28.495734 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:11:28.501294 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:11:28.519216 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1165)
Jan 30 13:11:28.523570 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:11:28.523621 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:11:28.523648 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:11:28.530182 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:11:28.533379 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:11:28.906670 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:11:28.941330 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:11:28.950409 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:11:28.958232 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:11:29.363932 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:11:29.373371 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:11:29.377562 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:11:29.406077 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:11:29.408473 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:11:29.442748 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:11:29.453552 ignition[1278]: INFO : Ignition 2.20.0
Jan 30 13:11:29.455456 ignition[1278]: INFO : Stage: mount
Jan 30 13:11:29.457427 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:11:29.459369 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:11:29.459369 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:11:29.464540 ignition[1278]: INFO : PUT result: OK
Jan 30 13:11:29.468954 ignition[1278]: INFO : mount: mount passed
Jan 30 13:11:29.471323 ignition[1278]: INFO : Ignition finished successfully
Jan 30 13:11:29.474067 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:11:29.486750 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:11:29.509581 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:11:29.533393 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1289)
Jan 30 13:11:29.537451 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:11:29.537498 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:11:29.538674 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 13:11:29.545201 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 13:11:29.548599 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:11:29.587139 ignition[1306]: INFO : Ignition 2.20.0
Jan 30 13:11:29.587139 ignition[1306]: INFO : Stage: files
Jan 30 13:11:29.590405 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:11:29.590405 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:11:29.590405 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:11:29.597258 ignition[1306]: INFO : PUT result: OK
Jan 30 13:11:29.601359 ignition[1306]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:11:29.603882 ignition[1306]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:11:29.603882 ignition[1306]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:11:29.640624 ignition[1306]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:11:29.643357 ignition[1306]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:11:29.646273 unknown[1306]: wrote ssh authorized keys file for user: core
Jan 30 13:11:29.648641 ignition[1306]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:11:29.659320 systemd-networkd[1115]: eth0: Gained IPv6LL
Jan 30 13:11:29.666242 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:11:29.669612 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:11:29.669612 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:11:29.669612 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:11:29.669612 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:11:29.669612 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:11:29.669612 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:11:29.669612 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 30 13:11:30.162448 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 30 13:11:30.524233 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:11:30.528201 ignition[1306]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:11:30.528201 ignition[1306]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:11:30.528201 ignition[1306]: INFO : files: files passed
Jan 30 13:11:30.528201 ignition[1306]: INFO : Ignition finished successfully
Jan 30 13:11:30.539317 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:11:30.555507 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:11:30.562519 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:11:30.569015 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:11:30.569236 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:11:30.603050 initrd-setup-root-after-ignition[1334]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:11:30.603050 initrd-setup-root-after-ignition[1334]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:11:30.611081 initrd-setup-root-after-ignition[1338]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:11:30.618278 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:11:30.621497 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:11:30.641538 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:11:30.705078 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:11:30.705547 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:11:30.710149 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:11:30.715380 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:11:30.717544 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:11:30.734503 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:11:30.758620 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:11:30.768524 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:11:30.801297 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:11:30.805758 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:11:30.807005 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:11:30.807940 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:11:30.808290 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:11:30.809724 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:11:30.810110 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:11:30.810693 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:11:30.811293 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:11:30.811881 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:11:30.812216 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:11:30.812801 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:11:30.813415 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:11:30.814011 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:11:30.814870 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:11:30.815085 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:11:30.815381 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:11:30.816299 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:11:30.816686 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:11:30.817478 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:11:30.834044 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:11:30.838636 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:11:30.839098 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:11:30.845927 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:11:30.847865 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:11:30.851868 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:11:30.852079 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:11:30.890176 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:11:30.894537 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:11:30.895719 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:11:30.896657 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:11:30.898044 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:11:30.898280 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:11:30.931363 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:11:30.945584 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:11:30.945816 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:11:30.982049 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:11:30.986006 ignition[1358]: INFO : Ignition 2.20.0
Jan 30 13:11:30.986006 ignition[1358]: INFO : Stage: umount
Jan 30 13:11:30.992767 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:11:30.992767 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 13:11:30.992767 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 13:11:30.986110 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:11:31.001241 ignition[1358]: INFO : PUT result: OK
Jan 30 13:11:31.005200 ignition[1358]: INFO : umount: umount passed
Jan 30 13:11:31.007362 ignition[1358]: INFO : Ignition finished successfully
Jan 30 13:11:31.009517 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:11:31.009737 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:11:31.020297 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:11:31.020480 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:11:31.023352 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:11:31.023436 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:11:31.026978 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 13:11:31.027059 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 13:11:31.028897 systemd[1]: Stopped target network.target - Network.
Jan 30 13:11:31.030519 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:11:31.030598 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:11:31.033004 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:11:31.034692 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:11:31.039808 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:11:31.039926 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:11:31.045332 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:11:31.047102 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:11:31.047200 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:11:31.049015 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:11:31.049085 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:11:31.050949 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:11:31.051030 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:11:31.052850 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:11:31.052926 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:11:31.054892 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:11:31.054970 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:11:31.057237 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:11:31.059398 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:11:31.066613 systemd-networkd[1115]: eth0: DHCPv6 lease lost
Jan 30 13:11:31.096752 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:11:31.096975 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:11:31.125761 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:11:31.127352 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:11:31.133911 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:11:31.135265 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:11:31.143322 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:11:31.145120 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:11:31.145285 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:11:31.157032 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:11:31.157177 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:11:31.162369 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:11:31.162460 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:11:31.164475 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:11:31.164554 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:11:31.166999 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:11:31.196862 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:11:31.199240 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:11:31.207016 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:11:31.207147 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:11:31.212823 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:11:31.212906 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:11:31.215748 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:11:31.215849 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:11:31.226180 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:11:31.226291 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:11:31.229132 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 30 13:11:31.229346 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:11:31.248404 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:11:31.253257 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:11:31.253382 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:11:31.255835 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:11:31.255936 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:11:31.258456 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:11:31.258535 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:11:31.269569 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:11:31.269658 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:11:31.275314 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:11:31.275503 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:11:31.285701 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:11:31.285902 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:11:31.303572 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:11:31.320536 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:11:31.336478 systemd[1]: Switching root. Jan 30 13:11:31.369393 systemd-journald[251]: Journal stopped Jan 30 13:11:33.622504 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Jan 30 13:11:33.622629 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:11:33.622673 kernel: SELinux: policy capability open_perms=1 Jan 30 13:11:33.622710 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:11:33.622741 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:11:33.622769 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:11:33.622799 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:11:33.622844 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:11:33.622874 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:11:33.622906 kernel: audit: type=1403 audit(1738242691.895:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:11:33.622945 systemd[1]: Successfully loaded SELinux policy in 47.814ms. Jan 30 13:11:33.622984 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.276ms. Jan 30 13:11:33.623022 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:11:33.623052 systemd[1]: Detected virtualization amazon. Jan 30 13:11:33.623083 systemd[1]: Detected architecture arm64. Jan 30 13:11:33.623111 systemd[1]: Detected first boot. Jan 30 13:11:33.623145 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:11:33.623367 zram_generator::config[1401]: No configuration found. Jan 30 13:11:33.623409 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:11:33.623441 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:11:33.623470 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 30 13:11:33.623503 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:11:33.623535 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:11:33.623564 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:11:33.623604 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:11:33.623635 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:11:33.623670 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:11:33.623710 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:11:33.623744 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:11:33.623774 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:11:33.623805 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:11:33.623836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:11:33.623868 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:11:33.623899 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:11:33.623934 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:11:33.623963 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:11:33.623997 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:11:33.624028 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:11:33.624057 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Jan 30 13:11:33.624085 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:11:33.624116 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:11:33.624145 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:11:33.625271 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:11:33.625314 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:11:33.625344 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:11:33.625374 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:11:33.625403 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:11:33.625432 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:11:33.625460 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:11:33.625491 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:11:33.625522 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:11:33.625557 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:11:33.625588 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:11:33.625619 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:11:33.625650 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:11:33.625679 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:11:33.625709 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:11:33.625738 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 30 13:11:33.625768 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:11:33.625797 systemd[1]: Reached target machines.target - Containers. Jan 30 13:11:33.625833 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:11:33.625863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:11:33.625892 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:11:33.625922 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:11:33.625953 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:11:33.625983 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:11:33.626012 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:11:33.626044 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:11:33.626077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:11:33.626113 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:11:33.627220 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:11:33.627281 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:11:33.627312 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:11:33.627340 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:11:33.627369 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:11:33.627398 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 30 13:11:33.627427 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:11:33.627463 kernel: loop: module loaded Jan 30 13:11:33.627492 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:11:33.627521 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:11:33.627550 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:11:33.627582 systemd[1]: Stopped verity-setup.service. Jan 30 13:11:33.627611 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:11:33.627640 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:11:33.627668 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:11:33.627696 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:11:33.627730 kernel: fuse: init (API version 7.39) Jan 30 13:11:33.627758 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:11:33.627787 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:11:33.627819 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:11:33.627850 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:11:33.627884 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:11:33.627913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:11:33.627944 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:11:33.627973 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:11:33.628002 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:11:33.628034 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:11:33.628064 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 30 13:11:33.628096 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:11:33.628130 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:11:33.629266 kernel: ACPI: bus type drm_connector registered Jan 30 13:11:33.629311 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:11:33.629342 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:11:33.629372 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:11:33.629401 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:11:33.629438 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:11:33.629468 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:11:33.629500 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:11:33.629575 systemd-journald[1486]: Collecting audit messages is disabled. Jan 30 13:11:33.629623 systemd-journald[1486]: Journal started Jan 30 13:11:33.629679 systemd-journald[1486]: Runtime Journal (/run/log/journal/ec26f725fafb4d001b50451360351225) is 8.0M, max 75.3M, 67.3M free. Jan 30 13:11:33.034942 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:11:33.055467 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 30 13:11:33.056278 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:11:33.637251 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:11:33.647576 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:11:33.647923 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 30 13:11:33.656846 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:11:33.670203 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:11:33.685629 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:11:33.689695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:11:33.699251 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:11:33.703216 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:11:33.716234 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:11:33.719325 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:11:33.736382 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:11:33.748740 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:11:33.785252 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:11:33.785346 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:11:33.777258 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:11:33.779856 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:11:33.782351 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:11:33.785707 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:11:33.814497 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jan 30 13:11:33.860075 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:11:33.874671 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:11:33.886467 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:11:33.897213 kernel: loop0: detected capacity change from 0 to 53784 Jan 30 13:11:33.926689 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:11:33.935408 systemd-journald[1486]: Time spent on flushing to /var/log/journal/ec26f725fafb4d001b50451360351225 is 138.078ms for 896 entries. Jan 30 13:11:33.935408 systemd-journald[1486]: System Journal (/var/log/journal/ec26f725fafb4d001b50451360351225) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:11:34.095520 systemd-journald[1486]: Received client request to flush runtime journal. Jan 30 13:11:34.095603 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:11:34.095655 kernel: loop1: detected capacity change from 0 to 194096 Jan 30 13:11:33.994493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:11:33.997700 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Jan 30 13:11:33.997725 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Jan 30 13:11:34.005699 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:11:34.034933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:11:34.047514 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:11:34.077079 udevadm[1544]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 13:11:34.101638 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 30 13:11:34.138618 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:11:34.147291 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:11:34.182247 kernel: loop2: detected capacity change from 0 to 113552 Jan 30 13:11:34.203630 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:11:34.217504 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:11:34.293918 systemd-tmpfiles[1557]: ACLs are not supported, ignoring. Jan 30 13:11:34.293952 systemd-tmpfiles[1557]: ACLs are not supported, ignoring. Jan 30 13:11:34.310463 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:11:34.365560 kernel: loop3: detected capacity change from 0 to 116784 Jan 30 13:11:34.489253 kernel: loop4: detected capacity change from 0 to 53784 Jan 30 13:11:34.524347 kernel: loop5: detected capacity change from 0 to 194096 Jan 30 13:11:34.565836 kernel: loop6: detected capacity change from 0 to 113552 Jan 30 13:11:34.585205 kernel: loop7: detected capacity change from 0 to 116784 Jan 30 13:11:34.604865 (sd-merge)[1562]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 30 13:11:34.614606 (sd-merge)[1562]: Merged extensions into '/usr'. Jan 30 13:11:34.634545 systemd[1]: Reloading requested from client PID 1512 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:11:34.634834 systemd[1]: Reloading... Jan 30 13:11:34.709113 ldconfig[1508]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:11:34.859259 zram_generator::config[1594]: No configuration found. Jan 30 13:11:35.107943 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 30 13:11:35.218826 systemd[1]: Reloading finished in 582 ms. Jan 30 13:11:35.260266 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:11:35.263522 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:11:35.267841 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:11:35.283590 systemd[1]: Starting ensure-sysext.service... Jan 30 13:11:35.297389 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:11:35.308459 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:11:35.335256 systemd[1]: Reloading requested from client PID 1641 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:11:35.335306 systemd[1]: Reloading... Jan 30 13:11:35.341055 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:11:35.342234 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:11:35.344228 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:11:35.344986 systemd-tmpfiles[1642]: ACLs are not supported, ignoring. Jan 30 13:11:35.345303 systemd-tmpfiles[1642]: ACLs are not supported, ignoring. Jan 30 13:11:35.352421 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:11:35.352623 systemd-tmpfiles[1642]: Skipping /boot Jan 30 13:11:35.374499 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:11:35.374709 systemd-tmpfiles[1642]: Skipping /boot Jan 30 13:11:35.433992 systemd-udevd[1643]: Using default interface naming scheme 'v255'. Jan 30 13:11:35.525001 zram_generator::config[1675]: No configuration found. 
Jan 30 13:11:35.724728 (udev-worker)[1693]: Network interface NamePolicy= disabled on kernel command line. Jan 30 13:11:35.839209 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1697) Jan 30 13:11:35.952037 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:11:36.112986 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:11:36.113479 systemd[1]: Reloading finished in 777 ms. Jan 30 13:11:36.149540 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:11:36.155247 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:11:36.196816 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:11:36.247032 systemd[1]: Finished ensure-sysext.service. Jan 30 13:11:36.253613 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 30 13:11:36.264490 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:11:36.280153 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:11:36.282670 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:11:36.286498 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:11:36.302529 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:11:36.306959 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:11:36.327760 lvm[1839]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 30 13:11:36.326429 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:11:36.332290 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:11:36.334546 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:11:36.337458 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:11:36.344450 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:11:36.353487 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:11:36.362486 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:11:36.364894 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:11:36.371460 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:11:36.379387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:11:36.384126 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:11:36.386276 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:11:36.420387 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:11:36.420775 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:11:36.439497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:11:36.441333 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:11:36.444206 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:11:36.464906 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jan 30 13:11:36.468397 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:11:36.476091 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:11:36.482047 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:11:36.496465 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:11:36.498427 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:11:36.499805 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:11:36.501335 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:11:36.508085 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:11:36.528132 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:11:36.547409 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:11:36.554222 lvm[1866]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:11:36.578474 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:11:36.597417 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:11:36.600983 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:11:36.605660 augenrules[1884]: No rules Jan 30 13:11:36.608885 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:11:36.612250 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:11:36.634578 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 30 13:11:36.671873 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:11:36.681068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:11:36.786875 systemd-networkd[1852]: lo: Link UP Jan 30 13:11:36.787501 systemd-networkd[1852]: lo: Gained carrier Jan 30 13:11:36.790613 systemd-networkd[1852]: Enumeration completed Jan 30 13:11:36.791003 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:11:36.795670 systemd-networkd[1852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:11:36.795826 systemd-networkd[1852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:11:36.795921 systemd-resolved[1853]: Positive Trust Anchors: Jan 30 13:11:36.795942 systemd-resolved[1853]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:11:36.796004 systemd-resolved[1853]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:11:36.798693 systemd-networkd[1852]: eth0: Link UP Jan 30 13:11:36.799057 systemd-networkd[1852]: eth0: Gained carrier Jan 30 13:11:36.799096 systemd-networkd[1852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:11:36.800507 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 30 13:11:36.811308 systemd-networkd[1852]: eth0: DHCPv4 address 172.31.25.221/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 30 13:11:36.812064 systemd-resolved[1853]: Defaulting to hostname 'linux'. Jan 30 13:11:36.823904 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:11:36.826445 systemd[1]: Reached target network.target - Network. Jan 30 13:11:36.830371 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:11:36.835408 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:11:36.837987 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:11:36.840434 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:11:36.843113 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:11:36.845350 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:11:36.847959 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:11:36.850234 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:11:36.850284 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:11:36.851925 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:11:36.855471 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:11:36.860133 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:11:36.874555 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:11:36.877582 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:11:36.879773 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 30 13:11:36.881665 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:11:36.883424 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:11:36.883492 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:11:36.889386 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:11:36.906815 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:11:36.914544 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:11:36.930299 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:11:36.941405 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:11:36.943417 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:11:36.949537 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:11:36.958555 systemd[1]: Started ntpd.service - Network Time Service. Jan 30 13:11:36.963684 jq[1908]: false Jan 30 13:11:36.967683 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 30 13:11:36.980470 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:11:36.993784 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:11:37.002698 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:11:37.005939 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:11:37.006785 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 30 13:11:37.011496 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:11:37.017656 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:11:37.025856 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:11:37.028806 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:11:37.058003 (ntainerd)[1927]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:11:37.132933 jq[1923]: true Jan 30 13:11:37.140988 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:11:37.143611 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:11:37.167832 jq[1936]: true Jan 30 13:11:37.182783 extend-filesystems[1909]: Found loop4 Jan 30 13:11:37.182783 extend-filesystems[1909]: Found loop5 Jan 30 13:11:37.182783 extend-filesystems[1909]: Found loop6 Jan 30 13:11:37.182783 extend-filesystems[1909]: Found loop7 Jan 30 13:11:37.182783 extend-filesystems[1909]: Found nvme0n1 Jan 30 13:11:37.182783 extend-filesystems[1909]: Found nvme0n1p1 Jan 30 13:11:37.182783 extend-filesystems[1909]: Found nvme0n1p2 Jan 30 13:11:37.182783 extend-filesystems[1909]: Found nvme0n1p3 Jan 30 13:11:37.182783 extend-filesystems[1909]: Found usr Jan 30 13:11:37.182783 extend-filesystems[1909]: Found nvme0n1p4 Jan 30 13:11:37.182783 extend-filesystems[1909]: Found nvme0n1p6 Jan 30 13:11:37.182783 extend-filesystems[1909]: Found nvme0n1p7 Jan 30 13:11:37.182783 extend-filesystems[1909]: Found nvme0n1p9 Jan 30 13:11:37.182783 extend-filesystems[1909]: Checking size of /dev/nvme0n1p9 Jan 30 13:11:37.190337 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 30 13:11:37.193476 dbus-daemon[1907]: [system] SELinux support is enabled Jan 30 13:11:37.190810 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:11:37.273376 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:00:26 UTC 2025 (1): Starting Jan 30 13:11:37.273376 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 13:11:37.273376 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: ---------------------------------------------------- Jan 30 13:11:37.228903 dbus-daemon[1907]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1852 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 13:11:37.195351 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:11:37.252569 dbus-daemon[1907]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 13:11:37.239758 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:11:37.272065 ntpd[1913]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:00:26 UTC 2025 (1): Starting Jan 30 13:11:37.242461 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:11:37.272110 ntpd[1913]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 13:11:37.242507 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:11:37.272130 ntpd[1913]: ---------------------------------------------------- Jan 30 13:11:37.245003 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 30 13:11:37.272149 ntpd[1913]: ntp-4 is maintained by Network Time Foundation, Jan 30 13:11:37.288763 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: ntp-4 is maintained by Network Time Foundation, Jan 30 13:11:37.288763 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 13:11:37.288763 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: corporation. Support and training for ntp-4 are Jan 30 13:11:37.288763 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: available at https://www.nwtime.org/support Jan 30 13:11:37.288763 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: ---------------------------------------------------- Jan 30 13:11:37.245039 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:11:37.284588 ntpd[1913]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 13:11:37.279560 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 13:11:37.284619 ntpd[1913]: corporation. Support and training for ntp-4 are Jan 30 13:11:37.284638 ntpd[1913]: available at https://www.nwtime.org/support Jan 30 13:11:37.284656 ntpd[1913]: ---------------------------------------------------- Jan 30 13:11:37.294462 ntpd[1913]: proto: precision = 0.096 usec (-23) Jan 30 13:11:37.300321 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: proto: precision = 0.096 usec (-23) Jan 30 13:11:37.300321 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: basedate set to 2025-01-17 Jan 30 13:11:37.300321 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: gps base set to 2025-01-19 (week 2350) Jan 30 13:11:37.296710 ntpd[1913]: basedate set to 2025-01-17 Jan 30 13:11:37.296751 ntpd[1913]: gps base set to 2025-01-19 (week 2350) Jan 30 13:11:37.302783 update_engine[1921]: I20250130 13:11:37.300079 1921 main.cc:92] Flatcar Update Engine starting Jan 30 13:11:37.306584 ntpd[1913]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 13:11:37.307847 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 13:11:37.308066 
ntpd[1913]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 13:11:37.308817 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 13:11:37.312004 extend-filesystems[1909]: Resized partition /dev/nvme0n1p9 Jan 30 13:11:37.311033 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:11:37.309449 ntpd[1913]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 13:11:37.315549 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 13:11:37.315549 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: Listen normally on 3 eth0 172.31.25.221:123 Jan 30 13:11:37.315549 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: Listen normally on 4 lo [::1]:123 Jan 30 13:11:37.315549 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: bind(21) AF_INET6 fe80::4eb:bdff:fe11:a4b9%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 13:11:37.315549 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: unable to create socket on eth0 (5) for fe80::4eb:bdff:fe11:a4b9%2#123 Jan 30 13:11:37.315549 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: failed to init interface for address fe80::4eb:bdff:fe11:a4b9%2 Jan 30 13:11:37.315549 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: Listening on routing socket on fd #21 for interface updates Jan 30 13:11:37.315845 update_engine[1921]: I20250130 13:11:37.314247 1921 update_check_scheduler.cc:74] Next update check in 6m35s Jan 30 13:11:37.309521 ntpd[1913]: Listen normally on 3 eth0 172.31.25.221:123 Jan 30 13:11:37.309589 ntpd[1913]: Listen normally on 4 lo [::1]:123 Jan 30 13:11:37.309668 ntpd[1913]: bind(21) AF_INET6 fe80::4eb:bdff:fe11:a4b9%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 13:11:37.309709 ntpd[1913]: unable to create socket on eth0 (5) for fe80::4eb:bdff:fe11:a4b9%2#123 Jan 30 13:11:37.309736 ntpd[1913]: failed to init interface for address fe80::4eb:bdff:fe11:a4b9%2 Jan 30 13:11:37.309802 ntpd[1913]: Listening on routing socket on fd #21 for interface updates Jan 30 13:11:37.320649 
ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:11:37.319118 ntpd[1913]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:11:37.336193 extend-filesystems[1958]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:11:37.366860 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 30 13:11:37.366934 ntpd[1913]: 30 Jan 13:11:37 ntpd[1913]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:11:37.349224 ntpd[1913]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 13:11:37.337637 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:11:37.381190 coreos-metadata[1906]: Jan 30 13:11:37.378 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 13:11:37.381545 systemd-logind[1919]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 13:11:37.381584 systemd-logind[1919]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 30 13:11:37.386785 coreos-metadata[1906]: Jan 30 13:11:37.383 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 30 13:11:37.386463 systemd-logind[1919]: New seat seat0. Jan 30 13:11:37.393336 coreos-metadata[1906]: Jan 30 13:11:37.393 INFO Fetch successful Jan 30 13:11:37.393336 coreos-metadata[1906]: Jan 30 13:11:37.393 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 30 13:11:37.394377 coreos-metadata[1906]: Jan 30 13:11:37.394 INFO Fetch successful Jan 30 13:11:37.394377 coreos-metadata[1906]: Jan 30 13:11:37.394 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 30 13:11:37.395339 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 30 13:11:37.402460 coreos-metadata[1906]: Jan 30 13:11:37.400 INFO Fetch successful Jan 30 13:11:37.402460 coreos-metadata[1906]: Jan 30 13:11:37.400 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 30 13:11:37.404835 coreos-metadata[1906]: Jan 30 13:11:37.404 INFO Fetch successful Jan 30 13:11:37.404835 coreos-metadata[1906]: Jan 30 13:11:37.404 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 30 13:11:37.414059 coreos-metadata[1906]: Jan 30 13:11:37.413 INFO Fetch failed with 404: resource not found Jan 30 13:11:37.414059 coreos-metadata[1906]: Jan 30 13:11:37.414 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 30 13:11:37.414850 coreos-metadata[1906]: Jan 30 13:11:37.414 INFO Fetch successful Jan 30 13:11:37.415107 coreos-metadata[1906]: Jan 30 13:11:37.414 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 30 13:11:37.420522 coreos-metadata[1906]: Jan 30 13:11:37.420 INFO Fetch successful Jan 30 13:11:37.420522 coreos-metadata[1906]: Jan 30 13:11:37.420 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 30 13:11:37.465912 coreos-metadata[1906]: Jan 30 13:11:37.421 INFO Fetch successful Jan 30 13:11:37.465912 coreos-metadata[1906]: Jan 30 13:11:37.421 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 30 13:11:37.465912 coreos-metadata[1906]: Jan 30 13:11:37.427 INFO Fetch successful Jan 30 13:11:37.465912 coreos-metadata[1906]: Jan 30 13:11:37.427 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 30 13:11:37.465912 coreos-metadata[1906]: Jan 30 13:11:37.428 INFO Fetch successful Jan 30 13:11:37.463719 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jan 30 13:11:37.479864 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 30 13:11:37.514102 extend-filesystems[1958]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 30 13:11:37.514102 extend-filesystems[1958]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:11:37.514102 extend-filesystems[1958]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 30 13:11:37.526670 extend-filesystems[1909]: Resized filesystem in /dev/nvme0n1p9 Jan 30 13:11:37.528457 bash[1972]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:11:37.528784 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:11:37.529173 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:11:37.536966 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:11:37.565714 systemd[1]: Starting sshkeys.service... Jan 30 13:11:37.580497 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:11:37.588862 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:11:37.632913 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:11:37.644208 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 30 13:11:37.722263 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1685) Jan 30 13:11:37.724959 locksmithd[1960]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:11:37.759271 containerd[1927]: time="2025-01-30T13:11:37.758077378Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:11:37.773741 dbus-daemon[1907]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 13:11:37.774152 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 13:11:37.778463 dbus-daemon[1907]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1950 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 13:11:37.805523 systemd[1]: Starting polkit.service - Authorization Manager... Jan 30 13:11:37.828678 polkitd[2023]: Started polkitd version 121 Jan 30 13:11:37.848776 polkitd[2023]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 13:11:37.848898 polkitd[2023]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 13:11:37.850487 polkitd[2023]: Finished loading, compiling and executing 2 rules Jan 30 13:11:37.851294 dbus-daemon[1907]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 13:11:37.851962 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 13:11:37.855081 polkitd[2023]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 13:11:37.899568 systemd-resolved[1853]: System hostname changed to 'ip-172-31-25-221'. Jan 30 13:11:37.899699 systemd-hostnamed[1950]: Hostname set to (transient) Jan 30 13:11:37.932378 containerd[1927]: time="2025-01-30T13:11:37.929836823Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:11:37.939142 containerd[1927]: time="2025-01-30T13:11:37.939053543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:11:37.939142 containerd[1927]: time="2025-01-30T13:11:37.939122903Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:11:37.940277 containerd[1927]: time="2025-01-30T13:11:37.940205939Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:11:37.940599 containerd[1927]: time="2025-01-30T13:11:37.940556747Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:11:37.940657 containerd[1927]: time="2025-01-30T13:11:37.940604075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:11:37.940797 containerd[1927]: time="2025-01-30T13:11:37.940748915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:11:37.940853 containerd[1927]: time="2025-01-30T13:11:37.940795475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:11:37.941178 containerd[1927]: time="2025-01-30T13:11:37.941119955Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:11:37.941257 containerd[1927]: time="2025-01-30T13:11:37.941191487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:11:37.941257 containerd[1927]: time="2025-01-30T13:11:37.941238455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:11:37.941337 containerd[1927]: time="2025-01-30T13:11:37.941264663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:11:37.941471 containerd[1927]: time="2025-01-30T13:11:37.941433131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:11:37.941890 containerd[1927]: time="2025-01-30T13:11:37.941848571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:11:37.943186 containerd[1927]: time="2025-01-30T13:11:37.942059111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:11:37.943186 containerd[1927]: time="2025-01-30T13:11:37.942106199Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:11:37.943186 containerd[1927]: time="2025-01-30T13:11:37.942371267Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 30 13:11:37.943186 containerd[1927]: time="2025-01-30T13:11:37.942485807Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:11:37.944831 coreos-metadata[1997]: Jan 30 13:11:37.944 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 13:11:37.949079 coreos-metadata[1997]: Jan 30 13:11:37.947 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 30 13:11:37.950092 coreos-metadata[1997]: Jan 30 13:11:37.949 INFO Fetch successful Jan 30 13:11:37.950092 coreos-metadata[1997]: Jan 30 13:11:37.949 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 13:11:37.951710 containerd[1927]: time="2025-01-30T13:11:37.950730251Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:11:37.951710 containerd[1927]: time="2025-01-30T13:11:37.950842187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:11:37.951710 containerd[1927]: time="2025-01-30T13:11:37.951267359Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:11:37.951710 containerd[1927]: time="2025-01-30T13:11:37.951320207Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:11:37.951710 containerd[1927]: time="2025-01-30T13:11:37.951359675Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:11:37.951962 coreos-metadata[1997]: Jan 30 13:11:37.951 INFO Fetch successful Jan 30 13:11:37.953727 containerd[1927]: time="2025-01-30T13:11:37.952193075Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 30 13:11:37.953727 containerd[1927]: time="2025-01-30T13:11:37.952773143Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:11:37.953727 containerd[1927]: time="2025-01-30T13:11:37.953004011Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:11:37.953727 containerd[1927]: time="2025-01-30T13:11:37.953036927Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:11:37.953727 containerd[1927]: time="2025-01-30T13:11:37.953070707Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:11:37.953727 containerd[1927]: time="2025-01-30T13:11:37.953101259Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:11:37.953727 containerd[1927]: time="2025-01-30T13:11:37.953133083Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954299375Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954368111Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954407363Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954445283Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954475715Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954504263Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954546251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954580211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954614339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954645371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954674735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954705911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954733439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.954905 containerd[1927]: time="2025-01-30T13:11:37.954764243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.957657 containerd[1927]: time="2025-01-30T13:11:37.954796991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 30 13:11:37.957657 containerd[1927]: time="2025-01-30T13:11:37.955442075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.957657 containerd[1927]: time="2025-01-30T13:11:37.955505579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.957657 containerd[1927]: time="2025-01-30T13:11:37.955637015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.957657 containerd[1927]: time="2025-01-30T13:11:37.955690859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.957657 containerd[1927]: time="2025-01-30T13:11:37.955747499Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:11:37.957657 containerd[1927]: time="2025-01-30T13:11:37.955814411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.957657 containerd[1927]: time="2025-01-30T13:11:37.955861451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.957657 containerd[1927]: time="2025-01-30T13:11:37.955900211Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:11:37.957657 containerd[1927]: time="2025-01-30T13:11:37.956077319Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:11:37.957657 containerd[1927]: time="2025-01-30T13:11:37.956129387Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:11:37.957657 containerd[1927]: time="2025-01-30T13:11:37.956188955Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:11:37.957657 containerd[1927]: time="2025-01-30T13:11:37.956232551Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:11:37.958222 containerd[1927]: time="2025-01-30T13:11:37.956266727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:11:37.958222 containerd[1927]: time="2025-01-30T13:11:37.956298719Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:11:37.958222 containerd[1927]: time="2025-01-30T13:11:37.956332559Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:11:37.958222 containerd[1927]: time="2025-01-30T13:11:37.956370299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1
Jan 30 13:11:37.958385 containerd[1927]: time="2025-01-30T13:11:37.956948723Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 13:11:37.958385 containerd[1927]: time="2025-01-30T13:11:37.957055559Z" level=info msg="Connect containerd service"
Jan 30 13:11:37.958804 unknown[1997]: wrote ssh authorized keys file for user: core
Jan 30 13:11:37.967452 containerd[1927]: time="2025-01-30T13:11:37.957152927Z" level=info msg="using legacy CRI server"
Jan 30 13:11:37.967452 containerd[1927]: time="2025-01-30T13:11:37.966740891Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 13:11:37.967452 containerd[1927]: time="2025-01-30T13:11:37.967042679Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 13:11:37.984451 containerd[1927]: time="2025-01-30T13:11:37.981069035Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:11:37.984451 containerd[1927]: time="2025-01-30T13:11:37.981460991Z" level=info msg="Start subscribing containerd event"
Jan 30 13:11:37.984451 containerd[1927]: time="2025-01-30T13:11:37.981545459Z" level=info msg="Start recovering state"
Jan 30 13:11:37.984451 containerd[1927]: time="2025-01-30T13:11:37.981702275Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 13:11:37.984451 containerd[1927]: time="2025-01-30T13:11:37.982260419Z" level=info msg="Start event monitor"
Jan 30 13:11:37.984451 containerd[1927]: time="2025-01-30T13:11:37.982302527Z" level=info msg="Start snapshots syncer"
Jan 30 13:11:37.984451 containerd[1927]: time="2025-01-30T13:11:37.982327787Z" level=info msg="Start cni network conf syncer for default"
Jan 30 13:11:37.984451 containerd[1927]: time="2025-01-30T13:11:37.982346543Z" level=info msg="Start streaming server"
Jan 30 13:11:37.984451 containerd[1927]: time="2025-01-30T13:11:37.984328979Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 13:11:37.988351 containerd[1927]: time="2025-01-30T13:11:37.988303739Z" level=info msg="containerd successfully booted in 0.236222s"
Jan 30 13:11:37.988440 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 13:11:38.025192 update-ssh-keys[2077]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:11:38.032232 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 13:11:38.038698 systemd[1]: Finished sshkeys.service.
Jan 30 13:11:38.171307 systemd-networkd[1852]: eth0: Gained IPv6LL
Jan 30 13:11:38.177810 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 13:11:38.185301 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 13:11:38.198095 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 30 13:11:38.211470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:11:38.225300 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 13:11:38.315232 amazon-ssm-agent[2109]: Initializing new seelog logger
Jan 30 13:11:38.315232 amazon-ssm-agent[2109]: New Seelog Logger Creation Complete
Jan 30 13:11:38.315232 amazon-ssm-agent[2109]: 2025/01/30 13:11:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:11:38.315232 amazon-ssm-agent[2109]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:11:38.315232 amazon-ssm-agent[2109]: 2025/01/30 13:11:38 processing appconfig overrides
Jan 30 13:11:38.315976 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 13:11:38.319904 amazon-ssm-agent[2109]: 2025/01/30 13:11:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:11:38.319904 amazon-ssm-agent[2109]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:11:38.319904 amazon-ssm-agent[2109]: 2025/01/30 13:11:38 processing appconfig overrides
Jan 30 13:11:38.322412 amazon-ssm-agent[2109]: 2025/01/30 13:11:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:11:38.322412 amazon-ssm-agent[2109]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:11:38.322587 amazon-ssm-agent[2109]: 2025/01/30 13:11:38 processing appconfig overrides
Jan 30 13:11:38.323066 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO Proxy environment variables:
Jan 30 13:11:38.326180 amazon-ssm-agent[2109]: 2025/01/30 13:11:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:11:38.326180 amazon-ssm-agent[2109]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 30 13:11:38.326354 amazon-ssm-agent[2109]: 2025/01/30 13:11:38 processing appconfig overrides
Jan 30 13:11:38.425249 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO https_proxy:
Jan 30 13:11:38.525183 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO http_proxy:
Jan 30 13:11:38.623713 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO no_proxy:
Jan 30 13:11:38.722469 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO Checking if agent identity type OnPrem can be assumed
Jan 30 13:11:38.820797 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO Checking if agent identity type EC2 can be assumed
Jan 30 13:11:38.920316 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO Agent will take identity from EC2
Jan 30 13:11:39.019070 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 30 13:11:39.121179 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 30 13:11:39.220871 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 30 13:11:39.319465 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jan 30 13:11:39.357110 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Jan 30 13:11:39.358238 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO [amazon-ssm-agent] Starting Core Agent
Jan 30 13:11:39.358379 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jan 30 13:11:39.358492 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO [Registrar] Starting registrar module
Jan 30 13:11:39.358605 amazon-ssm-agent[2109]: 2025-01-30 13:11:38 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jan 30 13:11:39.358716 amazon-ssm-agent[2109]: 2025-01-30 13:11:39 INFO [EC2Identity] EC2 registration was successful.
Jan 30 13:11:39.358826 amazon-ssm-agent[2109]: 2025-01-30 13:11:39 INFO [CredentialRefresher] credentialRefresher has started
Jan 30 13:11:39.358955 amazon-ssm-agent[2109]: 2025-01-30 13:11:39 INFO [CredentialRefresher] Starting credentials refresher loop
Jan 30 13:11:39.359064 amazon-ssm-agent[2109]: 2025-01-30 13:11:39 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jan 30 13:11:39.420519 amazon-ssm-agent[2109]: 2025-01-30 13:11:39 INFO [CredentialRefresher] Next credential rotation will be in 30.258291577333335 minutes
Jan 30 13:11:39.572656 sshd_keygen[1959]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 13:11:39.616985 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 13:11:39.628787 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 13:11:39.638075 systemd[1]: Started sshd@0-172.31.25.221:22-139.178.68.195:43604.service - OpenSSH per-connection server daemon (139.178.68.195:43604).
Jan 30 13:11:39.658534 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 13:11:39.660305 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 13:11:39.673709 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 13:11:39.721885 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 13:11:39.733081 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 13:11:39.745808 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 30 13:11:39.749126 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 13:11:39.889844 sshd[2136]: Accepted publickey for core from 139.178.68.195 port 43604 ssh2: RSA SHA256:IIBz/o2IbjR31YTBk0KuifCBKNY8VNSxNJe4HmctfY0
Jan 30 13:11:39.892809 sshd-session[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:11:39.914990 systemd-logind[1919]: New session 1 of user core.
Jan 30 13:11:39.916936 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 30 13:11:39.926879 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 30 13:11:39.969842 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 30 13:11:39.979984 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 30 13:11:40.003614 (systemd)[2147]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 30 13:11:40.226370 systemd[2147]: Queued start job for default target default.target.
Jan 30 13:11:40.237309 systemd[2147]: Created slice app.slice - User Application Slice.
Jan 30 13:11:40.237377 systemd[2147]: Reached target paths.target - Paths.
Jan 30 13:11:40.237411 systemd[2147]: Reached target timers.target - Timers.
Jan 30 13:11:40.239866 systemd[2147]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 30 13:11:40.276479 systemd[2147]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 30 13:11:40.277048 systemd[2147]: Reached target sockets.target - Sockets.
Jan 30 13:11:40.277099 systemd[2147]: Reached target basic.target - Basic System.
Jan 30 13:11:40.277276 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 30 13:11:40.279325 systemd[2147]: Reached target default.target - Main User Target.
Jan 30 13:11:40.279421 systemd[2147]: Startup finished in 263ms.
Jan 30 13:11:40.285275 ntpd[1913]: Listen normally on 6 eth0 [fe80::4eb:bdff:fe11:a4b9%2]:123
Jan 30 13:11:40.287506 ntpd[1913]: 30 Jan 13:11:40 ntpd[1913]: Listen normally on 6 eth0 [fe80::4eb:bdff:fe11:a4b9%2]:123
Jan 30 13:11:40.287514 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 30 13:11:40.390234 amazon-ssm-agent[2109]: 2025-01-30 13:11:40 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jan 30 13:11:40.455482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:11:40.459804 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 13:11:40.480359 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 13:11:40.480633 systemd[1]: Started sshd@1-172.31.25.221:22-139.178.68.195:43616.service - OpenSSH per-connection server daemon (139.178.68.195:43616).
Jan 30 13:11:40.484142 systemd[1]: Startup finished in 1.085s (kernel) + 8.111s (initrd) + 8.634s (userspace) = 17.831s.
Jan 30 13:11:40.495968 amazon-ssm-agent[2109]: 2025-01-30 13:11:40 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2160) started
Jan 30 13:11:40.534086 agetty[2142]: failed to open credentials directory
Jan 30 13:11:40.543148 agetty[2144]: failed to open credentials directory
Jan 30 13:11:40.595386 amazon-ssm-agent[2109]: 2025-01-30 13:11:40 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jan 30 13:11:40.735494 sshd[2169]: Accepted publickey for core from 139.178.68.195 port 43616 ssh2: RSA SHA256:IIBz/o2IbjR31YTBk0KuifCBKNY8VNSxNJe4HmctfY0
Jan 30 13:11:40.736141 sshd-session[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:11:40.744501 systemd-logind[1919]: New session 2 of user core.
Jan 30 13:11:40.750486 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 30 13:11:40.884611 sshd[2180]: Connection closed by 139.178.68.195 port 43616
Jan 30 13:11:40.885500 sshd-session[2169]: pam_unix(sshd:session): session closed for user core
Jan 30 13:11:40.891014 systemd[1]: sshd@1-172.31.25.221:22-139.178.68.195:43616.service: Deactivated successfully.
Jan 30 13:11:40.894809 systemd[1]: session-2.scope: Deactivated successfully.
Jan 30 13:11:40.898944 systemd-logind[1919]: Session 2 logged out. Waiting for processes to exit.
Jan 30 13:11:40.900857 systemd-logind[1919]: Removed session 2.
Jan 30 13:11:40.922677 systemd[1]: Started sshd@2-172.31.25.221:22-139.178.68.195:43626.service - OpenSSH per-connection server daemon (139.178.68.195:43626).
Jan 30 13:11:41.098996 sshd[2189]: Accepted publickey for core from 139.178.68.195 port 43626 ssh2: RSA SHA256:IIBz/o2IbjR31YTBk0KuifCBKNY8VNSxNJe4HmctfY0
Jan 30 13:11:41.102052 sshd-session[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:11:41.110584 systemd-logind[1919]: New session 3 of user core.
Jan 30 13:11:41.119490 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 30 13:11:41.237562 sshd[2191]: Connection closed by 139.178.68.195 port 43626
Jan 30 13:11:41.238615 sshd-session[2189]: pam_unix(sshd:session): session closed for user core
Jan 30 13:11:41.246092 systemd[1]: sshd@2-172.31.25.221:22-139.178.68.195:43626.service: Deactivated successfully.
Jan 30 13:11:41.246929 systemd-logind[1919]: Session 3 logged out. Waiting for processes to exit.
Jan 30 13:11:41.251935 systemd[1]: session-3.scope: Deactivated successfully.
Jan 30 13:11:41.257815 systemd-logind[1919]: Removed session 3.
Jan 30 13:11:41.278840 systemd[1]: Started sshd@3-172.31.25.221:22-139.178.68.195:43632.service - OpenSSH per-connection server daemon (139.178.68.195:43632).
Jan 30 13:11:41.463639 sshd[2196]: Accepted publickey for core from 139.178.68.195 port 43632 ssh2: RSA SHA256:IIBz/o2IbjR31YTBk0KuifCBKNY8VNSxNJe4HmctfY0
Jan 30 13:11:41.466907 sshd-session[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:11:41.477307 systemd-logind[1919]: New session 4 of user core.
Jan 30 13:11:41.484467 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 30 13:11:41.613850 sshd[2198]: Connection closed by 139.178.68.195 port 43632
Jan 30 13:11:41.614684 sshd-session[2196]: pam_unix(sshd:session): session closed for user core
Jan 30 13:11:41.621823 systemd[1]: sshd@3-172.31.25.221:22-139.178.68.195:43632.service: Deactivated successfully.
Jan 30 13:11:41.625813 systemd[1]: session-4.scope: Deactivated successfully.
Jan 30 13:11:41.628114 systemd-logind[1919]: Session 4 logged out. Waiting for processes to exit.
Jan 30 13:11:41.630258 systemd-logind[1919]: Removed session 4.
Jan 30 13:11:41.661488 systemd[1]: Started sshd@4-172.31.25.221:22-139.178.68.195:43638.service - OpenSSH per-connection server daemon (139.178.68.195:43638).
Jan 30 13:11:41.848051 sshd[2205]: Accepted publickey for core from 139.178.68.195 port 43638 ssh2: RSA SHA256:IIBz/o2IbjR31YTBk0KuifCBKNY8VNSxNJe4HmctfY0
Jan 30 13:11:41.850063 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:11:41.863397 systemd-logind[1919]: New session 5 of user core.
Jan 30 13:11:41.869502 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 13:11:41.872776 kubelet[2167]: E0130 13:11:41.872692 2167 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 13:11:41.874002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 13:11:41.874344 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 13:11:41.874767 systemd[1]: kubelet.service: Consumed 1.319s CPU time.
Jan 30 13:11:41.995721 sudo[2210]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 13:11:41.996577 sudo[2210]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:11:42.012265 sudo[2210]: pam_unix(sudo:session): session closed for user root
Jan 30 13:11:42.036001 sshd[2209]: Connection closed by 139.178.68.195 port 43638
Jan 30 13:11:42.037101 sshd-session[2205]: pam_unix(sshd:session): session closed for user core
Jan 30 13:11:42.043983 systemd[1]: sshd@4-172.31.25.221:22-139.178.68.195:43638.service: Deactivated successfully.
Jan 30 13:11:42.047390 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 13:11:42.050236 systemd-logind[1919]: Session 5 logged out. Waiting for processes to exit.
Jan 30 13:11:42.052361 systemd-logind[1919]: Removed session 5.
Jan 30 13:11:42.077705 systemd[1]: Started sshd@5-172.31.25.221:22-139.178.68.195:43646.service - OpenSSH per-connection server daemon (139.178.68.195:43646).
Jan 30 13:11:42.273296 sshd[2215]: Accepted publickey for core from 139.178.68.195 port 43646 ssh2: RSA SHA256:IIBz/o2IbjR31YTBk0KuifCBKNY8VNSxNJe4HmctfY0
Jan 30 13:11:42.276725 sshd-session[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:11:42.285497 systemd-logind[1919]: New session 6 of user core.
Jan 30 13:11:42.297432 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 13:11:42.402045 sudo[2219]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 13:11:42.402848 sudo[2219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:11:42.409073 sudo[2219]: pam_unix(sudo:session): session closed for user root
Jan 30 13:11:42.419210 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 30 13:11:42.419835 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:11:42.439725 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:11:42.500801 augenrules[2241]: No rules
Jan 30 13:11:42.502928 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:11:42.503395 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:11:42.505530 sudo[2218]: pam_unix(sudo:session): session closed for user root
Jan 30 13:11:42.529310 sshd[2217]: Connection closed by 139.178.68.195 port 43646
Jan 30 13:11:42.530032 sshd-session[2215]: pam_unix(sshd:session): session closed for user core
Jan 30 13:11:42.534993 systemd[1]: sshd@5-172.31.25.221:22-139.178.68.195:43646.service: Deactivated successfully.
Jan 30 13:11:42.538494 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 13:11:42.541488 systemd-logind[1919]: Session 6 logged out. Waiting for processes to exit.
Jan 30 13:11:42.543611 systemd-logind[1919]: Removed session 6.
Jan 30 13:11:42.567676 systemd[1]: Started sshd@6-172.31.25.221:22-139.178.68.195:43654.service - OpenSSH per-connection server daemon (139.178.68.195:43654).
Jan 30 13:11:42.750927 sshd[2249]: Accepted publickey for core from 139.178.68.195 port 43654 ssh2: RSA SHA256:IIBz/o2IbjR31YTBk0KuifCBKNY8VNSxNJe4HmctfY0
Jan 30 13:11:42.753446 sshd-session[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:11:42.760913 systemd-logind[1919]: New session 7 of user core.
Jan 30 13:11:42.770418 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 13:11:42.872358 sudo[2252]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 13:11:42.873654 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 13:11:44.054714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:11:44.055067 systemd[1]: kubelet.service: Consumed 1.319s CPU time.
Jan 30 13:11:44.062694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:11:44.106816 systemd[1]: Reloading requested from client PID 2290 ('systemctl') (unit session-7.scope)...
Jan 30 13:11:44.107000 systemd[1]: Reloading...
Jan 30 13:11:44.350210 zram_generator::config[2333]: No configuration found.
Jan 30 13:11:44.594706 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:11:44.765109 systemd[1]: Reloading finished in 657 ms.
Jan 30 13:11:44.855892 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 30 13:11:44.856071 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 30 13:11:44.856956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:11:44.862696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:11:45.147541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:11:45.167720 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:11:45.247208 kubelet[2393]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:11:45.247208 kubelet[2393]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:11:45.247208 kubelet[2393]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:11:45.247760 kubelet[2393]: I0130 13:11:45.247349 2393 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:11:46.262216 kubelet[2393]: I0130 13:11:46.260778 2393 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 13:11:46.262216 kubelet[2393]: I0130 13:11:46.260820 2393 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:11:46.262216 kubelet[2393]: I0130 13:11:46.261155 2393 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 13:11:46.297628 kubelet[2393]: I0130 13:11:46.297155 2393 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:11:46.312547 kubelet[2393]: I0130 13:11:46.312503 2393 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:11:46.313072 kubelet[2393]: I0130 13:11:46.313014 2393 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:11:46.313409 kubelet[2393]: I0130 13:11:46.313076 2393 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.25.221","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 13:11:46.313587 kubelet[2393]: I0130 13:11:46.313450 2393 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:11:46.313587 kubelet[2393]: I0130 13:11:46.313472 2393 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 13:11:46.313744 kubelet[2393]: I0130 13:11:46.313710 2393 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:11:46.315407 kubelet[2393]: I0130 13:11:46.315368 2393 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 13:11:46.315407 kubelet[2393]: I0130 13:11:46.315408 2393 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:11:46.315576 kubelet[2393]: I0130 13:11:46.315527 2393 kubelet.go:312] "Adding apiserver pod source"
Jan 30 13:11:46.315629 kubelet[2393]: I0130 13:11:46.315604 2393 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:11:46.317486 kubelet[2393]: E0130 13:11:46.316444 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:11:46.317486 kubelet[2393]: E0130 13:11:46.316568 2393 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:11:46.317882 kubelet[2393]: I0130 13:11:46.317779 2393 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 13:11:46.318441 kubelet[2393]: I0130 13:11:46.318398 2393 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:11:46.318524 kubelet[2393]: W0130 13:11:46.318506 2393 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 13:11:46.319872 kubelet[2393]: I0130 13:11:46.319826 2393 server.go:1264] "Started kubelet"
Jan 30 13:11:46.323334 kubelet[2393]: I0130 13:11:46.323248 2393 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:11:46.325885 kubelet[2393]: I0130 13:11:46.325085 2393 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:11:46.325885 kubelet[2393]: I0130 13:11:46.325652 2393 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:11:46.327802 kubelet[2393]: I0130 13:11:46.327744 2393 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:11:46.338215 kubelet[2393]: I0130 13:11:46.338030 2393 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 13:11:46.342911 kubelet[2393]: I0130 13:11:46.342810 2393 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 13:11:46.345117 kubelet[2393]: I0130 13:11:46.344829 2393 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 13:11:46.345117 kubelet[2393]: I0130 13:11:46.344978 2393 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:11:46.347684 kubelet[2393]: E0130 13:11:46.347460 2393 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.221.181f7a88239adf15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.221,UID:172.31.25.221,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.25.221,},FirstTimestamp:2025-01-30 13:11:46.319793941 +0000 UTC m=+1.145741882,LastTimestamp:2025-01-30 13:11:46.319793941 +0000 UTC m=+1.145741882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.221,}"
Jan 30 13:11:46.347975 kubelet[2393]: W0130 13:11:46.347715 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 30 13:11:46.347975 kubelet[2393]: E0130 13:11:46.347755 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 30 13:11:46.347975 kubelet[2393]: W0130 13:11:46.347912 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.25.221" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 30 13:11:46.347975 kubelet[2393]: E0130 13:11:46.347938 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.25.221" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 30 13:11:46.350516 kubelet[2393]: E0130 13:11:46.349997 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.25.221\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 30 13:11:46.350516 kubelet[2393]: W0130 13:11:46.350214 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 30 13:11:46.350516 kubelet[2393]: E0130 13:11:46.350255 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 30 13:11:46.350516 kubelet[2393]: I0130 13:11:46.350441 2393 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:11:46.350838 kubelet[2393]: I0130 13:11:46.350601 2393 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:11:46.351655 kubelet[2393]: E0130 13:11:46.351539 2393 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:11:46.353137 kubelet[2393]: E0130 13:11:46.352900 2393 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.25.221.181f7a88257efff1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.25.221,UID:172.31.25.221,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.25.221,},FirstTimestamp:2025-01-30 13:11:46.351521777 +0000 UTC m=+1.177469742,LastTimestamp:2025-01-30 13:11:46.351521777 +0000 UTC m=+1.177469742,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.25.221,}"
Jan 30 13:11:46.357026 kubelet[2393]: I0130 13:11:46.356955 2393 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:11:46.399644 kubelet[2393]: I0130 13:11:46.399612 2393 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:11:46.399986 kubelet[2393]: I0130 13:11:46.399886 2393 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:11:46.400240 kubelet[2393]: I0130 13:11:46.399925 2393 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:11:46.402990 kubelet[2393]: I0130 13:11:46.402955 2393 policy_none.go:49] "None policy: Start"
Jan 30 13:11:46.406261 kubelet[2393]: I0130 13:11:46.405640 2393 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:11:46.406261 kubelet[2393]: I0130 13:11:46.405683 2393 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:11:46.420587 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 13:11:46.439726 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 13:11:46.448434 kubelet[2393]: I0130 13:11:46.446801 2393 kubelet_node_status.go:73] "Attempting to register node" node="172.31.25.221"
Jan 30 13:11:46.449494 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 13:11:46.453827 kubelet[2393]: I0130 13:11:46.453771 2393 kubelet_node_status.go:76] "Successfully registered node" node="172.31.25.221"
Jan 30 13:11:46.458056 kubelet[2393]: I0130 13:11:46.457963 2393 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:11:46.462246 kubelet[2393]: I0130 13:11:46.462144 2393 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:11:46.462637 kubelet[2393]: I0130 13:11:46.462480 2393 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:11:46.462765 kubelet[2393]: I0130 13:11:46.462733 2393 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:11:46.463090 kubelet[2393]: I0130 13:11:46.462988 2393 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:11:46.463090 kubelet[2393]: I0130 13:11:46.463056 2393 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:11:46.463090 kubelet[2393]: I0130 13:11:46.463089 2393 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 13:11:46.468063 kubelet[2393]: E0130 13:11:46.463154 2393 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 30 13:11:46.537419 kubelet[2393]: E0130 13:11:46.537276 2393 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.25.221\" not found"
Jan 30 13:11:46.558276 kubelet[2393]: E0130 13:11:46.558240 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.221\" not found"
Jan 30 13:11:46.658607 kubelet[2393]: E0130 13:11:46.658546 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.221\" not found"
Jan 30 13:11:46.685743 sudo[2252]: pam_unix(sudo:session): session closed for user root
Jan 30 13:11:46.708136 sshd[2251]: Connection closed by 139.178.68.195 port 43654
Jan 30 13:11:46.708978 sshd-session[2249]: pam_unix(sshd:session): session closed for user core
Jan 30 13:11:46.715881 systemd[1]: sshd@6-172.31.25.221:22-139.178.68.195:43654.service: Deactivated successfully.
Jan 30 13:11:46.719699 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:11:46.721565 systemd-logind[1919]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:11:46.723294 systemd-logind[1919]: Removed session 7. Jan 30 13:11:46.759272 kubelet[2393]: E0130 13:11:46.759212 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.221\" not found" Jan 30 13:11:46.860238 kubelet[2393]: E0130 13:11:46.860086 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.221\" not found" Jan 30 13:11:46.960766 kubelet[2393]: E0130 13:11:46.960702 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.221\" not found" Jan 30 13:11:47.061360 kubelet[2393]: E0130 13:11:47.061312 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.221\" not found" Jan 30 13:11:47.162099 kubelet[2393]: E0130 13:11:47.161979 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.221\" not found" Jan 30 13:11:47.262737 kubelet[2393]: E0130 13:11:47.262679 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.221\" not found" Jan 30 13:11:47.272019 kubelet[2393]: I0130 13:11:47.271975 2393 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 13:11:47.272368 kubelet[2393]: W0130 13:11:47.272235 2393 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:11:47.317523 kubelet[2393]: E0130 13:11:47.317457 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:11:47.363441 
kubelet[2393]: E0130 13:11:47.363382 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.221\" not found" Jan 30 13:11:47.464586 kubelet[2393]: E0130 13:11:47.464082 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.25.221\" not found" Jan 30 13:11:47.566127 kubelet[2393]: I0130 13:11:47.566031 2393 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 13:11:47.566544 containerd[1927]: time="2025-01-30T13:11:47.566488177Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:11:47.567320 kubelet[2393]: I0130 13:11:47.566819 2393 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 13:11:48.317955 kubelet[2393]: E0130 13:11:48.317888 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:11:48.318907 kubelet[2393]: I0130 13:11:48.318599 2393 apiserver.go:52] "Watching apiserver" Jan 30 13:11:48.324767 kubelet[2393]: I0130 13:11:48.324695 2393 topology_manager.go:215] "Topology Admit Handler" podUID="fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" podNamespace="kube-system" podName="cilium-g95b9" Jan 30 13:11:48.324912 kubelet[2393]: I0130 13:11:48.324891 2393 topology_manager.go:215] "Topology Admit Handler" podUID="5479001c-34d5-4774-823a-06873678c71c" podNamespace="kube-system" podName="kube-proxy-srhrn" Jan 30 13:11:48.337804 systemd[1]: Created slice kubepods-burstable-podfd552757_6fc0_4c32_a9b4_1a6f1bd0dbfe.slice - libcontainer container kubepods-burstable-podfd552757_6fc0_4c32_a9b4_1a6f1bd0dbfe.slice. 
Jan 30 13:11:48.345876 kubelet[2393]: I0130 13:11:48.345822 2393 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:11:48.355843 kubelet[2393]: I0130 13:11:48.355786 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-clustermesh-secrets\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " pod="kube-system/cilium-g95b9" Jan 30 13:11:48.355996 kubelet[2393]: I0130 13:11:48.355851 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-hubble-tls\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " pod="kube-system/cilium-g95b9" Jan 30 13:11:48.355996 kubelet[2393]: I0130 13:11:48.355894 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5479001c-34d5-4774-823a-06873678c71c-kube-proxy\") pod \"kube-proxy-srhrn\" (UID: \"5479001c-34d5-4774-823a-06873678c71c\") " pod="kube-system/kube-proxy-srhrn" Jan 30 13:11:48.355996 kubelet[2393]: I0130 13:11:48.355930 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5479001c-34d5-4774-823a-06873678c71c-xtables-lock\") pod \"kube-proxy-srhrn\" (UID: \"5479001c-34d5-4774-823a-06873678c71c\") " pod="kube-system/kube-proxy-srhrn" Jan 30 13:11:48.356150 kubelet[2393]: I0130 13:11:48.356001 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cilium-run\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " 
pod="kube-system/cilium-g95b9" Jan 30 13:11:48.356150 kubelet[2393]: I0130 13:11:48.356036 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-bpf-maps\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " pod="kube-system/cilium-g95b9" Jan 30 13:11:48.356150 kubelet[2393]: I0130 13:11:48.356074 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-577rs\" (UniqueName: \"kubernetes.io/projected/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-kube-api-access-577rs\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " pod="kube-system/cilium-g95b9" Jan 30 13:11:48.356150 kubelet[2393]: I0130 13:11:48.356111 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cni-path\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " pod="kube-system/cilium-g95b9" Jan 30 13:11:48.356150 kubelet[2393]: I0130 13:11:48.356145 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-etc-cni-netd\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " pod="kube-system/cilium-g95b9" Jan 30 13:11:48.356150 kubelet[2393]: I0130 13:11:48.356209 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-lib-modules\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " pod="kube-system/cilium-g95b9" Jan 30 13:11:48.356964 kubelet[2393]: I0130 13:11:48.356260 2393 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-xtables-lock\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " pod="kube-system/cilium-g95b9" Jan 30 13:11:48.356964 kubelet[2393]: I0130 13:11:48.356294 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cilium-config-path\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " pod="kube-system/cilium-g95b9" Jan 30 13:11:48.356964 kubelet[2393]: I0130 13:11:48.356328 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-host-proc-sys-net\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " pod="kube-system/cilium-g95b9" Jan 30 13:11:48.356964 kubelet[2393]: I0130 13:11:48.356366 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-host-proc-sys-kernel\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " pod="kube-system/cilium-g95b9" Jan 30 13:11:48.356964 kubelet[2393]: I0130 13:11:48.356402 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-hostproc\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " pod="kube-system/cilium-g95b9" Jan 30 13:11:48.356964 kubelet[2393]: I0130 13:11:48.356434 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cilium-cgroup\") pod \"cilium-g95b9\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " pod="kube-system/cilium-g95b9" Jan 30 13:11:48.357271 kubelet[2393]: I0130 13:11:48.356498 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5479001c-34d5-4774-823a-06873678c71c-lib-modules\") pod \"kube-proxy-srhrn\" (UID: \"5479001c-34d5-4774-823a-06873678c71c\") " pod="kube-system/kube-proxy-srhrn" Jan 30 13:11:48.357271 kubelet[2393]: I0130 13:11:48.356536 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpnb5\" (UniqueName: \"kubernetes.io/projected/5479001c-34d5-4774-823a-06873678c71c-kube-api-access-fpnb5\") pod \"kube-proxy-srhrn\" (UID: \"5479001c-34d5-4774-823a-06873678c71c\") " pod="kube-system/kube-proxy-srhrn" Jan 30 13:11:48.366844 systemd[1]: Created slice kubepods-besteffort-pod5479001c_34d5_4774_823a_06873678c71c.slice - libcontainer container kubepods-besteffort-pod5479001c_34d5_4774_823a_06873678c71c.slice. 
Jan 30 13:11:48.663516 containerd[1927]: time="2025-01-30T13:11:48.662520079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g95b9,Uid:fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe,Namespace:kube-system,Attempt:0,}" Jan 30 13:11:48.681394 containerd[1927]: time="2025-01-30T13:11:48.681323672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-srhrn,Uid:5479001c-34d5-4774-823a-06873678c71c,Namespace:kube-system,Attempt:0,}" Jan 30 13:11:49.227271 containerd[1927]: time="2025-01-30T13:11:49.227189192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:11:49.230347 containerd[1927]: time="2025-01-30T13:11:49.230272105Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 30 13:11:49.231717 containerd[1927]: time="2025-01-30T13:11:49.231665413Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:11:49.235203 containerd[1927]: time="2025-01-30T13:11:49.233212002Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:11:49.237323 containerd[1927]: time="2025-01-30T13:11:49.237262418Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:11:49.247016 containerd[1927]: time="2025-01-30T13:11:49.246943661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 
13:11:49.249225 containerd[1927]: time="2025-01-30T13:11:49.249149223Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.706019ms" Jan 30 13:11:49.253751 containerd[1927]: time="2025-01-30T13:11:49.253673204Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.029883ms" Jan 30 13:11:49.318980 kubelet[2393]: E0130 13:11:49.318902 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:11:49.421967 containerd[1927]: time="2025-01-30T13:11:49.420548513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:11:49.421967 containerd[1927]: time="2025-01-30T13:11:49.421563752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:11:49.421967 containerd[1927]: time="2025-01-30T13:11:49.421608223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:11:49.422757 containerd[1927]: time="2025-01-30T13:11:49.422481167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:11:49.431186 containerd[1927]: time="2025-01-30T13:11:49.430774275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:11:49.431186 containerd[1927]: time="2025-01-30T13:11:49.430894383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:11:49.431186 containerd[1927]: time="2025-01-30T13:11:49.430930942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:11:49.431913 containerd[1927]: time="2025-01-30T13:11:49.431522813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:11:49.484886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount335321146.mount: Deactivated successfully. Jan 30 13:11:49.555953 systemd[1]: Started cri-containerd-2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0.scope - libcontainer container 2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0. Jan 30 13:11:49.570077 systemd[1]: Started cri-containerd-5cc47627ff10c9346eabe3833b624f72b37c848fee4c240fa79ba4ed3955aabf.scope - libcontainer container 5cc47627ff10c9346eabe3833b624f72b37c848fee4c240fa79ba4ed3955aabf. 
Jan 30 13:11:49.629355 containerd[1927]: time="2025-01-30T13:11:49.629234021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g95b9,Uid:fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\"" Jan 30 13:11:49.637213 containerd[1927]: time="2025-01-30T13:11:49.636934934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-srhrn,Uid:5479001c-34d5-4774-823a-06873678c71c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cc47627ff10c9346eabe3833b624f72b37c848fee4c240fa79ba4ed3955aabf\"" Jan 30 13:11:49.637213 containerd[1927]: time="2025-01-30T13:11:49.636941081Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:11:50.320050 kubelet[2393]: E0130 13:11:50.319988 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:11:51.320480 kubelet[2393]: E0130 13:11:51.320426 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:11:52.321106 kubelet[2393]: E0130 13:11:52.320944 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:11:53.322091 kubelet[2393]: E0130 13:11:53.322026 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:11:54.323029 kubelet[2393]: E0130 13:11:54.322964 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:11:55.248510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4167512710.mount: Deactivated successfully. 
Jan 30 13:11:55.323460 kubelet[2393]: E0130 13:11:55.323371 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:11:56.324493 kubelet[2393]: E0130 13:11:56.324275 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:11:57.325316 kubelet[2393]: E0130 13:11:57.325214 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:11:57.586473 containerd[1927]: time="2025-01-30T13:11:57.586139513Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:11:57.588104 containerd[1927]: time="2025-01-30T13:11:57.588030170Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 13:11:57.592212 containerd[1927]: time="2025-01-30T13:11:57.591607922Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:11:57.595424 containerd[1927]: time="2025-01-30T13:11:57.594920713Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.957689892s" Jan 30 13:11:57.595424 containerd[1927]: time="2025-01-30T13:11:57.594980527Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 13:11:57.599793 containerd[1927]: time="2025-01-30T13:11:57.599734375Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:11:57.611194 containerd[1927]: time="2025-01-30T13:11:57.610681410Z" level=info msg="CreateContainer within sandbox \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:11:57.638477 containerd[1927]: time="2025-01-30T13:11:57.638258868Z" level=info msg="CreateContainer within sandbox \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9\"" Jan 30 13:11:57.641190 containerd[1927]: time="2025-01-30T13:11:57.640312026Z" level=info msg="StartContainer for \"cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9\"" Jan 30 13:11:57.700617 systemd[1]: Started cri-containerd-cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9.scope - libcontainer container cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9. Jan 30 13:11:57.748860 containerd[1927]: time="2025-01-30T13:11:57.748100298Z" level=info msg="StartContainer for \"cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9\" returns successfully" Jan 30 13:11:57.764307 systemd[1]: cri-containerd-cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9.scope: Deactivated successfully. 
Jan 30 13:11:58.326090 kubelet[2393]: E0130 13:11:58.326044 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:11:58.630291 systemd[1]: run-containerd-runc-k8s.io-cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9-runc.P5p4ad.mount: Deactivated successfully. Jan 30 13:11:58.630463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9-rootfs.mount: Deactivated successfully. Jan 30 13:11:59.239199 containerd[1927]: time="2025-01-30T13:11:59.238948948Z" level=info msg="shim disconnected" id=cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9 namespace=k8s.io Jan 30 13:11:59.239199 containerd[1927]: time="2025-01-30T13:11:59.239033759Z" level=warning msg="cleaning up after shim disconnected" id=cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9 namespace=k8s.io Jan 30 13:11:59.239199 containerd[1927]: time="2025-01-30T13:11:59.239054949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:11:59.326374 kubelet[2393]: E0130 13:11:59.326230 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:11:59.590084 containerd[1927]: time="2025-01-30T13:11:59.589605071Z" level=info msg="CreateContainer within sandbox \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:11:59.624089 containerd[1927]: time="2025-01-30T13:11:59.623749763Z" level=info msg="CreateContainer within sandbox \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa\"" Jan 30 13:11:59.625217 containerd[1927]: time="2025-01-30T13:11:59.624758939Z" level=info 
msg="StartContainer for \"22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa\"" Jan 30 13:11:59.703034 systemd[1]: Started cri-containerd-22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa.scope - libcontainer container 22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa. Jan 30 13:11:59.782624 containerd[1927]: time="2025-01-30T13:11:59.782549500Z" level=info msg="StartContainer for \"22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa\" returns successfully" Jan 30 13:11:59.800965 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:11:59.801704 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:11:59.801832 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:11:59.816553 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:11:59.817010 systemd[1]: cri-containerd-22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa.scope: Deactivated successfully. Jan 30 13:11:59.868265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:11:59.874490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa-rootfs.mount: Deactivated successfully. Jan 30 13:11:59.929799 containerd[1927]: time="2025-01-30T13:11:59.929709877Z" level=info msg="shim disconnected" id=22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa namespace=k8s.io Jan 30 13:11:59.930181 containerd[1927]: time="2025-01-30T13:11:59.930121815Z" level=warning msg="cleaning up after shim disconnected" id=22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa namespace=k8s.io Jan 30 13:11:59.930319 containerd[1927]: time="2025-01-30T13:11:59.930290775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:12:00.254499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3678662522.mount: Deactivated successfully. 
Jan 30 13:12:00.326970 kubelet[2393]: E0130 13:12:00.326864 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:00.597766 containerd[1927]: time="2025-01-30T13:12:00.597466133Z" level=info msg="CreateContainer within sandbox \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:12:00.639132 containerd[1927]: time="2025-01-30T13:12:00.638953166Z" level=info msg="CreateContainer within sandbox \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b\"" Jan 30 13:12:00.642739 containerd[1927]: time="2025-01-30T13:12:00.642557596Z" level=info msg="StartContainer for \"593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b\"" Jan 30 13:12:00.653977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount129537579.mount: Deactivated successfully. Jan 30 13:12:00.741716 systemd[1]: Started cri-containerd-593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b.scope - libcontainer container 593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b. Jan 30 13:12:00.822129 containerd[1927]: time="2025-01-30T13:12:00.821769670Z" level=info msg="StartContainer for \"593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b\" returns successfully" Jan 30 13:12:00.831265 systemd[1]: cri-containerd-593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b.scope: Deactivated successfully. Jan 30 13:12:00.900273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b-rootfs.mount: Deactivated successfully. 
Jan 30 13:12:01.012797 containerd[1927]: time="2025-01-30T13:12:01.012700585Z" level=info msg="shim disconnected" id=593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b namespace=k8s.io
Jan 30 13:12:01.012797 containerd[1927]: time="2025-01-30T13:12:01.012774170Z" level=warning msg="cleaning up after shim disconnected" id=593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b namespace=k8s.io
Jan 30 13:12:01.012797 containerd[1927]: time="2025-01-30T13:12:01.012793836Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:12:01.014612 containerd[1927]: time="2025-01-30T13:12:01.013521435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:12:01.015677 containerd[1927]: time="2025-01-30T13:12:01.015602892Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662712"
Jan 30 13:12:01.017855 containerd[1927]: time="2025-01-30T13:12:01.017790865Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:12:01.024649 containerd[1927]: time="2025-01-30T13:12:01.024588734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:12:01.029591 containerd[1927]: time="2025-01-30T13:12:01.029476737Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 3.429678622s"
Jan 30 13:12:01.029845 containerd[1927]: time="2025-01-30T13:12:01.029808439Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\""
Jan 30 13:12:01.035553 containerd[1927]: time="2025-01-30T13:12:01.035504265Z" level=info msg="CreateContainer within sandbox \"5cc47627ff10c9346eabe3833b624f72b37c848fee4c240fa79ba4ed3955aabf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 13:12:01.040607 containerd[1927]: time="2025-01-30T13:12:01.040524034Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:12:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 13:12:01.068701 containerd[1927]: time="2025-01-30T13:12:01.068623129Z" level=info msg="CreateContainer within sandbox \"5cc47627ff10c9346eabe3833b624f72b37c848fee4c240fa79ba4ed3955aabf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e540134a8e158b6c799d19407bebfac8d3bbb0e08c40ec439d0cdf071689204d\""
Jan 30 13:12:01.069737 containerd[1927]: time="2025-01-30T13:12:01.069676319Z" level=info msg="StartContainer for \"e540134a8e158b6c799d19407bebfac8d3bbb0e08c40ec439d0cdf071689204d\""
Jan 30 13:12:01.112490 systemd[1]: Started cri-containerd-e540134a8e158b6c799d19407bebfac8d3bbb0e08c40ec439d0cdf071689204d.scope - libcontainer container e540134a8e158b6c799d19407bebfac8d3bbb0e08c40ec439d0cdf071689204d.
Jan 30 13:12:01.171475 containerd[1927]: time="2025-01-30T13:12:01.171286580Z" level=info msg="StartContainer for \"e540134a8e158b6c799d19407bebfac8d3bbb0e08c40ec439d0cdf071689204d\" returns successfully"
Jan 30 13:12:01.327091 kubelet[2393]: E0130 13:12:01.327025 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:01.609543 containerd[1927]: time="2025-01-30T13:12:01.609371572Z" level=info msg="CreateContainer within sandbox \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 13:12:01.640343 containerd[1927]: time="2025-01-30T13:12:01.640144523Z" level=info msg="CreateContainer within sandbox \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11\""
Jan 30 13:12:01.641590 containerd[1927]: time="2025-01-30T13:12:01.641316092Z" level=info msg="StartContainer for \"64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11\""
Jan 30 13:12:01.726515 systemd[1]: Started cri-containerd-64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11.scope - libcontainer container 64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11.
Jan 30 13:12:01.775340 systemd[1]: cri-containerd-64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11.scope: Deactivated successfully.
Jan 30 13:12:01.777335 containerd[1927]: time="2025-01-30T13:12:01.777146322Z" level=info msg="StartContainer for \"64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11\" returns successfully"
Jan 30 13:12:01.809389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11-rootfs.mount: Deactivated successfully.
Jan 30 13:12:01.815469 containerd[1927]: time="2025-01-30T13:12:01.815346713Z" level=info msg="shim disconnected" id=64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11 namespace=k8s.io
Jan 30 13:12:01.815741 containerd[1927]: time="2025-01-30T13:12:01.815475622Z" level=warning msg="cleaning up after shim disconnected" id=64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11 namespace=k8s.io
Jan 30 13:12:01.815741 containerd[1927]: time="2025-01-30T13:12:01.815497713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:12:01.836438 containerd[1927]: time="2025-01-30T13:12:01.836313163Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:12:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 13:12:02.328247 kubelet[2393]: E0130 13:12:02.328171 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:02.620481 containerd[1927]: time="2025-01-30T13:12:02.619703264Z" level=info msg="CreateContainer within sandbox \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 13:12:02.638524 kubelet[2393]: I0130 13:12:02.638239 2393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-srhrn" podStartSLOduration=5.245237983 podStartE2EDuration="16.638189215s" podCreationTimestamp="2025-01-30 13:11:46 +0000 UTC" firstStartedPulling="2025-01-30 13:11:49.639473386 +0000 UTC m=+4.465421327" lastFinishedPulling="2025-01-30 13:12:01.032424618 +0000 UTC m=+15.858372559" observedRunningTime="2025-01-30 13:12:01.644065538 +0000 UTC m=+16.470013503" watchObservedRunningTime="2025-01-30 13:12:02.638189215 +0000 UTC m=+17.464137192"
Jan 30 13:12:02.642524 containerd[1927]: time="2025-01-30T13:12:02.642407883Z" level=info msg="CreateContainer within sandbox \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\""
Jan 30 13:12:02.644518 containerd[1927]: time="2025-01-30T13:12:02.643070722Z" level=info msg="StartContainer for \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\""
Jan 30 13:12:02.702490 systemd[1]: Started cri-containerd-c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913.scope - libcontainer container c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913.
Jan 30 13:12:02.753281 containerd[1927]: time="2025-01-30T13:12:02.752915574Z" level=info msg="StartContainer for \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\" returns successfully"
Jan 30 13:12:02.881736 kubelet[2393]: I0130 13:12:02.880642 2393 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 30 13:12:03.328794 kubelet[2393]: E0130 13:12:03.328597 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:03.620210 kernel: Initializing XFRM netlink socket
Jan 30 13:12:04.329027 kubelet[2393]: E0130 13:12:04.328958 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:04.484520 kubelet[2393]: I0130 13:12:04.484359 2393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g95b9" podStartSLOduration=10.521350718 podStartE2EDuration="18.484334163s" podCreationTimestamp="2025-01-30 13:11:46 +0000 UTC" firstStartedPulling="2025-01-30 13:11:49.634603765 +0000 UTC m=+4.460551706" lastFinishedPulling="2025-01-30 13:11:57.597587222 +0000 UTC m=+12.423535151" observedRunningTime="2025-01-30 13:12:03.654737602 +0000 UTC m=+18.480685579" watchObservedRunningTime="2025-01-30 13:12:04.484334163 +0000 UTC m=+19.310282116"
Jan 30 13:12:04.484854 kubelet[2393]: I0130 13:12:04.484814 2393 topology_manager.go:215] "Topology Admit Handler" podUID="1a8b9894-7dc6-486e-bbca-0c3012ab0075" podNamespace="default" podName="nginx-deployment-85f456d6dd-jtgld"
Jan 30 13:12:04.494861 systemd[1]: Created slice kubepods-besteffort-pod1a8b9894_7dc6_486e_bbca_0c3012ab0075.slice - libcontainer container kubepods-besteffort-pod1a8b9894_7dc6_486e_bbca_0c3012ab0075.slice.
Jan 30 13:12:04.562635 kubelet[2393]: I0130 13:12:04.562558 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knhrt\" (UniqueName: \"kubernetes.io/projected/1a8b9894-7dc6-486e-bbca-0c3012ab0075-kube-api-access-knhrt\") pod \"nginx-deployment-85f456d6dd-jtgld\" (UID: \"1a8b9894-7dc6-486e-bbca-0c3012ab0075\") " pod="default/nginx-deployment-85f456d6dd-jtgld"
Jan 30 13:12:04.801241 containerd[1927]: time="2025-01-30T13:12:04.800679562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-jtgld,Uid:1a8b9894-7dc6-486e-bbca-0c3012ab0075,Namespace:default,Attempt:0,}"
Jan 30 13:12:05.329744 kubelet[2393]: E0130 13:12:05.329681 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:05.434909 systemd-networkd[1852]: cilium_host: Link UP
Jan 30 13:12:05.435622 systemd-networkd[1852]: cilium_net: Link UP
Jan 30 13:12:05.435969 systemd-networkd[1852]: cilium_net: Gained carrier
Jan 30 13:12:05.440631 (udev-worker)[2846]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:12:05.440699 systemd-networkd[1852]: cilium_host: Gained carrier
Jan 30 13:12:05.443499 (udev-worker)[3098]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:12:05.599770 (udev-worker)[3102]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:12:05.611310 systemd-networkd[1852]: cilium_vxlan: Link UP
Jan 30 13:12:05.611323 systemd-networkd[1852]: cilium_vxlan: Gained carrier
Jan 30 13:12:05.939457 systemd-networkd[1852]: cilium_net: Gained IPv6LL
Jan 30 13:12:06.085474 kernel: NET: Registered PF_ALG protocol family
Jan 30 13:12:06.203466 systemd-networkd[1852]: cilium_host: Gained IPv6LL
Jan 30 13:12:06.315722 kubelet[2393]: E0130 13:12:06.315659 2393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:06.329930 kubelet[2393]: E0130 13:12:06.329862 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:07.330374 kubelet[2393]: E0130 13:12:07.330300 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:07.357251 (udev-worker)[2847]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:12:07.361374 systemd-networkd[1852]: lxc_health: Link UP
Jan 30 13:12:07.373415 systemd-networkd[1852]: lxc_health: Gained carrier
Jan 30 13:12:07.419850 systemd-networkd[1852]: cilium_vxlan: Gained IPv6LL
Jan 30 13:12:07.866693 systemd-networkd[1852]: lxce61142d33aa5: Link UP
Jan 30 13:12:07.876478 kernel: eth0: renamed from tmp749c7
Jan 30 13:12:07.885838 systemd-networkd[1852]: lxce61142d33aa5: Gained carrier
Jan 30 13:12:07.938930 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 30 13:12:08.331070 kubelet[2393]: E0130 13:12:08.330909 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:08.635910 systemd-networkd[1852]: lxc_health: Gained IPv6LL
Jan 30 13:12:09.019931 systemd-networkd[1852]: lxce61142d33aa5: Gained IPv6LL
Jan 30 13:12:09.331560 kubelet[2393]: E0130 13:12:09.331333 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:10.333329 kubelet[2393]: E0130 13:12:10.333252 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:11.285394 ntpd[1913]: Listen normally on 7 cilium_host 192.168.1.75:123
Jan 30 13:12:11.287014 ntpd[1913]: 30 Jan 13:12:11 ntpd[1913]: Listen normally on 7 cilium_host 192.168.1.75:123
Jan 30 13:12:11.287014 ntpd[1913]: 30 Jan 13:12:11 ntpd[1913]: Listen normally on 8 cilium_net [fe80::78a8:5eff:fe7e:38df%3]:123
Jan 30 13:12:11.287014 ntpd[1913]: 30 Jan 13:12:11 ntpd[1913]: Listen normally on 9 cilium_host [fe80::3041:91ff:fea6:c877%4]:123
Jan 30 13:12:11.287014 ntpd[1913]: 30 Jan 13:12:11 ntpd[1913]: Listen normally on 10 cilium_vxlan [fe80::a46a:5fff:fe62:88e7%5]:123
Jan 30 13:12:11.287014 ntpd[1913]: 30 Jan 13:12:11 ntpd[1913]: Listen normally on 11 lxc_health [fe80::4c0b:9aff:fe87:2848%7]:123
Jan 30 13:12:11.287014 ntpd[1913]: 30 Jan 13:12:11 ntpd[1913]: Listen normally on 12 lxce61142d33aa5 [fe80::10db:43ff:fead:799a%9]:123
Jan 30 13:12:11.285515 ntpd[1913]: Listen normally on 8 cilium_net [fe80::78a8:5eff:fe7e:38df%3]:123
Jan 30 13:12:11.285593 ntpd[1913]: Listen normally on 9 cilium_host [fe80::3041:91ff:fea6:c877%4]:123
Jan 30 13:12:11.285684 ntpd[1913]: Listen normally on 10 cilium_vxlan [fe80::a46a:5fff:fe62:88e7%5]:123
Jan 30 13:12:11.285766 ntpd[1913]: Listen normally on 11 lxc_health [fe80::4c0b:9aff:fe87:2848%7]:123
Jan 30 13:12:11.285833 ntpd[1913]: Listen normally on 12 lxce61142d33aa5 [fe80::10db:43ff:fead:799a%9]:123
Jan 30 13:12:11.334286 kubelet[2393]: E0130 13:12:11.334202 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:12.335195 kubelet[2393]: E0130 13:12:12.335123 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:13.335666 kubelet[2393]: E0130 13:12:13.335599 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:14.336264 kubelet[2393]: E0130 13:12:14.336196 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:15.336867 kubelet[2393]: E0130 13:12:15.336794 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:15.880213 containerd[1927]: time="2025-01-30T13:12:15.880003925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:12:15.880213 containerd[1927]: time="2025-01-30T13:12:15.880123889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:12:15.881128 containerd[1927]: time="2025-01-30T13:12:15.880867277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:12:15.881248 containerd[1927]: time="2025-01-30T13:12:15.881054390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:12:15.919513 systemd[1]: Started cri-containerd-749c72a83ae66ac6fde316ee893742a0634d38b5edfbc3315aca335b32f66099.scope - libcontainer container 749c72a83ae66ac6fde316ee893742a0634d38b5edfbc3315aca335b32f66099.
Jan 30 13:12:15.980543 containerd[1927]: time="2025-01-30T13:12:15.980397990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-jtgld,Uid:1a8b9894-7dc6-486e-bbca-0c3012ab0075,Namespace:default,Attempt:0,} returns sandbox id \"749c72a83ae66ac6fde316ee893742a0634d38b5edfbc3315aca335b32f66099\""
Jan 30 13:12:15.983629 containerd[1927]: time="2025-01-30T13:12:15.983464334Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 30 13:12:16.338000 kubelet[2393]: E0130 13:12:16.337444 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:17.113615 kubelet[2393]: I0130 13:12:17.113350 2393 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 13:12:17.338455 kubelet[2393]: E0130 13:12:17.338245 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:18.338746 kubelet[2393]: E0130 13:12:18.338678 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:18.966699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441465861.mount: Deactivated successfully.
Jan 30 13:12:19.339281 kubelet[2393]: E0130 13:12:19.338917 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:20.302025 containerd[1927]: time="2025-01-30T13:12:20.300147888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:12:20.302922 containerd[1927]: time="2025-01-30T13:12:20.302843847Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67680490"
Jan 30 13:12:20.304327 containerd[1927]: time="2025-01-30T13:12:20.304271961Z" level=info msg="ImageCreate event name:\"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:12:20.309674 containerd[1927]: time="2025-01-30T13:12:20.309617560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:12:20.311776 containerd[1927]: time="2025-01-30T13:12:20.311683192Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 4.32815848s"
Jan 30 13:12:20.311993 containerd[1927]: time="2025-01-30T13:12:20.311959751Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\""
Jan 30 13:12:20.316112 containerd[1927]: time="2025-01-30T13:12:20.316040350Z" level=info msg="CreateContainer within sandbox \"749c72a83ae66ac6fde316ee893742a0634d38b5edfbc3315aca335b32f66099\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 30 13:12:20.336804 containerd[1927]: time="2025-01-30T13:12:20.336721357Z" level=info msg="CreateContainer within sandbox \"749c72a83ae66ac6fde316ee893742a0634d38b5edfbc3315aca335b32f66099\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2a1a6412a7b7a64100e049913a250b2e4d2cff013120f93f426bdd231824cb93\""
Jan 30 13:12:20.337675 containerd[1927]: time="2025-01-30T13:12:20.337624245Z" level=info msg="StartContainer for \"2a1a6412a7b7a64100e049913a250b2e4d2cff013120f93f426bdd231824cb93\""
Jan 30 13:12:20.339658 kubelet[2393]: E0130 13:12:20.339558 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:20.393475 systemd[1]: Started cri-containerd-2a1a6412a7b7a64100e049913a250b2e4d2cff013120f93f426bdd231824cb93.scope - libcontainer container 2a1a6412a7b7a64100e049913a250b2e4d2cff013120f93f426bdd231824cb93.
Jan 30 13:12:20.435034 containerd[1927]: time="2025-01-30T13:12:20.434956935Z" level=info msg="StartContainer for \"2a1a6412a7b7a64100e049913a250b2e4d2cff013120f93f426bdd231824cb93\" returns successfully"
Jan 30 13:12:20.696839 kubelet[2393]: I0130 13:12:20.696739 2393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-jtgld" podStartSLOduration=12.366299611 podStartE2EDuration="16.696717524s" podCreationTimestamp="2025-01-30 13:12:04 +0000 UTC" firstStartedPulling="2025-01-30 13:12:15.982950718 +0000 UTC m=+30.808898671" lastFinishedPulling="2025-01-30 13:12:20.313368655 +0000 UTC m=+35.139316584" observedRunningTime="2025-01-30 13:12:20.696280926 +0000 UTC m=+35.522228891" watchObservedRunningTime="2025-01-30 13:12:20.696717524 +0000 UTC m=+35.522665465"
Jan 30 13:12:21.340548 kubelet[2393]: E0130 13:12:21.340487 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:22.341639 kubelet[2393]: E0130 13:12:22.341555 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:22.655216 update_engine[1921]: I20250130 13:12:22.654765 1921 update_attempter.cc:509] Updating boot flags...
Jan 30 13:12:22.734344 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3599)
Jan 30 13:12:22.985596 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3603)
Jan 30 13:12:23.233477 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3603)
Jan 30 13:12:23.342486 kubelet[2393]: E0130 13:12:23.342397 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:24.343400 kubelet[2393]: E0130 13:12:24.343335 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:24.893647 kubelet[2393]: I0130 13:12:24.893540 2393 topology_manager.go:215] "Topology Admit Handler" podUID="1180118f-0aff-4cea-8785-a2ff608f438a" podNamespace="default" podName="nfs-server-provisioner-0"
Jan 30 13:12:24.904932 systemd[1]: Created slice kubepods-besteffort-pod1180118f_0aff_4cea_8785_a2ff608f438a.slice - libcontainer container kubepods-besteffort-pod1180118f_0aff_4cea_8785_a2ff608f438a.slice.
Jan 30 13:12:24.998800 kubelet[2393]: I0130 13:12:24.998724 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/1180118f-0aff-4cea-8785-a2ff608f438a-data\") pod \"nfs-server-provisioner-0\" (UID: \"1180118f-0aff-4cea-8785-a2ff608f438a\") " pod="default/nfs-server-provisioner-0"
Jan 30 13:12:24.998800 kubelet[2393]: I0130 13:12:24.998798 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkxgm\" (UniqueName: \"kubernetes.io/projected/1180118f-0aff-4cea-8785-a2ff608f438a-kube-api-access-tkxgm\") pod \"nfs-server-provisioner-0\" (UID: \"1180118f-0aff-4cea-8785-a2ff608f438a\") " pod="default/nfs-server-provisioner-0"
Jan 30 13:12:25.211531 containerd[1927]: time="2025-01-30T13:12:25.211353089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1180118f-0aff-4cea-8785-a2ff608f438a,Namespace:default,Attempt:0,}"
Jan 30 13:12:25.256634 systemd-networkd[1852]: lxcf18206a0957d: Link UP
Jan 30 13:12:25.266023 (udev-worker)[3599]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:12:25.270198 kernel: eth0: renamed from tmp14e04
Jan 30 13:12:25.278405 systemd-networkd[1852]: lxcf18206a0957d: Gained carrier
Jan 30 13:12:25.343823 kubelet[2393]: E0130 13:12:25.343707 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:25.601136 containerd[1927]: time="2025-01-30T13:12:25.600756368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:12:25.601136 containerd[1927]: time="2025-01-30T13:12:25.600865239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:12:25.601136 containerd[1927]: time="2025-01-30T13:12:25.600910105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:12:25.602174 containerd[1927]: time="2025-01-30T13:12:25.601499900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:12:25.643503 systemd[1]: Started cri-containerd-14e04a56e06a7b2a049eca4af920e207cff2d81e350436da1ceafcc216116d03.scope - libcontainer container 14e04a56e06a7b2a049eca4af920e207cff2d81e350436da1ceafcc216116d03.
Jan 30 13:12:25.706913 containerd[1927]: time="2025-01-30T13:12:25.706853199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1180118f-0aff-4cea-8785-a2ff608f438a,Namespace:default,Attempt:0,} returns sandbox id \"14e04a56e06a7b2a049eca4af920e207cff2d81e350436da1ceafcc216116d03\""
Jan 30 13:12:25.710344 containerd[1927]: time="2025-01-30T13:12:25.709921164Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 30 13:12:26.316022 kubelet[2393]: E0130 13:12:26.315955 2393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:26.344501 kubelet[2393]: E0130 13:12:26.344431 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:26.878587 systemd-networkd[1852]: lxcf18206a0957d: Gained IPv6LL
Jan 30 13:12:27.345762 kubelet[2393]: E0130 13:12:27.345431 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:28.339440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2323657466.mount: Deactivated successfully.
Jan 30 13:12:28.346873 kubelet[2393]: E0130 13:12:28.346794 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:29.285379 ntpd[1913]: Listen normally on 13 lxcf18206a0957d [fe80::1894:91ff:fee7:308f%11]:123
Jan 30 13:12:29.286037 ntpd[1913]: 30 Jan 13:12:29 ntpd[1913]: Listen normally on 13 lxcf18206a0957d [fe80::1894:91ff:fee7:308f%11]:123
Jan 30 13:12:29.347961 kubelet[2393]: E0130 13:12:29.347428 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:30.347723 kubelet[2393]: E0130 13:12:30.347639 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:31.264698 containerd[1927]: time="2025-01-30T13:12:31.264626576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:12:31.266587 containerd[1927]: time="2025-01-30T13:12:31.266514759Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623"
Jan 30 13:12:31.267673 containerd[1927]: time="2025-01-30T13:12:31.267582501Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:12:31.272857 containerd[1927]: time="2025-01-30T13:12:31.272756907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:12:31.275248 containerd[1927]: time="2025-01-30T13:12:31.275021442Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.565029383s"
Jan 30 13:12:31.275248 containerd[1927]: time="2025-01-30T13:12:31.275078351Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Jan 30 13:12:31.279739 containerd[1927]: time="2025-01-30T13:12:31.279686446Z" level=info msg="CreateContainer within sandbox \"14e04a56e06a7b2a049eca4af920e207cff2d81e350436da1ceafcc216116d03\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 30 13:12:31.309816 containerd[1927]: time="2025-01-30T13:12:31.309737932Z" level=info msg="CreateContainer within sandbox \"14e04a56e06a7b2a049eca4af920e207cff2d81e350436da1ceafcc216116d03\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f751617954726b958d2ab7ed928424137e0a9af679d6ee05de2f26c99d197617\""
Jan 30 13:12:31.312239 containerd[1927]: time="2025-01-30T13:12:31.310982294Z" level=info msg="StartContainer for \"f751617954726b958d2ab7ed928424137e0a9af679d6ee05de2f26c99d197617\""
Jan 30 13:12:31.348190 kubelet[2393]: E0130 13:12:31.348102 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:31.366539 systemd[1]: Started cri-containerd-f751617954726b958d2ab7ed928424137e0a9af679d6ee05de2f26c99d197617.scope - libcontainer container f751617954726b958d2ab7ed928424137e0a9af679d6ee05de2f26c99d197617.
Jan 30 13:12:31.413525 containerd[1927]: time="2025-01-30T13:12:31.412988428Z" level=info msg="StartContainer for \"f751617954726b958d2ab7ed928424137e0a9af679d6ee05de2f26c99d197617\" returns successfully"
Jan 30 13:12:31.755135 kubelet[2393]: I0130 13:12:31.755038 2393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.18762753 podStartE2EDuration="7.755015515s" podCreationTimestamp="2025-01-30 13:12:24 +0000 UTC" firstStartedPulling="2025-01-30 13:12:25.709358311 +0000 UTC m=+40.535306264" lastFinishedPulling="2025-01-30 13:12:31.276746296 +0000 UTC m=+46.102694249" observedRunningTime="2025-01-30 13:12:31.754407231 +0000 UTC m=+46.580355160" watchObservedRunningTime="2025-01-30 13:12:31.755015515 +0000 UTC m=+46.580963492"
Jan 30 13:12:32.348726 kubelet[2393]: E0130 13:12:32.348661 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:33.349295 kubelet[2393]: E0130 13:12:33.349227 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:34.349804 kubelet[2393]: E0130 13:12:34.349740 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:35.350612 kubelet[2393]: E0130 13:12:35.350544 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:36.351510 kubelet[2393]: E0130 13:12:36.351440 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:37.352022 kubelet[2393]: E0130 13:12:37.351957 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:38.352606 kubelet[2393]: E0130 13:12:38.352539 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:39.353003 kubelet[2393]: E0130 13:12:39.352939 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:40.353765 kubelet[2393]: E0130 13:12:40.353701 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:40.929869 kubelet[2393]: I0130 13:12:40.929799 2393 topology_manager.go:215] "Topology Admit Handler" podUID="316bd2d7-4630-41b1-a527-67dc72480d56" podNamespace="default" podName="test-pod-1"
Jan 30 13:12:40.942839 systemd[1]: Created slice kubepods-besteffort-pod316bd2d7_4630_41b1_a527_67dc72480d56.slice - libcontainer container kubepods-besteffort-pod316bd2d7_4630_41b1_a527_67dc72480d56.slice.
Jan 30 13:12:41.091761 kubelet[2393]: I0130 13:12:41.091710 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7d1152e1-4148-45e8-bca7-4b5fd004b0d4\" (UniqueName: \"kubernetes.io/nfs/316bd2d7-4630-41b1-a527-67dc72480d56-pvc-7d1152e1-4148-45e8-bca7-4b5fd004b0d4\") pod \"test-pod-1\" (UID: \"316bd2d7-4630-41b1-a527-67dc72480d56\") " pod="default/test-pod-1"
Jan 30 13:12:41.092115 kubelet[2393]: I0130 13:12:41.091996 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djz7t\" (UniqueName: \"kubernetes.io/projected/316bd2d7-4630-41b1-a527-67dc72480d56-kube-api-access-djz7t\") pod \"test-pod-1\" (UID: \"316bd2d7-4630-41b1-a527-67dc72480d56\") " pod="default/test-pod-1"
Jan 30 13:12:41.227230 kernel: FS-Cache: Loaded
Jan 30 13:12:41.271040 kernel: RPC: Registered named UNIX socket transport module.
Jan 30 13:12:41.271214 kernel: RPC: Registered udp transport module.
Jan 30 13:12:41.271335 kernel: RPC: Registered tcp transport module.
Jan 30 13:12:41.271375 kernel: RPC: Registered tcp-with-tls transport module.
Jan 30 13:12:41.273535 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 30 13:12:41.354747 kubelet[2393]: E0130 13:12:41.354677 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:41.580622 kernel: NFS: Registering the id_resolver key type
Jan 30 13:12:41.580771 kernel: Key type id_resolver registered
Jan 30 13:12:41.580826 kernel: Key type id_legacy registered
Jan 30 13:12:41.620188 nfsidmap[4035]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 30 13:12:41.626530 nfsidmap[4036]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 30 13:12:41.848475 containerd[1927]: time="2025-01-30T13:12:41.848407941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:316bd2d7-4630-41b1-a527-67dc72480d56,Namespace:default,Attempt:0,}"
Jan 30 13:12:41.896419 (udev-worker)[4026]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:12:41.897684 (udev-worker)[4030]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:12:41.897952 systemd-networkd[1852]: lxcd5a55881b73b: Link UP
Jan 30 13:12:41.906572 kernel: eth0: renamed from tmp23759
Jan 30 13:12:41.914996 systemd-networkd[1852]: lxcd5a55881b73b: Gained carrier
Jan 30 13:12:42.224791 containerd[1927]: time="2025-01-30T13:12:42.224100559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:12:42.224791 containerd[1927]: time="2025-01-30T13:12:42.224250139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:12:42.225246 containerd[1927]: time="2025-01-30T13:12:42.224884927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:12:42.225246 containerd[1927]: time="2025-01-30T13:12:42.225068395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:12:42.266476 systemd[1]: Started cri-containerd-23759dd4d4ef61c580e8b48b83dd7e0ab6d0b15f5dfeeeb5e958eb8ae8ecb5bf.scope - libcontainer container 23759dd4d4ef61c580e8b48b83dd7e0ab6d0b15f5dfeeeb5e958eb8ae8ecb5bf.
Jan 30 13:12:42.328924 containerd[1927]: time="2025-01-30T13:12:42.328768904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:316bd2d7-4630-41b1-a527-67dc72480d56,Namespace:default,Attempt:0,} returns sandbox id \"23759dd4d4ef61c580e8b48b83dd7e0ab6d0b15f5dfeeeb5e958eb8ae8ecb5bf\""
Jan 30 13:12:42.332805 containerd[1927]: time="2025-01-30T13:12:42.332720192Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 30 13:12:42.355767 kubelet[2393]: E0130 13:12:42.355718 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:12:42.614771 containerd[1927]: time="2025-01-30T13:12:42.614472909Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:12:42.615920 containerd[1927]: time="2025-01-30T13:12:42.615844389Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 30 13:12:42.621861 containerd[1927]: time="2025-01-30T13:12:42.621780357Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 288.969109ms"
Jan 30 13:12:42.621861 containerd[1927]: time="2025-01-30T13:12:42.621839433Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\""
Jan 30 13:12:42.625606 containerd[1927]: time="2025-01-30T13:12:42.625533969Z" level=info msg="CreateContainer within sandbox \"23759dd4d4ef61c580e8b48b83dd7e0ab6d0b15f5dfeeeb5e958eb8ae8ecb5bf\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 30 13:12:42.642743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1640693427.mount: Deactivated successfully.
Jan 30 13:12:42.648055 containerd[1927]: time="2025-01-30T13:12:42.647865729Z" level=info msg="CreateContainer within sandbox \"23759dd4d4ef61c580e8b48b83dd7e0ab6d0b15f5dfeeeb5e958eb8ae8ecb5bf\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d0c10d0e2d62f381a838124d118c1fa9bf503bbba3e01041048f8e688be52b99\""
Jan 30 13:12:42.649264 containerd[1927]: time="2025-01-30T13:12:42.649055193Z" level=info msg="StartContainer for \"d0c10d0e2d62f381a838124d118c1fa9bf503bbba3e01041048f8e688be52b99\""
Jan 30 13:12:42.698462 systemd[1]: Started cri-containerd-d0c10d0e2d62f381a838124d118c1fa9bf503bbba3e01041048f8e688be52b99.scope - libcontainer container d0c10d0e2d62f381a838124d118c1fa9bf503bbba3e01041048f8e688be52b99.
Jan 30 13:12:42.744734 containerd[1927]: time="2025-01-30T13:12:42.744194830Z" level=info msg="StartContainer for \"d0c10d0e2d62f381a838124d118c1fa9bf503bbba3e01041048f8e688be52b99\" returns successfully" Jan 30 13:12:42.792697 kubelet[2393]: I0130 13:12:42.792571 2393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.501497649 podStartE2EDuration="17.792544702s" podCreationTimestamp="2025-01-30 13:12:25 +0000 UTC" firstStartedPulling="2025-01-30 13:12:42.331853792 +0000 UTC m=+57.157801721" lastFinishedPulling="2025-01-30 13:12:42.622900833 +0000 UTC m=+57.448848774" observedRunningTime="2025-01-30 13:12:42.790820002 +0000 UTC m=+57.616767943" watchObservedRunningTime="2025-01-30 13:12:42.792544702 +0000 UTC m=+57.618492631" Jan 30 13:12:43.259699 systemd-networkd[1852]: lxcd5a55881b73b: Gained IPv6LL Jan 30 13:12:43.356697 kubelet[2393]: E0130 13:12:43.356629 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:44.357453 kubelet[2393]: E0130 13:12:44.357380 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:45.285510 ntpd[1913]: Listen normally on 14 lxcd5a55881b73b [fe80::e8a1:23ff:fe10:af6%13]:123 Jan 30 13:12:45.286001 ntpd[1913]: 30 Jan 13:12:45 ntpd[1913]: Listen normally on 14 lxcd5a55881b73b [fe80::e8a1:23ff:fe10:af6%13]:123 Jan 30 13:12:45.357819 kubelet[2393]: E0130 13:12:45.357759 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:46.316009 kubelet[2393]: E0130 13:12:46.315939 2393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:46.358499 kubelet[2393]: E0130 13:12:46.358443 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 30 13:12:47.358710 kubelet[2393]: E0130 13:12:47.358642 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:48.359799 kubelet[2393]: E0130 13:12:48.359726 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:49.360849 kubelet[2393]: E0130 13:12:49.360756 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:50.361865 kubelet[2393]: E0130 13:12:50.361803 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:51.362377 kubelet[2393]: E0130 13:12:51.362314 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:51.502411 containerd[1927]: time="2025-01-30T13:12:51.502294649Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:12:51.512674 containerd[1927]: time="2025-01-30T13:12:51.512625857Z" level=info msg="StopContainer for \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\" with timeout 2 (s)" Jan 30 13:12:51.513411 containerd[1927]: time="2025-01-30T13:12:51.513367769Z" level=info msg="Stop container \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\" with signal terminated" Jan 30 13:12:51.527067 systemd-networkd[1852]: lxc_health: Link DOWN Jan 30 13:12:51.527085 systemd-networkd[1852]: lxc_health: Lost carrier Jan 30 13:12:51.550073 systemd[1]: cri-containerd-c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913.scope: Deactivated successfully. 
Jan 30 13:12:51.550553 systemd[1]: cri-containerd-c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913.scope: Consumed 14.111s CPU time. Jan 30 13:12:51.587025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913-rootfs.mount: Deactivated successfully. Jan 30 13:12:51.816088 containerd[1927]: time="2025-01-30T13:12:51.815890459Z" level=info msg="shim disconnected" id=c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913 namespace=k8s.io Jan 30 13:12:51.816088 containerd[1927]: time="2025-01-30T13:12:51.815975443Z" level=warning msg="cleaning up after shim disconnected" id=c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913 namespace=k8s.io Jan 30 13:12:51.816088 containerd[1927]: time="2025-01-30T13:12:51.815998903Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:12:51.840593 containerd[1927]: time="2025-01-30T13:12:51.840530131Z" level=info msg="StopContainer for \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\" returns successfully" Jan 30 13:12:51.844258 containerd[1927]: time="2025-01-30T13:12:51.841369135Z" level=info msg="StopPodSandbox for \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\"" Jan 30 13:12:51.844258 containerd[1927]: time="2025-01-30T13:12:51.841422907Z" level=info msg="Container to stop \"22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:12:51.844258 containerd[1927]: time="2025-01-30T13:12:51.841446583Z" level=info msg="Container to stop \"cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:12:51.844258 containerd[1927]: time="2025-01-30T13:12:51.841467163Z" level=info msg="Container to stop \"593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:12:51.844258 containerd[1927]: time="2025-01-30T13:12:51.841495723Z" level=info msg="Container to stop \"64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:12:51.844258 containerd[1927]: time="2025-01-30T13:12:51.841516579Z" level=info msg="Container to stop \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:12:51.844662 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0-shm.mount: Deactivated successfully. Jan 30 13:12:51.856186 systemd[1]: cri-containerd-2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0.scope: Deactivated successfully. Jan 30 13:12:51.889740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0-rootfs.mount: Deactivated successfully. 
Jan 30 13:12:51.894084 containerd[1927]: time="2025-01-30T13:12:51.893868271Z" level=info msg="shim disconnected" id=2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0 namespace=k8s.io Jan 30 13:12:51.894084 containerd[1927]: time="2025-01-30T13:12:51.893922139Z" level=warning msg="cleaning up after shim disconnected" id=2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0 namespace=k8s.io Jan 30 13:12:51.894084 containerd[1927]: time="2025-01-30T13:12:51.893948923Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:12:51.915150 containerd[1927]: time="2025-01-30T13:12:51.914970643Z" level=info msg="TearDown network for sandbox \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\" successfully" Jan 30 13:12:51.915150 containerd[1927]: time="2025-01-30T13:12:51.915032347Z" level=info msg="StopPodSandbox for \"2fc72a75091285984f1b0562ea3457add297a0e1a1b5f5639864383ef54866e0\" returns successfully" Jan 30 13:12:52.059607 kubelet[2393]: I0130 13:12:52.058289 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-host-proc-sys-net\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.059607 kubelet[2393]: I0130 13:12:52.058374 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-host-proc-sys-kernel\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.059607 kubelet[2393]: I0130 13:12:52.058412 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-hostproc\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: 
\"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.059607 kubelet[2393]: I0130 13:12:52.058453 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-hubble-tls\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.059607 kubelet[2393]: I0130 13:12:52.058484 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cilium-run\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.059607 kubelet[2393]: I0130 13:12:52.058515 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-bpf-maps\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.060194 kubelet[2393]: I0130 13:12:52.058549 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-577rs\" (UniqueName: \"kubernetes.io/projected/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-kube-api-access-577rs\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.060194 kubelet[2393]: I0130 13:12:52.058587 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cni-path\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.060194 kubelet[2393]: I0130 13:12:52.058618 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-etc-cni-netd\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.060194 kubelet[2393]: I0130 13:12:52.058653 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-xtables-lock\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.060194 kubelet[2393]: I0130 13:12:52.058694 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-clustermesh-secrets\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.060194 kubelet[2393]: I0130 13:12:52.058736 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cilium-config-path\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.060519 kubelet[2393]: I0130 13:12:52.058774 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cilium-cgroup\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.060519 kubelet[2393]: I0130 13:12:52.058808 2393 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-lib-modules\") pod \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\" (UID: \"fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe\") " Jan 30 13:12:52.060519 kubelet[2393]: I0130 13:12:52.058901 2393 
operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:12:52.060519 kubelet[2393]: I0130 13:12:52.058960 2393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:12:52.060519 kubelet[2393]: I0130 13:12:52.058996 2393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:12:52.060786 kubelet[2393]: I0130 13:12:52.059032 2393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-hostproc" (OuterVolumeSpecName: "hostproc") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:12:52.060786 kubelet[2393]: I0130 13:12:52.059259 2393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:12:52.060786 kubelet[2393]: I0130 13:12:52.059332 2393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:12:52.060786 kubelet[2393]: I0130 13:12:52.059372 2393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:12:52.060786 kubelet[2393]: I0130 13:12:52.060415 2393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cni-path" (OuterVolumeSpecName: "cni-path") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:12:52.061049 kubelet[2393]: I0130 13:12:52.060937 2393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:12:52.063190 kubelet[2393]: I0130 13:12:52.062364 2393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:12:52.069154 kubelet[2393]: I0130 13:12:52.069020 2393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:12:52.072705 kubelet[2393]: I0130 13:12:52.072644 2393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:12:52.073019 kubelet[2393]: I0130 13:12:52.072645 2393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-kube-api-access-577rs" (OuterVolumeSpecName: "kube-api-access-577rs") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "kube-api-access-577rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:12:52.073019 kubelet[2393]: I0130 13:12:52.072962 2393 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" (UID: "fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:12:52.160192 kubelet[2393]: I0130 13:12:52.159825 2393 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-lib-modules\") on node \"172.31.25.221\" DevicePath \"\"" Jan 30 13:12:52.160192 kubelet[2393]: I0130 13:12:52.159869 2393 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-host-proc-sys-net\") on node \"172.31.25.221\" DevicePath \"\"" Jan 30 13:12:52.160192 kubelet[2393]: I0130 13:12:52.159896 2393 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-host-proc-sys-kernel\") on node \"172.31.25.221\" DevicePath \"\"" Jan 30 13:12:52.160192 kubelet[2393]: I0130 13:12:52.159918 2393 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-hostproc\") on node \"172.31.25.221\" DevicePath \"\"" Jan 30 13:12:52.160192 kubelet[2393]: I0130 13:12:52.159937 2393 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-hubble-tls\") on node \"172.31.25.221\" DevicePath \"\"" Jan 30 13:12:52.160192 kubelet[2393]: I0130 13:12:52.159955 2393 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cilium-run\") on node \"172.31.25.221\" DevicePath \"\"" Jan 30 13:12:52.160192 kubelet[2393]: I0130 13:12:52.159994 2393 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-bpf-maps\") on node \"172.31.25.221\" DevicePath \"\"" Jan 30 13:12:52.160192 kubelet[2393]: I0130 13:12:52.160013 2393 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-577rs\" (UniqueName: \"kubernetes.io/projected/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-kube-api-access-577rs\") on node \"172.31.25.221\" DevicePath \"\"" Jan 30 13:12:52.160661 kubelet[2393]: I0130 13:12:52.160032 2393 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cni-path\") on node \"172.31.25.221\" DevicePath \"\"" Jan 30 13:12:52.160661 kubelet[2393]: I0130 13:12:52.160050 2393 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-etc-cni-netd\") on node \"172.31.25.221\" DevicePath \"\"" Jan 30 13:12:52.160661 kubelet[2393]: I0130 13:12:52.160072 2393 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-xtables-lock\") on node \"172.31.25.221\" 
DevicePath \"\"" Jan 30 13:12:52.160661 kubelet[2393]: I0130 13:12:52.160090 2393 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-clustermesh-secrets\") on node \"172.31.25.221\" DevicePath \"\"" Jan 30 13:12:52.160661 kubelet[2393]: I0130 13:12:52.160109 2393 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cilium-config-path\") on node \"172.31.25.221\" DevicePath \"\"" Jan 30 13:12:52.160661 kubelet[2393]: I0130 13:12:52.160127 2393 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe-cilium-cgroup\") on node \"172.31.25.221\" DevicePath \"\"" Jan 30 13:12:52.362692 kubelet[2393]: E0130 13:12:52.362638 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:52.475873 systemd[1]: Removed slice kubepods-burstable-podfd552757_6fc0_4c32_a9b4_1a6f1bd0dbfe.slice - libcontainer container kubepods-burstable-podfd552757_6fc0_4c32_a9b4_1a6f1bd0dbfe.slice. Jan 30 13:12:52.476202 systemd[1]: kubepods-burstable-podfd552757_6fc0_4c32_a9b4_1a6f1bd0dbfe.slice: Consumed 14.264s CPU time. Jan 30 13:12:52.480075 systemd[1]: var-lib-kubelet-pods-fd552757\x2d6fc0\x2d4c32\x2da9b4\x2d1a6f1bd0dbfe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d577rs.mount: Deactivated successfully. Jan 30 13:12:52.480331 systemd[1]: var-lib-kubelet-pods-fd552757\x2d6fc0\x2d4c32\x2da9b4\x2d1a6f1bd0dbfe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 13:12:52.480489 systemd[1]: var-lib-kubelet-pods-fd552757\x2d6fc0\x2d4c32\x2da9b4\x2d1a6f1bd0dbfe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 30 13:12:52.805987 kubelet[2393]: I0130 13:12:52.804980 2393 scope.go:117] "RemoveContainer" containerID="c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913" Jan 30 13:12:52.808988 containerd[1927]: time="2025-01-30T13:12:52.808848332Z" level=info msg="RemoveContainer for \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\"" Jan 30 13:12:52.814434 containerd[1927]: time="2025-01-30T13:12:52.814271300Z" level=info msg="RemoveContainer for \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\" returns successfully" Jan 30 13:12:52.815091 kubelet[2393]: I0130 13:12:52.814828 2393 scope.go:117] "RemoveContainer" containerID="64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11" Jan 30 13:12:52.817423 containerd[1927]: time="2025-01-30T13:12:52.816930476Z" level=info msg="RemoveContainer for \"64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11\"" Jan 30 13:12:52.820597 containerd[1927]: time="2025-01-30T13:12:52.820468676Z" level=info msg="RemoveContainer for \"64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11\" returns successfully" Jan 30 13:12:52.821011 kubelet[2393]: I0130 13:12:52.820812 2393 scope.go:117] "RemoveContainer" containerID="593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b" Jan 30 13:12:52.823639 containerd[1927]: time="2025-01-30T13:12:52.823143080Z" level=info msg="RemoveContainer for \"593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b\"" Jan 30 13:12:52.826699 containerd[1927]: time="2025-01-30T13:12:52.826578260Z" level=info msg="RemoveContainer for \"593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b\" returns successfully" Jan 30 13:12:52.827414 kubelet[2393]: I0130 13:12:52.826897 2393 scope.go:117] "RemoveContainer" containerID="22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa" Jan 30 13:12:52.829584 containerd[1927]: time="2025-01-30T13:12:52.829090328Z" level=info msg="RemoveContainer for 
\"22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa\"" Jan 30 13:12:52.833464 containerd[1927]: time="2025-01-30T13:12:52.833388476Z" level=info msg="RemoveContainer for \"22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa\" returns successfully" Jan 30 13:12:52.833961 kubelet[2393]: I0130 13:12:52.833773 2393 scope.go:117] "RemoveContainer" containerID="cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9" Jan 30 13:12:52.835743 containerd[1927]: time="2025-01-30T13:12:52.835660280Z" level=info msg="RemoveContainer for \"cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9\"" Jan 30 13:12:52.839188 containerd[1927]: time="2025-01-30T13:12:52.839097680Z" level=info msg="RemoveContainer for \"cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9\" returns successfully" Jan 30 13:12:52.839662 kubelet[2393]: I0130 13:12:52.839598 2393 scope.go:117] "RemoveContainer" containerID="c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913" Jan 30 13:12:52.840529 containerd[1927]: time="2025-01-30T13:12:52.840331772Z" level=error msg="ContainerStatus for \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\": not found" Jan 30 13:12:52.840670 kubelet[2393]: E0130 13:12:52.840589 2393 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\": not found" containerID="c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913" Jan 30 13:12:52.843199 kubelet[2393]: I0130 13:12:52.840636 2393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913"} 
err="failed to get container status \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4e93382a7f6b94953b097e7e6e7b0b9e48195e6581379d4a70da6e8f13ee913\": not found" Jan 30 13:12:52.843199 kubelet[2393]: I0130 13:12:52.841605 2393 scope.go:117] "RemoveContainer" containerID="64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11" Jan 30 13:12:52.843743 containerd[1927]: time="2025-01-30T13:12:52.843682280Z" level=error msg="ContainerStatus for \"64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11\": not found" Jan 30 13:12:52.845421 kubelet[2393]: E0130 13:12:52.845336 2393 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11\": not found" containerID="64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11" Jan 30 13:12:52.845577 kubelet[2393]: I0130 13:12:52.845437 2393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11"} err="failed to get container status \"64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11\": rpc error: code = NotFound desc = an error occurred when try to find container \"64fdcbea4aa0dbfbacdd60c850feaa01a705190b8cb81a8b8fffaf2fdc64fb11\": not found" Jan 30 13:12:52.845577 kubelet[2393]: I0130 13:12:52.845503 2393 scope.go:117] "RemoveContainer" containerID="593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b" Jan 30 13:12:52.846933 containerd[1927]: time="2025-01-30T13:12:52.846423488Z" level=error msg="ContainerStatus for 
\"593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b\": not found" Jan 30 13:12:52.848229 kubelet[2393]: E0130 13:12:52.847548 2393 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b\": not found" containerID="593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b" Jan 30 13:12:52.848229 kubelet[2393]: I0130 13:12:52.847626 2393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b"} err="failed to get container status \"593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b\": rpc error: code = NotFound desc = an error occurred when try to find container \"593007c562acf459e1d4e73e58c6e551f93c31b51764f63c5f571ff08e66cd9b\": not found" Jan 30 13:12:52.848229 kubelet[2393]: I0130 13:12:52.847666 2393 scope.go:117] "RemoveContainer" containerID="22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa" Jan 30 13:12:52.848509 containerd[1927]: time="2025-01-30T13:12:52.848049404Z" level=error msg="ContainerStatus for \"22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa\": not found" Jan 30 13:12:52.850199 kubelet[2393]: E0130 13:12:52.848757 2393 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa\": not found" 
containerID="22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa" Jan 30 13:12:52.850199 kubelet[2393]: I0130 13:12:52.848831 2393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa"} err="failed to get container status \"22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa\": rpc error: code = NotFound desc = an error occurred when try to find container \"22afcba6e92fc53e07341ef61000f16c3207dfa2495649eeb4b5fabe5189eefa\": not found" Jan 30 13:12:52.850199 kubelet[2393]: I0130 13:12:52.848869 2393 scope.go:117] "RemoveContainer" containerID="cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9" Jan 30 13:12:52.850836 containerd[1927]: time="2025-01-30T13:12:52.850770824Z" level=error msg="ContainerStatus for \"cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9\": not found" Jan 30 13:12:52.851336 kubelet[2393]: E0130 13:12:52.851284 2393 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9\": not found" containerID="cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9" Jan 30 13:12:52.851452 kubelet[2393]: I0130 13:12:52.851358 2393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9"} err="failed to get container status \"cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbf0deaa26de00c56560e191f94422820d615a4f23883eda2535cd2297dc5ff9\": not found" Jan 30 
13:12:53.363431 kubelet[2393]: E0130 13:12:53.363354 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:54.285373 ntpd[1913]: Deleting interface #11 lxc_health, fe80::4c0b:9aff:fe87:2848%7#123, interface stats: received=0, sent=0, dropped=0, active_time=43 secs Jan 30 13:12:54.285928 ntpd[1913]: 30 Jan 13:12:54 ntpd[1913]: Deleting interface #11 lxc_health, fe80::4c0b:9aff:fe87:2848%7#123, interface stats: received=0, sent=0, dropped=0, active_time=43 secs Jan 30 13:12:54.364577 kubelet[2393]: E0130 13:12:54.364520 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:54.469151 kubelet[2393]: I0130 13:12:54.469089 2393 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" path="/var/lib/kubelet/pods/fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe/volumes" Jan 30 13:12:55.364768 kubelet[2393]: E0130 13:12:55.364677 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:56.039776 kubelet[2393]: I0130 13:12:56.039709 2393 topology_manager.go:215] "Topology Admit Handler" podUID="4c3f3747-3eb6-4454-bb2d-63234e226950" podNamespace="kube-system" podName="cilium-operator-599987898-shdrl" Jan 30 13:12:56.039971 kubelet[2393]: E0130 13:12:56.039787 2393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" containerName="mount-cgroup" Jan 30 13:12:56.039971 kubelet[2393]: E0130 13:12:56.039808 2393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" containerName="mount-bpf-fs" Jan 30 13:12:56.039971 kubelet[2393]: E0130 13:12:56.039823 2393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" containerName="apply-sysctl-overwrites" Jan 30 
13:12:56.039971 kubelet[2393]: E0130 13:12:56.039837 2393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" containerName="clean-cilium-state" Jan 30 13:12:56.039971 kubelet[2393]: E0130 13:12:56.039852 2393 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" containerName="cilium-agent" Jan 30 13:12:56.039971 kubelet[2393]: I0130 13:12:56.039889 2393 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd552757-6fc0-4c32-a9b4-1a6f1bd0dbfe" containerName="cilium-agent" Jan 30 13:12:56.044136 kubelet[2393]: I0130 13:12:56.042652 2393 topology_manager.go:215] "Topology Admit Handler" podUID="3c50af25-6061-4419-a68c-ea92df63915f" podNamespace="kube-system" podName="cilium-8lxbh" Jan 30 13:12:56.052570 systemd[1]: Created slice kubepods-besteffort-pod4c3f3747_3eb6_4454_bb2d_63234e226950.slice - libcontainer container kubepods-besteffort-pod4c3f3747_3eb6_4454_bb2d_63234e226950.slice. Jan 30 13:12:56.065019 systemd[1]: Created slice kubepods-burstable-pod3c50af25_6061_4419_a68c_ea92df63915f.slice - libcontainer container kubepods-burstable-pod3c50af25_6061_4419_a68c_ea92df63915f.slice. 
Jan 30 13:12:56.184232 kubelet[2393]: I0130 13:12:56.183670 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c50af25-6061-4419-a68c-ea92df63915f-host-proc-sys-net\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.184232 kubelet[2393]: I0130 13:12:56.183732 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c50af25-6061-4419-a68c-ea92df63915f-cilium-cgroup\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.184232 kubelet[2393]: I0130 13:12:56.183770 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c50af25-6061-4419-a68c-ea92df63915f-xtables-lock\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.184232 kubelet[2393]: I0130 13:12:56.183804 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c50af25-6061-4419-a68c-ea92df63915f-hubble-tls\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.184232 kubelet[2393]: I0130 13:12:56.183841 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c50af25-6061-4419-a68c-ea92df63915f-bpf-maps\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.184620 kubelet[2393]: I0130 13:12:56.183876 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-9sfk9\" (UniqueName: \"kubernetes.io/projected/4c3f3747-3eb6-4454-bb2d-63234e226950-kube-api-access-9sfk9\") pod \"cilium-operator-599987898-shdrl\" (UID: \"4c3f3747-3eb6-4454-bb2d-63234e226950\") " pod="kube-system/cilium-operator-599987898-shdrl" Jan 30 13:12:56.184620 kubelet[2393]: I0130 13:12:56.183913 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c50af25-6061-4419-a68c-ea92df63915f-hostproc\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.184620 kubelet[2393]: I0130 13:12:56.183964 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c50af25-6061-4419-a68c-ea92df63915f-cni-path\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.184620 kubelet[2393]: I0130 13:12:56.184000 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c50af25-6061-4419-a68c-ea92df63915f-lib-modules\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.184620 kubelet[2393]: I0130 13:12:56.184036 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c50af25-6061-4419-a68c-ea92df63915f-cilium-ipsec-secrets\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.184911 kubelet[2393]: I0130 13:12:56.184070 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/3c50af25-6061-4419-a68c-ea92df63915f-host-proc-sys-kernel\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.184911 kubelet[2393]: I0130 13:12:56.184122 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c3f3747-3eb6-4454-bb2d-63234e226950-cilium-config-path\") pod \"cilium-operator-599987898-shdrl\" (UID: \"4c3f3747-3eb6-4454-bb2d-63234e226950\") " pod="kube-system/cilium-operator-599987898-shdrl" Jan 30 13:12:56.184911 kubelet[2393]: I0130 13:12:56.184196 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c50af25-6061-4419-a68c-ea92df63915f-etc-cni-netd\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.184911 kubelet[2393]: I0130 13:12:56.184364 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c50af25-6061-4419-a68c-ea92df63915f-clustermesh-secrets\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.184911 kubelet[2393]: I0130 13:12:56.184435 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c50af25-6061-4419-a68c-ea92df63915f-cilium-config-path\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.185238 kubelet[2393]: I0130 13:12:56.184595 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rmpd\" (UniqueName: 
\"kubernetes.io/projected/3c50af25-6061-4419-a68c-ea92df63915f-kube-api-access-9rmpd\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.185238 kubelet[2393]: I0130 13:12:56.184641 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c50af25-6061-4419-a68c-ea92df63915f-cilium-run\") pod \"cilium-8lxbh\" (UID: \"3c50af25-6061-4419-a68c-ea92df63915f\") " pod="kube-system/cilium-8lxbh" Jan 30 13:12:56.361219 containerd[1927]: time="2025-01-30T13:12:56.360533794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-shdrl,Uid:4c3f3747-3eb6-4454-bb2d-63234e226950,Namespace:kube-system,Attempt:0,}" Jan 30 13:12:56.365572 kubelet[2393]: E0130 13:12:56.365245 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:56.378910 containerd[1927]: time="2025-01-30T13:12:56.378332278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8lxbh,Uid:3c50af25-6061-4419-a68c-ea92df63915f,Namespace:kube-system,Attempt:0,}" Jan 30 13:12:56.395778 containerd[1927]: time="2025-01-30T13:12:56.395631586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:12:56.397179 containerd[1927]: time="2025-01-30T13:12:56.396779254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:12:56.397179 containerd[1927]: time="2025-01-30T13:12:56.396850294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:12:56.397179 containerd[1927]: time="2025-01-30T13:12:56.397027198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:12:56.427834 containerd[1927]: time="2025-01-30T13:12:56.425469202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:12:56.427834 containerd[1927]: time="2025-01-30T13:12:56.425553118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:12:56.427834 containerd[1927]: time="2025-01-30T13:12:56.425577910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:12:56.427834 containerd[1927]: time="2025-01-30T13:12:56.425709586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:12:56.437492 systemd[1]: Started cri-containerd-0736e4dd9a1f081da10b004e23be45209795f6c0941ebc086186cd53204ce519.scope - libcontainer container 0736e4dd9a1f081da10b004e23be45209795f6c0941ebc086186cd53204ce519. Jan 30 13:12:56.473534 systemd[1]: Started cri-containerd-2310c9c207dfedaf177f3300c35d82cf012c972c69af0b1a08395f87d6582610.scope - libcontainer container 2310c9c207dfedaf177f3300c35d82cf012c972c69af0b1a08395f87d6582610. 
Jan 30 13:12:56.487958 kubelet[2393]: E0130 13:12:56.487812 2393 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:12:56.532490 containerd[1927]: time="2025-01-30T13:12:56.532263754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8lxbh,Uid:3c50af25-6061-4419-a68c-ea92df63915f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2310c9c207dfedaf177f3300c35d82cf012c972c69af0b1a08395f87d6582610\"" Jan 30 13:12:56.538930 containerd[1927]: time="2025-01-30T13:12:56.538846858Z" level=info msg="CreateContainer within sandbox \"2310c9c207dfedaf177f3300c35d82cf012c972c69af0b1a08395f87d6582610\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:12:56.542425 containerd[1927]: time="2025-01-30T13:12:56.542375158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-shdrl,Uid:4c3f3747-3eb6-4454-bb2d-63234e226950,Namespace:kube-system,Attempt:0,} returns sandbox id \"0736e4dd9a1f081da10b004e23be45209795f6c0941ebc086186cd53204ce519\"" Jan 30 13:12:56.546309 containerd[1927]: time="2025-01-30T13:12:56.546153790Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:12:56.568307 containerd[1927]: time="2025-01-30T13:12:56.568224467Z" level=info msg="CreateContainer within sandbox \"2310c9c207dfedaf177f3300c35d82cf012c972c69af0b1a08395f87d6582610\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"672e030b96dba63f5209744d85e9ce1426f5d5e422b6a9f373f37bbd48acba54\"" Jan 30 13:12:56.569483 containerd[1927]: time="2025-01-30T13:12:56.569351195Z" level=info msg="StartContainer for \"672e030b96dba63f5209744d85e9ce1426f5d5e422b6a9f373f37bbd48acba54\"" Jan 30 13:12:56.610500 systemd[1]: Started 
cri-containerd-672e030b96dba63f5209744d85e9ce1426f5d5e422b6a9f373f37bbd48acba54.scope - libcontainer container 672e030b96dba63f5209744d85e9ce1426f5d5e422b6a9f373f37bbd48acba54. Jan 30 13:12:56.660444 containerd[1927]: time="2025-01-30T13:12:56.660259079Z" level=info msg="StartContainer for \"672e030b96dba63f5209744d85e9ce1426f5d5e422b6a9f373f37bbd48acba54\" returns successfully" Jan 30 13:12:56.674531 systemd[1]: cri-containerd-672e030b96dba63f5209744d85e9ce1426f5d5e422b6a9f373f37bbd48acba54.scope: Deactivated successfully. Jan 30 13:12:56.729660 containerd[1927]: time="2025-01-30T13:12:56.729317819Z" level=info msg="shim disconnected" id=672e030b96dba63f5209744d85e9ce1426f5d5e422b6a9f373f37bbd48acba54 namespace=k8s.io Jan 30 13:12:56.729660 containerd[1927]: time="2025-01-30T13:12:56.729393923Z" level=warning msg="cleaning up after shim disconnected" id=672e030b96dba63f5209744d85e9ce1426f5d5e422b6a9f373f37bbd48acba54 namespace=k8s.io Jan 30 13:12:56.729660 containerd[1927]: time="2025-01-30T13:12:56.729415775Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:12:56.822133 containerd[1927]: time="2025-01-30T13:12:56.821902752Z" level=info msg="CreateContainer within sandbox \"2310c9c207dfedaf177f3300c35d82cf012c972c69af0b1a08395f87d6582610\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:12:56.843008 containerd[1927]: time="2025-01-30T13:12:56.842930172Z" level=info msg="CreateContainer within sandbox \"2310c9c207dfedaf177f3300c35d82cf012c972c69af0b1a08395f87d6582610\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9ee0de97936fe43a45c599ca9514b95fbae75c720d58a7a9c397b37ea4c770e9\"" Jan 30 13:12:56.843957 containerd[1927]: time="2025-01-30T13:12:56.843884616Z" level=info msg="StartContainer for \"9ee0de97936fe43a45c599ca9514b95fbae75c720d58a7a9c397b37ea4c770e9\"" Jan 30 13:12:56.887467 systemd[1]: Started 
cri-containerd-9ee0de97936fe43a45c599ca9514b95fbae75c720d58a7a9c397b37ea4c770e9.scope - libcontainer container 9ee0de97936fe43a45c599ca9514b95fbae75c720d58a7a9c397b37ea4c770e9. Jan 30 13:12:56.938666 containerd[1927]: time="2025-01-30T13:12:56.938462880Z" level=info msg="StartContainer for \"9ee0de97936fe43a45c599ca9514b95fbae75c720d58a7a9c397b37ea4c770e9\" returns successfully" Jan 30 13:12:56.950501 systemd[1]: cri-containerd-9ee0de97936fe43a45c599ca9514b95fbae75c720d58a7a9c397b37ea4c770e9.scope: Deactivated successfully. Jan 30 13:12:56.994068 containerd[1927]: time="2025-01-30T13:12:56.993936937Z" level=info msg="shim disconnected" id=9ee0de97936fe43a45c599ca9514b95fbae75c720d58a7a9c397b37ea4c770e9 namespace=k8s.io Jan 30 13:12:56.994068 containerd[1927]: time="2025-01-30T13:12:56.994056589Z" level=warning msg="cleaning up after shim disconnected" id=9ee0de97936fe43a45c599ca9514b95fbae75c720d58a7a9c397b37ea4c770e9 namespace=k8s.io Jan 30 13:12:56.994968 containerd[1927]: time="2025-01-30T13:12:56.994104793Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:12:57.366094 kubelet[2393]: E0130 13:12:57.366033 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:57.828489 containerd[1927]: time="2025-01-30T13:12:57.827611405Z" level=info msg="CreateContainer within sandbox \"2310c9c207dfedaf177f3300c35d82cf012c972c69af0b1a08395f87d6582610\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:12:57.861993 containerd[1927]: time="2025-01-30T13:12:57.861856993Z" level=info msg="CreateContainer within sandbox \"2310c9c207dfedaf177f3300c35d82cf012c972c69af0b1a08395f87d6582610\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"31fb6b51857b5ea3b9006e44f4f733f74c4fe6c3482eaed335bccf27e869d5e0\"" Jan 30 13:12:57.863617 containerd[1927]: time="2025-01-30T13:12:57.862729753Z" level=info msg="StartContainer for 
\"31fb6b51857b5ea3b9006e44f4f733f74c4fe6c3482eaed335bccf27e869d5e0\"" Jan 30 13:12:57.918116 systemd[1]: Started cri-containerd-31fb6b51857b5ea3b9006e44f4f733f74c4fe6c3482eaed335bccf27e869d5e0.scope - libcontainer container 31fb6b51857b5ea3b9006e44f4f733f74c4fe6c3482eaed335bccf27e869d5e0. Jan 30 13:12:57.984188 containerd[1927]: time="2025-01-30T13:12:57.982799198Z" level=info msg="StartContainer for \"31fb6b51857b5ea3b9006e44f4f733f74c4fe6c3482eaed335bccf27e869d5e0\" returns successfully" Jan 30 13:12:57.987383 systemd[1]: cri-containerd-31fb6b51857b5ea3b9006e44f4f733f74c4fe6c3482eaed335bccf27e869d5e0.scope: Deactivated successfully. Jan 30 13:12:58.038998 containerd[1927]: time="2025-01-30T13:12:58.038906050Z" level=info msg="shim disconnected" id=31fb6b51857b5ea3b9006e44f4f733f74c4fe6c3482eaed335bccf27e869d5e0 namespace=k8s.io Jan 30 13:12:58.038998 containerd[1927]: time="2025-01-30T13:12:58.038985982Z" level=warning msg="cleaning up after shim disconnected" id=31fb6b51857b5ea3b9006e44f4f733f74c4fe6c3482eaed335bccf27e869d5e0 namespace=k8s.io Jan 30 13:12:58.039345 containerd[1927]: time="2025-01-30T13:12:58.039011998Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:12:58.294657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31fb6b51857b5ea3b9006e44f4f733f74c4fe6c3482eaed335bccf27e869d5e0-rootfs.mount: Deactivated successfully. 
Jan 30 13:12:58.367211 kubelet[2393]: E0130 13:12:58.366775 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:58.386874 kubelet[2393]: I0130 13:12:58.386807 2393 setters.go:580] "Node became not ready" node="172.31.25.221" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:12:58Z","lastTransitionTime":"2025-01-30T13:12:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 13:12:58.663807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1399234082.mount: Deactivated successfully. Jan 30 13:12:58.835108 containerd[1927]: time="2025-01-30T13:12:58.834747614Z" level=info msg="CreateContainer within sandbox \"2310c9c207dfedaf177f3300c35d82cf012c972c69af0b1a08395f87d6582610\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:12:58.867554 containerd[1927]: time="2025-01-30T13:12:58.867475058Z" level=info msg="CreateContainer within sandbox \"2310c9c207dfedaf177f3300c35d82cf012c972c69af0b1a08395f87d6582610\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b3e93e39fea0540380870ade303a56750163be695331d3e4b7670cefdd27d94e\"" Jan 30 13:12:58.868530 containerd[1927]: time="2025-01-30T13:12:58.868342634Z" level=info msg="StartContainer for \"b3e93e39fea0540380870ade303a56750163be695331d3e4b7670cefdd27d94e\"" Jan 30 13:12:58.914458 systemd[1]: Started cri-containerd-b3e93e39fea0540380870ade303a56750163be695331d3e4b7670cefdd27d94e.scope - libcontainer container b3e93e39fea0540380870ade303a56750163be695331d3e4b7670cefdd27d94e. Jan 30 13:12:58.959433 systemd[1]: cri-containerd-b3e93e39fea0540380870ade303a56750163be695331d3e4b7670cefdd27d94e.scope: Deactivated successfully. 
Jan 30 13:12:58.963651 containerd[1927]: time="2025-01-30T13:12:58.963560222Z" level=info msg="StartContainer for \"b3e93e39fea0540380870ade303a56750163be695331d3e4b7670cefdd27d94e\" returns successfully" Jan 30 13:12:59.005211 containerd[1927]: time="2025-01-30T13:12:59.004135991Z" level=info msg="shim disconnected" id=b3e93e39fea0540380870ade303a56750163be695331d3e4b7670cefdd27d94e namespace=k8s.io Jan 30 13:12:59.005211 containerd[1927]: time="2025-01-30T13:12:59.004279343Z" level=warning msg="cleaning up after shim disconnected" id=b3e93e39fea0540380870ade303a56750163be695331d3e4b7670cefdd27d94e namespace=k8s.io Jan 30 13:12:59.005211 containerd[1927]: time="2025-01-30T13:12:59.004301411Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:12:59.367008 kubelet[2393]: E0130 13:12:59.366952 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:12:59.618759 containerd[1927]: time="2025-01-30T13:12:59.618593606Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:12:59.621198 containerd[1927]: time="2025-01-30T13:12:59.621055478Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 13:12:59.623873 containerd[1927]: time="2025-01-30T13:12:59.623811650Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:12:59.626621 containerd[1927]: time="2025-01-30T13:12:59.626118566Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.07988116s" Jan 30 13:12:59.626621 containerd[1927]: time="2025-01-30T13:12:59.626194106Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 13:12:59.630775 containerd[1927]: time="2025-01-30T13:12:59.630708194Z" level=info msg="CreateContainer within sandbox \"0736e4dd9a1f081da10b004e23be45209795f6c0941ebc086186cd53204ce519\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:12:59.658129 containerd[1927]: time="2025-01-30T13:12:59.658052954Z" level=info msg="CreateContainer within sandbox \"0736e4dd9a1f081da10b004e23be45209795f6c0941ebc086186cd53204ce519\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"94c324f29c1ceaedd3948f17a1c865c5c469a016932f1eb0dc896d6c0d1c344d\"" Jan 30 13:12:59.659068 containerd[1927]: time="2025-01-30T13:12:59.658995266Z" level=info msg="StartContainer for \"94c324f29c1ceaedd3948f17a1c865c5c469a016932f1eb0dc896d6c0d1c344d\"" Jan 30 13:12:59.711489 systemd[1]: Started cri-containerd-94c324f29c1ceaedd3948f17a1c865c5c469a016932f1eb0dc896d6c0d1c344d.scope - libcontainer container 94c324f29c1ceaedd3948f17a1c865c5c469a016932f1eb0dc896d6c0d1c344d. 
Jan 30 13:12:59.761593 containerd[1927]: time="2025-01-30T13:12:59.761528702Z" level=info msg="StartContainer for \"94c324f29c1ceaedd3948f17a1c865c5c469a016932f1eb0dc896d6c0d1c344d\" returns successfully" Jan 30 13:12:59.849214 containerd[1927]: time="2025-01-30T13:12:59.847528071Z" level=info msg="CreateContainer within sandbox \"2310c9c207dfedaf177f3300c35d82cf012c972c69af0b1a08395f87d6582610\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:12:59.880284 containerd[1927]: time="2025-01-30T13:12:59.879669639Z" level=info msg="CreateContainer within sandbox \"2310c9c207dfedaf177f3300c35d82cf012c972c69af0b1a08395f87d6582610\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b2646f079beba135f34e4fbaa4bedecacf5c709ef6cd73e0d6b8986b1992c40e\"" Jan 30 13:12:59.882910 containerd[1927]: time="2025-01-30T13:12:59.882396123Z" level=info msg="StartContainer for \"b2646f079beba135f34e4fbaa4bedecacf5c709ef6cd73e0d6b8986b1992c40e\"" Jan 30 13:12:59.938153 systemd[1]: Started cri-containerd-b2646f079beba135f34e4fbaa4bedecacf5c709ef6cd73e0d6b8986b1992c40e.scope - libcontainer container b2646f079beba135f34e4fbaa4bedecacf5c709ef6cd73e0d6b8986b1992c40e. 
Jan 30 13:13:00.020742 containerd[1927]: time="2025-01-30T13:13:00.020473776Z" level=info msg="StartContainer for \"b2646f079beba135f34e4fbaa4bedecacf5c709ef6cd73e0d6b8986b1992c40e\" returns successfully"
Jan 30 13:13:00.368238 kubelet[2393]: E0130 13:13:00.368130 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:00.806243 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 30 13:13:00.892058 kubelet[2393]: I0130 13:13:00.891580 2393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-shdrl" podStartSLOduration=2.80864076 podStartE2EDuration="5.891557176s" podCreationTimestamp="2025-01-30 13:12:55 +0000 UTC" firstStartedPulling="2025-01-30 13:12:56.54492817 +0000 UTC m=+71.370876123" lastFinishedPulling="2025-01-30 13:12:59.627844598 +0000 UTC m=+74.453792539" observedRunningTime="2025-01-30 13:12:59.937579767 +0000 UTC m=+74.763527744" watchObservedRunningTime="2025-01-30 13:13:00.891557176 +0000 UTC m=+75.717505129"
Jan 30 13:13:01.191536 systemd[1]: run-containerd-runc-k8s.io-b2646f079beba135f34e4fbaa4bedecacf5c709ef6cd73e0d6b8986b1992c40e-runc.6CN7Is.mount: Deactivated successfully.
Jan 30 13:13:01.368640 kubelet[2393]: E0130 13:13:01.368575 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:02.369476 kubelet[2393]: E0130 13:13:02.369404 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:03.370429 kubelet[2393]: E0130 13:13:03.370354 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:03.531063 kubelet[2393]: E0130 13:13:03.531008 2393 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:36820->127.0.0.1:40321: write tcp 127.0.0.1:36820->127.0.0.1:40321: write: broken pipe
Jan 30 13:13:04.371067 kubelet[2393]: E0130 13:13:04.371003 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:04.831887 (udev-worker)[5160]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:13:04.839446 systemd-networkd[1852]: lxc_health: Link UP
Jan 30 13:13:04.843757 (udev-worker)[5161]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 13:13:04.864736 systemd-networkd[1852]: lxc_health: Gained carrier
Jan 30 13:13:05.372122 kubelet[2393]: E0130 13:13:05.372049 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:06.299422 systemd-networkd[1852]: lxc_health: Gained IPv6LL
Jan 30 13:13:06.316567 kubelet[2393]: E0130 13:13:06.316442 2393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:06.372851 kubelet[2393]: E0130 13:13:06.372792 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:06.420734 kubelet[2393]: I0130 13:13:06.419755 2393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8lxbh" podStartSLOduration=11.419734652 podStartE2EDuration="11.419734652s" podCreationTimestamp="2025-01-30 13:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:13:00.893949184 +0000 UTC m=+75.719897173" watchObservedRunningTime="2025-01-30 13:13:06.419734652 +0000 UTC m=+81.245682593"
Jan 30 13:13:07.373886 kubelet[2393]: E0130 13:13:07.373812 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:08.374309 kubelet[2393]: E0130 13:13:08.374218 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:09.285431 ntpd[1913]: Listen normally on 15 lxc_health [fe80::682e:8fff:fef1:5812%15]:123
Jan 30 13:13:09.285975 ntpd[1913]: 30 Jan 13:13:09 ntpd[1913]: Listen normally on 15 lxc_health [fe80::682e:8fff:fef1:5812%15]:123
Jan 30 13:13:09.375377 kubelet[2393]: E0130 13:13:09.375269 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:10.342307 kubelet[2393]: E0130 13:13:10.342250 2393 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60262->127.0.0.1:40321: write tcp 127.0.0.1:60262->127.0.0.1:40321: write: broken pipe
Jan 30 13:13:10.376042 kubelet[2393]: E0130 13:13:10.375973 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:11.376528 kubelet[2393]: E0130 13:13:11.376433 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:12.377148 kubelet[2393]: E0130 13:13:12.377067 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:13.377350 kubelet[2393]: E0130 13:13:13.377272 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:14.377862 kubelet[2393]: E0130 13:13:14.377785 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:15.378246 kubelet[2393]: E0130 13:13:15.378151 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:16.378793 kubelet[2393]: E0130 13:13:16.378745 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:17.379298 kubelet[2393]: E0130 13:13:17.379239 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:18.380033 kubelet[2393]: E0130 13:13:18.379967 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:19.380791 kubelet[2393]: E0130 13:13:19.380725 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:20.381849 kubelet[2393]: E0130 13:13:20.381783 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:21.382077 kubelet[2393]: E0130 13:13:21.381984 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:22.382521 kubelet[2393]: E0130 13:13:22.382457 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:23.382887 kubelet[2393]: E0130 13:13:23.382824 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:24.383549 kubelet[2393]: E0130 13:13:24.383485 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:25.383943 kubelet[2393]: E0130 13:13:25.383877 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:26.316621 kubelet[2393]: E0130 13:13:26.316560 2393 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:26.384320 kubelet[2393]: E0130 13:13:26.384258 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:27.385284 kubelet[2393]: E0130 13:13:27.385222 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:28.385777 kubelet[2393]: E0130 13:13:28.385712 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:29.386066 kubelet[2393]: E0130 13:13:29.386004 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:30.386689 kubelet[2393]: E0130 13:13:30.386628 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:31.387894 kubelet[2393]: E0130 13:13:31.387816 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:32.388196 kubelet[2393]: E0130 13:13:32.388127 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:33.388420 kubelet[2393]: E0130 13:13:33.388330 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:13:34.233895 systemd[1]: cri-containerd-94c324f29c1ceaedd3948f17a1c865c5c469a016932f1eb0dc896d6c0d1c344d.scope: Deactivated successfully.
Jan 30 13:13:34.273002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94c324f29c1ceaedd3948f17a1c865c5c469a016932f1eb0dc896d6c0d1c344d-rootfs.mount: Deactivated successfully.
Jan 30 13:13:34.283848 containerd[1927]: time="2025-01-30T13:13:34.283760734Z" level=info msg="shim disconnected" id=94c324f29c1ceaedd3948f17a1c865c5c469a016932f1eb0dc896d6c0d1c344d namespace=k8s.io
Jan 30 13:13:34.283848 containerd[1927]: time="2025-01-30T13:13:34.283839442Z" level=warning msg="cleaning up after shim disconnected" id=94c324f29c1ceaedd3948f17a1c865c5c469a016932f1eb0dc896d6c0d1c344d namespace=k8s.io
Jan 30 13:13:34.284987 containerd[1927]: time="2025-01-30T13:13:34.283861210Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:13:34.303015 containerd[1927]: time="2025-01-30T13:13:34.302920306Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:13:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 13:13:34.389487 kubelet[2393]: E0130 13:13:34.389418 2393 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"