Jan 29 10:57:01.180957 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 29 10:57:01.181002 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:37:00 -00 2025 Jan 29 10:57:01.181027 kernel: KASLR disabled due to lack of seed Jan 29 10:57:01.181043 kernel: efi: EFI v2.7 by EDK II Jan 29 10:57:01.181059 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98 Jan 29 10:57:01.181074 kernel: secureboot: Secure boot disabled Jan 29 10:57:01.183145 kernel: ACPI: Early table checksum verification disabled Jan 29 10:57:01.183183 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 29 10:57:01.183200 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 29 10:57:01.183216 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 29 10:57:01.183241 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jan 29 10:57:01.183257 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 29 10:57:01.183272 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 29 10:57:01.183288 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 29 10:57:01.183306 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 29 10:57:01.183327 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 29 10:57:01.183344 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 29 10:57:01.183361 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 29 10:57:01.183377 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 29 10:57:01.183393 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 29 10:57:01.183410 kernel: printk: bootconsole [uart0] enabled Jan 29 10:57:01.183426 kernel: NUMA: Failed to initialise from firmware Jan 29 10:57:01.183442 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 29 10:57:01.183459 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jan 29 10:57:01.183475 kernel: Zone ranges: Jan 29 10:57:01.183492 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 29 10:57:01.183512 kernel: DMA32 empty Jan 29 10:57:01.183528 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 29 10:57:01.183545 kernel: Movable zone start for each node Jan 29 10:57:01.183561 kernel: Early memory node ranges Jan 29 10:57:01.183577 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 29 10:57:01.183593 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 29 10:57:01.183609 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 29 10:57:01.183625 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 29 10:57:01.183641 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 29 10:57:01.183657 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 29 10:57:01.183673 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 29 10:57:01.183689 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 29 10:57:01.183710 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Jan 29 10:57:01.183732 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 29 10:57:01.183756 kernel: psci: probing for conduit method from ACPI. Jan 29 10:57:01.183773 kernel: psci: PSCIv1.0 detected in firmware. Jan 29 10:57:01.183809 kernel: psci: Using standard PSCI v0.2 function IDs Jan 29 10:57:01.183832 kernel: psci: Trusted OS migration not required Jan 29 10:57:01.183849 kernel: psci: SMC Calling Convention v1.1 Jan 29 10:57:01.183866 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 29 10:57:01.183884 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 29 10:57:01.183903 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 29 10:57:01.183920 kernel: Detected PIPT I-cache on CPU0 Jan 29 10:57:01.183937 kernel: CPU features: detected: GIC system register CPU interface Jan 29 10:57:01.183954 kernel: CPU features: detected: Spectre-v2 Jan 29 10:57:01.183973 kernel: CPU features: detected: Spectre-v3a Jan 29 10:57:01.183991 kernel: CPU features: detected: Spectre-BHB Jan 29 10:57:01.184012 kernel: CPU features: detected: ARM erratum 1742098 Jan 29 10:57:01.184031 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 29 10:57:01.184053 kernel: alternatives: applying boot alternatives Jan 29 10:57:01.184072 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e Jan 29 10:57:01.184122 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 10:57:01.184147 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 10:57:01.184165 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 10:57:01.184182 kernel: Fallback order for Node 0: 0 Jan 29 10:57:01.184199 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jan 29 10:57:01.184215 kernel: Policy zone: Normal Jan 29 10:57:01.184232 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 10:57:01.184251 kernel: software IO TLB: area num 2. Jan 29 10:57:01.184277 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jan 29 10:57:01.184295 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved) Jan 29 10:57:01.184313 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 10:57:01.184330 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 10:57:01.184348 kernel: rcu: RCU event tracing is enabled. Jan 29 10:57:01.184365 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 10:57:01.184383 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 10:57:01.184400 kernel: Tracing variant of Tasks RCU enabled. Jan 29 10:57:01.184417 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 29 10:57:01.184434 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 10:57:01.184451 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 29 10:57:01.184473 kernel: GICv3: 96 SPIs implemented Jan 29 10:57:01.184490 kernel: GICv3: 0 Extended SPIs implemented Jan 29 10:57:01.184507 kernel: Root IRQ handler: gic_handle_irq Jan 29 10:57:01.184524 kernel: GICv3: GICv3 features: 16 PPIs Jan 29 10:57:01.184540 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 29 10:57:01.184557 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 29 10:57:01.184574 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jan 29 10:57:01.184591 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jan 29 10:57:01.184608 kernel: GICv3: using LPI property table @0x00000004000d0000 Jan 29 10:57:01.184625 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 29 10:57:01.184642 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jan 29 10:57:01.184659 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 10:57:01.184690 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 29 10:57:01.184713 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 29 10:57:01.184731 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 29 10:57:01.184748 kernel: Console: colour dummy device 80x25 Jan 29 10:57:01.184766 kernel: printk: console [tty1] enabled Jan 29 10:57:01.184784 kernel: ACPI: Core revision 20230628 Jan 29 10:57:01.184801 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 29 10:57:01.184819 kernel: pid_max: default: 32768 minimum: 301 Jan 29 10:57:01.184836 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 10:57:01.184853 kernel: landlock: Up and running. Jan 29 10:57:01.184876 kernel: SELinux: Initializing. Jan 29 10:57:01.184894 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 10:57:01.184912 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 10:57:01.184929 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 10:57:01.184947 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 10:57:01.184965 kernel: rcu: Hierarchical SRCU implementation. Jan 29 10:57:01.184982 kernel: rcu: Max phase no-delay instances is 400. Jan 29 10:57:01.185000 kernel: Platform MSI: ITS@0x10080000 domain created Jan 29 10:57:01.185021 kernel: PCI/MSI: ITS@0x10080000 domain created Jan 29 10:57:01.185038 kernel: Remapping and enabling EFI services. Jan 29 10:57:01.185056 kernel: smp: Bringing up secondary CPUs ... Jan 29 10:57:01.185073 kernel: Detected PIPT I-cache on CPU1 Jan 29 10:57:01.187139 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 29 10:57:01.187195 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jan 29 10:57:01.187214 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 29 10:57:01.187232 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 10:57:01.187250 kernel: SMP: Total of 2 processors activated. 
Jan 29 10:57:01.187267 kernel: CPU features: detected: 32-bit EL0 Support Jan 29 10:57:01.187294 kernel: CPU features: detected: 32-bit EL1 Support Jan 29 10:57:01.187311 kernel: CPU features: detected: CRC32 instructions Jan 29 10:57:01.187340 kernel: CPU: All CPU(s) started at EL1 Jan 29 10:57:01.187362 kernel: alternatives: applying system-wide alternatives Jan 29 10:57:01.187380 kernel: devtmpfs: initialized Jan 29 10:57:01.187398 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 10:57:01.187416 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 10:57:01.187434 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 10:57:01.187453 kernel: SMBIOS 3.0.0 present. Jan 29 10:57:01.187475 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 29 10:57:01.187493 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 10:57:01.187511 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 29 10:57:01.187529 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 29 10:57:01.187548 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 29 10:57:01.187566 kernel: audit: initializing netlink subsys (disabled) Jan 29 10:57:01.187584 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1 Jan 29 10:57:01.187606 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 10:57:01.187625 kernel: cpuidle: using governor menu Jan 29 10:57:01.187643 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 29 10:57:01.187661 kernel: ASID allocator initialised with 65536 entries Jan 29 10:57:01.187679 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 10:57:01.187697 kernel: Serial: AMBA PL011 UART driver Jan 29 10:57:01.187716 kernel: Modules: 17440 pages in range for non-PLT usage Jan 29 10:57:01.187734 kernel: Modules: 508960 pages in range for PLT usage Jan 29 10:57:01.187752 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 10:57:01.187774 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 10:57:01.187792 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 29 10:57:01.187822 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 29 10:57:01.187846 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 10:57:01.187865 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 10:57:01.187884 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 29 10:57:01.187902 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 29 10:57:01.187921 kernel: ACPI: Added _OSI(Module Device) Jan 29 10:57:01.187939 kernel: ACPI: Added _OSI(Processor Device) Jan 29 10:57:01.187962 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 10:57:01.187981 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 10:57:01.187999 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 10:57:01.188017 kernel: ACPI: Interpreter enabled Jan 29 10:57:01.188035 kernel: ACPI: Using GIC for interrupt routing Jan 29 10:57:01.188053 kernel: ACPI: MCFG table detected, 1 entries Jan 29 10:57:01.188071 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jan 29 10:57:01.188394 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 10:57:01.188641 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Jan 29 10:57:01.188838 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 29 10:57:01.189064 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jan 29 10:57:01.191402 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jan 29 10:57:01.191447 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 29 10:57:01.191466 kernel: acpiphp: Slot [1] registered Jan 29 10:57:01.191485 kernel: acpiphp: Slot [2] registered Jan 29 10:57:01.191503 kernel: acpiphp: Slot [3] registered Jan 29 10:57:01.191530 kernel: acpiphp: Slot [4] registered Jan 29 10:57:01.191549 kernel: acpiphp: Slot [5] registered Jan 29 10:57:01.191567 kernel: acpiphp: Slot [6] registered Jan 29 10:57:01.191585 kernel: acpiphp: Slot [7] registered Jan 29 10:57:01.191603 kernel: acpiphp: Slot [8] registered Jan 29 10:57:01.191621 kernel: acpiphp: Slot [9] registered Jan 29 10:57:01.191640 kernel: acpiphp: Slot [10] registered Jan 29 10:57:01.191658 kernel: acpiphp: Slot [11] registered Jan 29 10:57:01.191676 kernel: acpiphp: Slot [12] registered Jan 29 10:57:01.191694 kernel: acpiphp: Slot [13] registered Jan 29 10:57:01.191716 kernel: acpiphp: Slot [14] registered Jan 29 10:57:01.191734 kernel: acpiphp: Slot [15] registered Jan 29 10:57:01.191752 kernel: acpiphp: Slot [16] registered Jan 29 10:57:01.191770 kernel: acpiphp: Slot [17] registered Jan 29 10:57:01.191788 kernel: acpiphp: Slot [18] registered Jan 29 10:57:01.191805 kernel: acpiphp: Slot [19] registered Jan 29 10:57:01.191824 kernel: acpiphp: Slot [20] registered Jan 29 10:57:01.191842 kernel: acpiphp: Slot [21] registered Jan 29 10:57:01.191860 kernel: acpiphp: Slot [22] registered Jan 29 10:57:01.191882 kernel: acpiphp: Slot [23] registered Jan 29 10:57:01.191900 kernel: acpiphp: Slot [24] registered Jan 29 10:57:01.191918 kernel: acpiphp: Slot [25] registered Jan 29 10:57:01.191936 kernel: acpiphp: Slot [26] registered Jan 29 10:57:01.191954 kernel: acpiphp: Slot [27] registered Jan 29 10:57:01.191972 kernel: acpiphp: Slot [28] registered Jan 29 10:57:01.191990 kernel: acpiphp: Slot [29] registered Jan 29 10:57:01.192008 kernel: acpiphp: Slot [30] registered Jan 29 10:57:01.192026 kernel: acpiphp: Slot [31] registered Jan 29 10:57:01.192044 kernel: PCI host bridge to bus 0000:00 Jan 29 10:57:01.192279 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 29 10:57:01.192469 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 29 10:57:01.192653 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 29 10:57:01.192835 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jan 29 10:57:01.193073 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jan 29 10:57:01.194537 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jan 29 10:57:01.194793 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jan 29 10:57:01.195015 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 29 10:57:01.196301 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jan 29 10:57:01.196526 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 10:57:01.196741 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 29 10:57:01.196941 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jan 29 10:57:01.198521 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Jan 29 10:57:01.198784 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jan 29 10:57:01.198985 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 10:57:01.199226 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jan 29 10:57:01.199451 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jan 29 10:57:01.199655 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jan 29 10:57:01.199873 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jan 29 10:57:01.201136 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jan 29 10:57:01.203554 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 29 10:57:01.203757 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 29 10:57:01.203955 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 29 10:57:01.203981 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 29 10:57:01.204001 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 29 10:57:01.204020 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 29 10:57:01.204039 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 29 10:57:01.204058 kernel: iommu: Default domain type: Translated Jan 29 10:57:01.204088 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 29 10:57:01.204210 kernel: efivars: Registered efivars operations Jan 29 10:57:01.204229 kernel: vgaarb: loaded Jan 29 10:57:01.204248 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 29 10:57:01.204266 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 10:57:01.204285 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 10:57:01.204303 kernel: pnp: PnP ACPI init Jan 29 10:57:01.204557 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 29 10:57:01.204595 kernel: pnp: PnP ACPI: found 1 devices Jan 29 10:57:01.204614 kernel: NET: Registered PF_INET protocol family Jan 29 10:57:01.204632 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 10:57:01.204651 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 10:57:01.204669 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 10:57:01.204701 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 10:57:01.204722 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 10:57:01.204740 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 10:57:01.204758 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 10:57:01.204782 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 10:57:01.204801 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 10:57:01.204819 kernel: PCI: CLS 0 bytes, default 64 Jan 29 10:57:01.204836 kernel: kvm [1]: HYP mode not available Jan 29 10:57:01.204854 kernel: Initialise system trusted keyrings Jan 29 10:57:01.204872 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 10:57:01.204890 kernel: Key type asymmetric registered Jan 29 10:57:01.204908 kernel: Asymmetric key parser 'x509' registered Jan 29 10:57:01.204926 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 29 10:57:01.204949 kernel: io scheduler mq-deadline registered Jan 29 
10:57:01.204967 kernel: io scheduler kyber registered Jan 29 10:57:01.204985 kernel: io scheduler bfq registered Jan 29 10:57:01.205230 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 29 10:57:01.205258 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 10:57:01.205277 kernel: ACPI: button: Power Button [PWRB] Jan 29 10:57:01.205296 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 29 10:57:01.205314 kernel: ACPI: button: Sleep Button [SLPB] Jan 29 10:57:01.205337 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 10:57:01.205357 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 29 10:57:01.205559 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 29 10:57:01.205584 kernel: printk: console [ttyS0] disabled Jan 29 10:57:01.205603 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 29 10:57:01.205621 kernel: printk: console [ttyS0] enabled Jan 29 10:57:01.205639 kernel: printk: bootconsole [uart0] disabled Jan 29 10:57:01.205657 kernel: thunder_xcv, ver 1.0 Jan 29 10:57:01.205675 kernel: thunder_bgx, ver 1.0 Jan 29 10:57:01.205693 kernel: nicpf, ver 1.0 Jan 29 10:57:01.205716 kernel: nicvf, ver 1.0 Jan 29 10:57:01.205930 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 10:57:01.206147 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T10:57:00 UTC (1738148220) Jan 29 10:57:01.206174 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 10:57:01.206194 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jan 29 10:57:01.206212 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 10:57:01.206231 kernel: watchdog: Hard watchdog permanently disabled Jan 29 10:57:01.206254 kernel: NET: Registered PF_INET6 protocol family Jan 29 10:57:01.206272 kernel: Segment Routing with IPv6 Jan 29 10:57:01.206290 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 10:57:01.206308 kernel: NET: Registered PF_PACKET protocol family Jan 29 10:57:01.206326 kernel: Key type dns_resolver registered Jan 29 10:57:01.206344 kernel: registered taskstats version 1 Jan 29 10:57:01.206362 kernel: Loading compiled-in X.509 certificates Jan 29 10:57:01.206381 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f3333311a24aa8c58222f4e98a07eaa1f186ad1a' Jan 29 10:57:01.206399 kernel: Key type .fscrypt registered Jan 29 10:57:01.206416 kernel: Key type fscrypt-provisioning registered Jan 29 10:57:01.206440 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 10:57:01.206458 kernel: ima: Allocated hash algorithm: sha1 Jan 29 10:57:01.206476 kernel: ima: No architecture policies found Jan 29 10:57:01.206495 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 10:57:01.206513 kernel: clk: Disabling unused clocks Jan 29 10:57:01.206556 kernel: Freeing unused kernel memory: 39680K Jan 29 10:57:01.206578 kernel: Run /init as init process Jan 29 10:57:01.206596 kernel: with arguments: Jan 29 10:57:01.206618 kernel: /init Jan 29 10:57:01.206643 kernel: with environment: Jan 29 10:57:01.206661 kernel: HOME=/ Jan 29 10:57:01.206679 kernel: TERM=linux Jan 29 10:57:01.206697 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 10:57:01.206719 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 10:57:01.206743 systemd[1]: Detected virtualization amazon. Jan 29 10:57:01.206764 systemd[1]: Detected architecture arm64. Jan 29 10:57:01.206788 systemd[1]: Running in initrd. Jan 29 10:57:01.206808 systemd[1]: No hostname configured, using default hostname. Jan 29 10:57:01.206827 systemd[1]: Hostname set to . Jan 29 10:57:01.206848 systemd[1]: Initializing machine ID from VM UUID. Jan 29 10:57:01.206867 systemd[1]: Queued start job for default target initrd.target. Jan 29 10:57:01.206887 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 10:57:01.206907 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:57:01.206928 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 10:57:01.206952 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 10:57:01.206973 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 10:57:01.206994 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 10:57:01.207016 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 10:57:01.207037 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 10:57:01.207056 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:57:01.207076 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:57:01.207134 systemd[1]: Reached target paths.target - Path Units. Jan 29 10:57:01.207161 systemd[1]: Reached target slices.target - Slice Units. Jan 29 10:57:01.207182 systemd[1]: Reached target swap.target - Swaps. Jan 29 10:57:01.207202 systemd[1]: Reached target timers.target - Timer Units. Jan 29 10:57:01.207222 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 10:57:01.207242 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 10:57:01.207262 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 10:57:01.207282 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 10:57:01.207303 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 29 10:57:01.207355 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 10:57:01.207376 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 10:57:01.207396 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 10:57:01.207416 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 10:57:01.207436 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 10:57:01.207456 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 10:57:01.207476 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 10:57:01.207495 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 10:57:01.207519 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 10:57:01.207539 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:57:01.207559 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 10:57:01.207580 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:57:01.207640 systemd-journald[252]: Collecting audit messages is disabled. Jan 29 10:57:01.207687 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 10:57:01.207709 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 10:57:01.207729 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:57:01.207750 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 10:57:01.207774 systemd-journald[252]: Journal started Jan 29 10:57:01.207811 systemd-journald[252]: Runtime Journal (/run/log/journal/ec27c723cf6834df18d1c0176d894981) is 8.0M, max 75.3M, 67.3M free. Jan 29 10:57:01.206570 systemd-modules-load[253]: Inserted module 'overlay' Jan 29 10:57:01.223292 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 10:57:01.234203 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 10:57:01.235470 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:57:01.241782 kernel: Bridge firewalling registered Jan 29 10:57:01.238027 systemd-modules-load[253]: Inserted module 'br_netfilter' Jan 29 10:57:01.248662 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 10:57:01.254650 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 10:57:01.258230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 10:57:01.285556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 10:57:01.290671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 10:57:01.307720 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:57:01.322550 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 10:57:01.328668 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:57:01.344175 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:57:01.356408 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 29 10:57:01.365876 dracut-cmdline[285]: dracut-dracut-053 Jan 29 10:57:01.372155 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e Jan 29 10:57:01.432365 systemd-resolved[292]: Positive Trust Anchors: Jan 29 10:57:01.433413 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 10:57:01.433479 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 10:57:01.529115 kernel: SCSI subsystem initialized Jan 29 10:57:01.536127 kernel: Loading iSCSI transport class v2.0-870. Jan 29 10:57:01.549134 kernel: iscsi: registered transport (tcp) Jan 29 10:57:01.571130 kernel: iscsi: registered transport (qla4xxx) Jan 29 10:57:01.571199 kernel: QLogic iSCSI HBA Driver Jan 29 10:57:01.662142 kernel: random: crng init done Jan 29 10:57:01.662391 systemd-resolved[292]: Defaulting to hostname 'linux'. Jan 29 10:57:01.665338 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 10:57:01.667536 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 10:57:01.693280 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 10:57:01.704474 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 10:57:01.743142 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 10:57:01.743217 kernel: device-mapper: uevent: version 1.0.3 Jan 29 10:57:01.746145 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 10:57:01.812151 kernel: raid6: neonx8 gen() 6653 MB/s Jan 29 10:57:01.829136 kernel: raid6: neonx4 gen() 6479 MB/s Jan 29 10:57:01.846144 kernel: raid6: neonx2 gen() 5402 MB/s Jan 29 10:57:01.863138 kernel: raid6: neonx1 gen() 3934 MB/s Jan 29 10:57:01.880124 kernel: raid6: int64x8 gen() 3795 MB/s Jan 29 10:57:01.897131 kernel: raid6: int64x4 gen() 3694 MB/s Jan 29 10:57:01.914130 kernel: raid6: int64x2 gen() 3578 MB/s Jan 29 10:57:01.931873 kernel: raid6: int64x1 gen() 2767 MB/s Jan 29 10:57:01.931919 kernel: raid6: using algorithm neonx8 gen() 6653 MB/s Jan 29 10:57:01.949858 kernel: raid6: .... 
xor() 4907 MB/s, rmw enabled Jan 29 10:57:01.949906 kernel: raid6: using neon recovery algorithm Jan 29 10:57:01.958251 kernel: xor: measuring software checksum speed Jan 29 10:57:01.958313 kernel: 8regs : 10969 MB/sec Jan 29 10:57:01.959334 kernel: 32regs : 11947 MB/sec Jan 29 10:57:01.960493 kernel: arm64_neon : 9581 MB/sec Jan 29 10:57:01.960525 kernel: xor: using function: 32regs (11947 MB/sec) Jan 29 10:57:02.044142 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 10:57:02.063338 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 10:57:02.074388 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:57:02.116732 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jan 29 10:57:02.126538 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 10:57:02.145498 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 10:57:02.172637 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Jan 29 10:57:02.225903 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 10:57:02.244528 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 10:57:02.355298 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:57:02.379343 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 10:57:02.419181 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 10:57:02.424946 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 10:57:02.430755 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:57:02.433079 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 10:57:02.457515 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 10:57:02.490796 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 10:57:02.555552 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 10:57:02.555615 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 29 10:57:02.583794 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 29 10:57:02.584060 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 29 10:57:02.584440 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:42:48:f3:b7:c3 Jan 29 10:57:02.562721 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 10:57:02.562959 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:57:02.565600 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:57:02.567728 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 10:57:02.567973 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:57:02.572374 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:57:02.584559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:57:02.619642 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 10:57:02.619720 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 29 10:57:02.621494 (udev-worker)[527]: Network interface NamePolicy= disabled on kernel command line. 
Jan 29 10:57:02.630149 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 29 10:57:02.636553 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 10:57:02.636626 kernel: GPT:9289727 != 16777215 Jan 29 10:57:02.637798 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 10:57:02.638577 kernel: GPT:9289727 != 16777215 Jan 29 10:57:02.639636 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 10:57:02.640514 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 10:57:02.642543 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:57:02.662508 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:57:02.708186 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:57:02.732345 kernel: BTRFS: device fsid b5bc7ecc-f31a-46c7-9582-5efca7819025 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (529) Jan 29 10:57:02.782806 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 29 10:57:02.790694 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (528) Jan 29 10:57:02.849825 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 29 10:57:02.877499 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 29 10:57:02.883029 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 29 10:57:02.910910 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 10:57:02.931485 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 10:57:02.944632 disk-uuid[663]: Primary Header is updated. Jan 29 10:57:02.944632 disk-uuid[663]: Secondary Entries is updated. Jan 29 10:57:02.944632 disk-uuid[663]: Secondary Header is updated. Jan 29 10:57:02.953179 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 10:57:02.964129 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 10:57:03.970165 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 10:57:03.971143 disk-uuid[664]: The operation has completed successfully. Jan 29 10:57:04.159438 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 10:57:04.163179 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 10:57:04.198378 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 10:57:04.215006 sh[924]: Success Jan 29 10:57:04.243624 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 10:57:04.344254 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 10:57:04.359723 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 10:57:04.367252 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 10:57:04.401482 kernel: BTRFS info (device dm-0): first mount of filesystem b5bc7ecc-f31a-46c7-9582-5efca7819025 Jan 29 10:57:04.401545 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:57:04.401571 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 10:57:04.403181 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 10:57:04.404353 kernel: BTRFS info (device dm-0): using free space tree Jan 29 10:57:04.487132 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 10:57:04.512748 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 10:57:04.516621 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 10:57:04.533345 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 10:57:04.540387 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 10:57:04.557647 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 10:57:04.557720 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:57:04.557751 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 10:57:04.564138 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 10:57:04.582457 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 10:57:04.584839 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 10:57:04.603470 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 10:57:04.617475 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 10:57:04.711264 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 10:57:04.724441 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 10:57:04.778904 systemd-networkd[1129]: lo: Link UP Jan 29 10:57:04.778929 systemd-networkd[1129]: lo: Gained carrier Jan 29 10:57:04.784036 systemd-networkd[1129]: Enumeration completed Jan 29 10:57:04.785910 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 10:57:04.786065 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:57:04.786072 systemd-networkd[1129]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 10:57:04.790437 systemd[1]: Reached target network.target - Network. Jan 29 10:57:04.800843 systemd-networkd[1129]: eth0: Link UP Jan 29 10:57:04.800863 systemd-networkd[1129]: eth0: Gained carrier Jan 29 10:57:04.800880 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 10:57:04.825167 systemd-networkd[1129]: eth0: DHCPv4 address 172.31.18.182/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 10:57:05.002476 ignition[1043]: Ignition 2.20.0 Jan 29 10:57:05.002986 ignition[1043]: Stage: fetch-offline Jan 29 10:57:05.003463 ignition[1043]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:57:05.003487 ignition[1043]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:57:05.003970 ignition[1043]: Ignition finished successfully Jan 29 10:57:05.012972 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 10:57:05.021460 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 10:57:05.050421 ignition[1140]: Ignition 2.20.0 Jan 29 10:57:05.050450 ignition[1140]: Stage: fetch Jan 29 10:57:05.052000 ignition[1140]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:57:05.052026 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:57:05.052584 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:57:05.062956 ignition[1140]: PUT result: OK Jan 29 10:57:05.066121 ignition[1140]: parsed url from cmdline: "" Jan 29 10:57:05.066137 ignition[1140]: no config URL provided Jan 29 10:57:05.066153 ignition[1140]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 10:57:05.066178 ignition[1140]: no config at "/usr/lib/ignition/user.ign" Jan 29 10:57:05.066209 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:57:05.073342 ignition[1140]: PUT result: OK Jan 29 10:57:05.073442 ignition[1140]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 29 10:57:05.076932 ignition[1140]: GET result: OK Jan 29 10:57:05.077014 ignition[1140]: parsing config with SHA512: 7dbf445b80a1b7fbe4ccac08b429a2e32052467e3771071c80dba490fc4d4160eae153618ed496fee78278caf5c9825f0e52420a4edea07124850e6495b86cb2 Jan 29 10:57:05.083537 unknown[1140]: fetched base config from "system" Jan 29 10:57:05.083878 unknown[1140]: fetched base config from "system" Jan 29 10:57:05.084312 ignition[1140]: fetch: fetch complete Jan 29 10:57:05.083893 unknown[1140]: fetched user config from "aws" Jan 29 10:57:05.084324 ignition[1140]: fetch: fetch passed Jan 29 10:57:05.091824 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 10:57:05.084407 ignition[1140]: Ignition finished successfully Jan 29 10:57:05.110450 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 10:57:05.134276 ignition[1146]: Ignition 2.20.0 Jan 29 10:57:05.134306 ignition[1146]: Stage: kargs Jan 29 10:57:05.135842 ignition[1146]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:57:05.135868 ignition[1146]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:57:05.136650 ignition[1146]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:57:05.141944 ignition[1146]: PUT result: OK Jan 29 10:57:05.147353 ignition[1146]: kargs: kargs passed Jan 29 10:57:05.147462 ignition[1146]: Ignition finished successfully Jan 29 10:57:05.151179 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 10:57:05.166405 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 29 10:57:05.189437 ignition[1152]: Ignition 2.20.0 Jan 29 10:57:05.189458 ignition[1152]: Stage: disks Jan 29 10:57:05.190058 ignition[1152]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:57:05.190131 ignition[1152]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:57:05.190288 ignition[1152]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:57:05.193036 ignition[1152]: PUT result: OK Jan 29 10:57:05.202265 ignition[1152]: disks: disks passed Jan 29 10:57:05.202375 ignition[1152]: Ignition finished successfully Jan 29 10:57:05.208165 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 10:57:05.209932 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 10:57:05.214447 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 10:57:05.218702 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 10:57:05.222520 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 10:57:05.225954 systemd[1]: Reached target basic.target - Basic System. Jan 29 10:57:05.241360 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 10:57:05.286331 systemd-fsck[1160]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 10:57:05.293496 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 10:57:05.310464 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 10:57:05.389136 kernel: EXT4-fs (nvme0n1p9): mounted filesystem bd47c032-97f4-4b3a-b174-3601de374086 r/w with ordered data mode. Quota mode: none. Jan 29 10:57:05.390458 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 10:57:05.393418 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 10:57:05.403274 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 10:57:05.411374 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 10:57:05.419635 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 10:57:05.437065 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1179) Jan 29 10:57:05.437123 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 10:57:05.437167 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:57:05.437196 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 10:57:05.419731 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 10:57:05.419784 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 10:57:05.448122 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 10:57:05.453003 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 10:57:05.456832 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 10:57:05.471520 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 29 10:57:05.928177 initrd-setup-root[1203]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 10:57:05.951501 initrd-setup-root[1210]: cut: /sysroot/etc/group: No such file or directory Jan 29 10:57:05.960026 initrd-setup-root[1217]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 10:57:05.968553 initrd-setup-root[1224]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 10:57:06.246309 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 10:57:06.257289 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 10:57:06.264384 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 10:57:06.276951 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 10:57:06.279264 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 10:57:06.324189 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 10:57:06.328914 ignition[1292]: INFO : Ignition 2.20.0 Jan 29 10:57:06.328914 ignition[1292]: INFO : Stage: mount Jan 29 10:57:06.332034 ignition[1292]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:57:06.332034 ignition[1292]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:57:06.336245 ignition[1292]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:57:06.338831 ignition[1292]: INFO : PUT result: OK Jan 29 10:57:06.342735 ignition[1292]: INFO : mount: mount passed Jan 29 10:57:06.342735 ignition[1292]: INFO : Ignition finished successfully Jan 29 10:57:06.346353 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 10:57:06.362395 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 10:57:06.386518 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 10:57:06.402153 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1303) Jan 29 10:57:06.406166 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 10:57:06.406232 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:57:06.406258 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 10:57:06.412127 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 10:57:06.415585 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 10:57:06.453993 ignition[1320]: INFO : Ignition 2.20.0 Jan 29 10:57:06.453993 ignition[1320]: INFO : Stage: files Jan 29 10:57:06.457577 ignition[1320]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:57:06.457577 ignition[1320]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:57:06.457577 ignition[1320]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:57:06.464693 ignition[1320]: INFO : PUT result: OK Jan 29 10:57:06.467523 ignition[1320]: DEBUG : files: compiled without relabeling support, skipping Jan 29 10:57:06.469673 ignition[1320]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 10:57:06.469673 ignition[1320]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 10:57:06.490929 ignition[1320]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 10:57:06.493835 ignition[1320]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 10:57:06.498970 unknown[1320]: wrote ssh authorized keys file for user: core Jan 29 10:57:06.501225 ignition[1320]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 10:57:06.513915 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 29 10:57:06.517134 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 10:57:06.517134 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 10:57:06.517134 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 10:57:06.517134 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 10:57:06.517134 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 10:57:06.517134 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 10:57:06.517134 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 29 10:57:06.543718 systemd-networkd[1129]: eth0: Gained IPv6LL Jan 29 10:57:06.920991 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 29 10:57:07.275977 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 10:57:07.280384 ignition[1320]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 10:57:07.280384 ignition[1320]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 10:57:07.280384 ignition[1320]: INFO : files: files passed Jan 29 10:57:07.280384 ignition[1320]: INFO : Ignition finished successfully Jan 29 10:57:07.288620 systemd[1]: Finished 
ignition-files.service - Ignition (files). Jan 29 10:57:07.304384 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 10:57:07.309673 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 10:57:07.320192 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 10:57:07.323173 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 10:57:07.343532 initrd-setup-root-after-ignition[1349]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:57:07.343532 initrd-setup-root-after-ignition[1349]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:57:07.351293 initrd-setup-root-after-ignition[1353]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:57:07.356889 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 10:57:07.362842 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 10:57:07.375381 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 10:57:07.422669 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 10:57:07.423084 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 10:57:07.428601 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 10:57:07.431629 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 10:57:07.434030 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 10:57:07.445859 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 10:57:07.475239 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 10:57:07.486391 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 10:57:07.514115 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 10:57:07.518466 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:57:07.522729 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 10:57:07.523557 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 10:57:07.523781 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 10:57:07.525122 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 10:57:07.525685 systemd[1]: Stopped target basic.target - Basic System. Jan 29 10:57:07.526301 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 10:57:07.526864 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 10:57:07.527194 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 10:57:07.527464 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 10:57:07.527753 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 10:57:07.528065 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 10:57:07.528634 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 10:57:07.528928 systemd[1]: Stopped target swap.target - Swaps. Jan 29 10:57:07.529177 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 29 10:57:07.529385 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 10:57:07.530062 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:57:07.530671 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:57:07.531207 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 10:57:07.548823 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 10:57:07.552823 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 10:57:07.553045 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 10:57:07.555484 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 10:57:07.555701 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 10:57:07.558267 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 10:57:07.558488 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 10:57:07.610411 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 10:57:07.618413 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 10:57:07.628505 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 10:57:07.628793 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:57:07.632039 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 10:57:07.632319 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 10:57:07.652410 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 10:57:07.654597 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 10:57:07.665808 ignition[1373]: INFO : Ignition 2.20.0 Jan 29 10:57:07.668421 ignition[1373]: INFO : Stage: umount Jan 29 10:57:07.668421 ignition[1373]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:57:07.671685 ignition[1373]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:57:07.673803 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 10:57:07.676308 ignition[1373]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:57:07.679628 ignition[1373]: INFO : PUT result: OK Jan 29 10:57:07.683881 ignition[1373]: INFO : umount: umount passed Jan 29 10:57:07.685586 ignition[1373]: INFO : Ignition finished successfully Jan 29 10:57:07.688193 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 10:57:07.690155 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 10:57:07.693055 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 10:57:07.693420 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 10:57:07.695844 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 10:57:07.695941 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 10:57:07.699553 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 10:57:07.699637 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 10:57:07.701506 systemd[1]: Stopped target network.target - Network. Jan 29 10:57:07.703177 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 10:57:07.703269 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 10:57:07.706244 systemd[1]: Stopped target paths.target - Path Units. 
Jan 29 10:57:07.706431 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 10:57:07.709181 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:57:07.711445 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 10:57:07.713155 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 10:57:07.715300 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 10:57:07.715380 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 10:57:07.720343 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 10:57:07.720418 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 10:57:07.722306 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 10:57:07.722388 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 10:57:07.724294 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 10:57:07.724372 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 10:57:07.728000 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 10:57:07.736296 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 10:57:07.745728 systemd-networkd[1129]: eth0: DHCPv6 lease lost Jan 29 10:57:07.758585 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 10:57:07.758801 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 10:57:07.774595 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 10:57:07.774976 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 10:57:07.785855 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 10:57:07.785980 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 10:57:07.808425 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 10:57:07.812196 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 10:57:07.812313 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 10:57:07.814952 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 10:57:07.815057 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:57:07.822014 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 10:57:07.822421 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 10:57:07.828914 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 10:57:07.829011 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:57:07.831483 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:57:07.870488 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 10:57:07.873598 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 10:57:07.878846 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 10:57:07.878937 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 10:57:07.881215 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 10:57:07.881282 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 10:57:07.883730 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 29 10:57:07.883824 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 10:57:07.886045 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 10:57:07.886162 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 10:57:07.897748 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 10:57:07.897859 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:57:07.909418 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 10:57:07.920761 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 10:57:07.920884 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 10:57:07.935658 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 10:57:07.935766 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 10:57:07.938205 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 10:57:07.938284 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:57:07.940655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 10:57:07.940729 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:57:07.957925 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 10:57:07.958174 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 10:57:07.977074 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 10:57:07.977511 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 10:57:07.991896 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 10:57:07.992690 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 10:57:07.999460 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 10:57:08.001522 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 10:57:08.001616 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 10:57:08.022474 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 10:57:08.046257 systemd[1]: Switching root. Jan 29 10:57:08.107249 systemd-journald[252]: Journal stopped Jan 29 10:57:10.365366 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Jan 29 10:57:10.365511 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 10:57:10.365563 kernel: SELinux: policy capability open_perms=1 Jan 29 10:57:10.365594 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 10:57:10.365623 kernel: SELinux: policy capability always_check_network=0 Jan 29 10:57:10.365662 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 10:57:10.365695 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 10:57:10.365727 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 10:57:10.365755 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 10:57:10.365784 kernel: audit: type=1403 audit(1738148228.713:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 10:57:10.365821 systemd[1]: Successfully loaded SELinux policy in 46.888ms. Jan 29 10:57:10.365867 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.045ms. 
Jan 29 10:57:10.365901 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 10:57:10.365934 systemd[1]: Detected virtualization amazon. Jan 29 10:57:10.365973 systemd[1]: Detected architecture arm64. Jan 29 10:57:10.366014 systemd[1]: Detected first boot. Jan 29 10:57:10.366045 systemd[1]: Initializing machine ID from VM UUID. Jan 29 10:57:10.373198 zram_generator::config[1416]: No configuration found. Jan 29 10:57:10.373262 systemd[1]: Populated /etc with preset unit settings. Jan 29 10:57:10.373296 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 10:57:10.373328 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 10:57:10.373361 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 10:57:10.373395 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 10:57:10.373425 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 10:57:10.373465 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 10:57:10.373497 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 10:57:10.373531 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 10:57:10.373562 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 10:57:10.373593 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 10:57:10.373625 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 10:57:10.373654 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 10:57:10.373683 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:57:10.373712 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 10:57:10.373746 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 10:57:10.373780 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 10:57:10.373810 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 10:57:10.373840 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 10:57:10.373874 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:57:10.373911 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 10:57:10.373943 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 10:57:10.373974 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 10:57:10.374020 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 10:57:10.374053 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:57:10.374142 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 10:57:10.374175 systemd[1]: Reached target slices.target - Slice Units. Jan 29 10:57:10.374208 systemd[1]: Reached target swap.target - Swaps. 
Jan 29 10:57:10.374237 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 10:57:10.374269 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 10:57:10.374300 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 10:57:10.374336 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 10:57:10.374368 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 10:57:10.374397 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 10:57:10.374428 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 10:57:10.374458 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 10:57:10.374487 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 10:57:10.374516 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 10:57:10.374546 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 10:57:10.374578 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 10:57:10.374613 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 10:57:10.374645 systemd[1]: Reached target machines.target - Containers. Jan 29 10:57:10.374675 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 10:57:10.374705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:57:10.374734 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 10:57:10.374763 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 10:57:10.374793 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 10:57:10.374821 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 10:57:10.374853 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 10:57:10.374886 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 10:57:10.374918 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 10:57:10.374949 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 10:57:10.374978 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 10:57:10.375010 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 10:57:10.375039 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 10:57:10.375069 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 10:57:10.383675 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 10:57:10.383756 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 10:57:10.383792 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 10:57:10.383824 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 10:57:10.383854 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 10:57:10.383885 systemd[1]: verity-setup.service: Deactivated successfully. 
Jan 29 10:57:10.383918 systemd[1]: Stopped verity-setup.service. Jan 29 10:57:10.383947 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 10:57:10.383976 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 10:57:10.384006 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 10:57:10.384040 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 10:57:10.384069 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 10:57:10.386181 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 10:57:10.386249 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:57:10.386281 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 10:57:10.386322 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 10:57:10.386352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 10:57:10.386382 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 10:57:10.386412 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 10:57:10.386445 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 10:57:10.386474 kernel: fuse: init (API version 7.39) Jan 29 10:57:10.386505 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 10:57:10.386534 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 10:57:10.386563 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 10:57:10.386602 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 10:57:10.386631 kernel: ACPI: bus type drm_connector registered Jan 29 10:57:10.386660 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 10:57:10.386689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 10:57:10.386719 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 10:57:10.386751 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 10:57:10.386779 kernel: loop: module loaded Jan 29 10:57:10.386808 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 10:57:10.386838 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 10:57:10.386921 systemd-journald[1498]: Collecting audit messages is disabled. Jan 29 10:57:10.386974 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 10:57:10.387007 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 10:57:10.387038 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 10:57:10.387074 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 10:57:10.398363 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 10:57:10.398419 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 10:57:10.398450 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 10:57:10.398484 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 29 10:57:10.398517 systemd-journald[1498]: Journal started Jan 29 10:57:10.398570 systemd-journald[1498]: Runtime Journal (/run/log/journal/ec27c723cf6834df18d1c0176d894981) is 8.0M, max 75.3M, 67.3M free. Jan 29 10:57:09.758664 systemd[1]: Queued start job for default target multi-user.target. Jan 29 10:57:09.817790 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 29 10:57:09.818620 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 10:57:10.415189 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 10:57:10.432136 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 10:57:10.432239 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:57:10.445128 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 10:57:10.445225 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 10:57:10.453152 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 10:57:10.456145 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 10:57:10.474609 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 10:57:10.495244 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 10:57:10.499658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:57:10.502577 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 10:57:10.505874 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 10:57:10.519548 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 10:57:10.537903 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 10:57:10.555140 kernel: loop0: detected capacity change from 0 to 116808 Jan 29 10:57:10.560600 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Jan 29 10:57:10.560633 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Jan 29 10:57:10.573965 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 10:57:10.587241 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 10:57:10.594253 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 10:57:10.597064 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 10:57:10.610415 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 10:57:10.629468 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 10:57:10.656685 systemd-journald[1498]: Time spent on flushing to /var/log/journal/ec27c723cf6834df18d1c0176d894981 is 160.300ms for 902 entries. Jan 29 10:57:10.656685 systemd-journald[1498]: System Journal (/var/log/journal/ec27c723cf6834df18d1c0176d894981) is 8.0M, max 195.6M, 187.6M free. Jan 29 10:57:10.829765 systemd-journald[1498]: Received client request to flush runtime journal. 
Jan 29 10:57:10.829869 kernel: loop1: detected capacity change from 0 to 53784 Jan 29 10:57:10.753250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:57:10.763420 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 10:57:10.807076 udevadm[1563]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 10:57:10.816804 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 10:57:10.830475 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 10:57:10.840941 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 10:57:10.888570 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 10:57:10.892630 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 10:57:10.911286 kernel: loop2: detected capacity change from 0 to 194096 Jan 29 10:57:10.921951 systemd-tmpfiles[1567]: ACLs are not supported, ignoring. Jan 29 10:57:10.921991 systemd-tmpfiles[1567]: ACLs are not supported, ignoring. Jan 29 10:57:10.943193 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 10:57:10.961162 kernel: loop3: detected capacity change from 0 to 113536 Jan 29 10:57:11.063145 kernel: loop4: detected capacity change from 0 to 116808 Jan 29 10:57:11.081212 kernel: loop5: detected capacity change from 0 to 53784 Jan 29 10:57:11.096244 kernel: loop6: detected capacity change from 0 to 194096 Jan 29 10:57:11.163189 kernel: loop7: detected capacity change from 0 to 113536 Jan 29 10:57:11.172598 (sd-merge)[1575]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 29 10:57:11.174121 (sd-merge)[1575]: Merged extensions into '/usr'. Jan 29 10:57:11.184174 systemd[1]: Reloading requested from client PID 1530 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 10:57:11.184204 systemd[1]: Reloading... Jan 29 10:57:11.365836 zram_generator::config[1598]: No configuration found. Jan 29 10:57:11.631988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:57:11.747087 systemd[1]: Reloading finished in 562 ms. Jan 29 10:57:11.790223 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 10:57:11.801427 systemd[1]: Starting ensure-sysext.service... Jan 29 10:57:11.811556 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 10:57:11.854913 systemd[1]: Reloading requested from client PID 1652 ('systemctl') (unit ensure-sysext.service)... Jan 29 10:57:11.854960 systemd[1]: Reloading... Jan 29 10:57:11.859464 systemd-tmpfiles[1653]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 10:57:11.863275 systemd-tmpfiles[1653]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 10:57:11.868313 systemd-tmpfiles[1653]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 10:57:11.868974 systemd-tmpfiles[1653]: ACLs are not supported, ignoring. Jan 29 10:57:11.869194 systemd-tmpfiles[1653]: ACLs are not supported, ignoring. 
Jan 29 10:57:11.891669 systemd-tmpfiles[1653]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 10:57:11.891702 systemd-tmpfiles[1653]: Skipping /boot Jan 29 10:57:11.956477 systemd-tmpfiles[1653]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 10:57:11.960188 systemd-tmpfiles[1653]: Skipping /boot Jan 29 10:57:11.984900 ldconfig[1524]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 10:57:12.060945 zram_generator::config[1681]: No configuration found. Jan 29 10:57:12.277580 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:57:12.385111 systemd[1]: Reloading finished in 529 ms. Jan 29 10:57:12.414125 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 10:57:12.416824 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 10:57:12.425267 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:57:12.446380 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 10:57:12.457443 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 10:57:12.464426 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 10:57:12.481642 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 10:57:12.491399 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:57:12.499003 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 10:57:12.523150 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 10:57:12.530455 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:57:12.545881 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 10:57:12.551603 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 10:57:12.557575 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 10:57:12.562425 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:57:12.568308 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:57:12.568676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:57:12.576178 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:57:12.584673 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 10:57:12.588523 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:57:12.588961 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 10:57:12.599133 systemd[1]: Finished ensure-sysext.service. Jan 29 10:57:12.615962 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 29 10:57:12.619434 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 10:57:12.622503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 10:57:12.622873 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 10:57:12.632001 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 10:57:12.647631 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 10:57:12.660325 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 10:57:12.664284 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 10:57:12.667251 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 10:57:12.684836 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 10:57:12.693417 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 10:57:12.697648 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 10:57:12.700873 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 10:57:12.701250 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 10:57:12.706171 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 10:57:12.734475 systemd-udevd[1741]: Using default interface naming scheme 'v255'. Jan 29 10:57:12.747561 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 10:57:12.761422 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 10:57:12.770276 augenrules[1780]: No rules Jan 29 10:57:12.773852 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 10:57:12.775222 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 10:57:12.795502 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 10:57:12.811507 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 10:57:12.918273 (udev-worker)[1802]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:57:13.021566 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 10:57:13.086583 systemd-networkd[1790]: lo: Link UP Jan 29 10:57:13.087786 systemd-networkd[1790]: lo: Gained carrier Jan 29 10:57:13.091881 systemd-networkd[1790]: Enumeration completed Jan 29 10:57:13.092082 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 10:57:13.096511 systemd-networkd[1790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:57:13.097024 systemd-networkd[1790]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 10:57:13.100819 systemd-networkd[1790]: eth0: Link UP Jan 29 10:57:13.101395 systemd-networkd[1790]: eth0: Gained carrier Jan 29 10:57:13.102213 systemd-networkd[1790]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 10:57:13.102368 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 10:57:13.115554 systemd-networkd[1790]: eth0: DHCPv4 address 172.31.18.182/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 10:57:13.141621 systemd-resolved[1739]: Positive Trust Anchors: Jan 29 10:57:13.141676 systemd-resolved[1739]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 10:57:13.141743 systemd-resolved[1739]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 10:57:13.151745 systemd-resolved[1739]: Defaulting to hostname 'linux'. Jan 29 10:57:13.156429 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 10:57:13.159231 systemd[1]: Reached target network.target - Network. Jan 29 10:57:13.161013 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 10:57:13.228121 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1798) Jan 29 10:57:13.232668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:57:13.381574 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:57:13.430833 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 10:57:13.436242 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 10:57:13.443403 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 10:57:13.451573 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 10:57:13.474133 lvm[1910]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 10:57:13.480233 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 10:57:13.517165 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 10:57:13.519944 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:57:13.522219 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 10:57:13.524494 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 10:57:13.526972 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 10:57:13.529596 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 10:57:13.532420 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 10:57:13.534708 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 10:57:13.536948 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
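The DHCPv4 lease reported above (172.31.18.182/20 with gateway 172.31.16.1) can be cross-checked with a few lines of Python using the standard-library ipaddress module; this is only a sanity check of the logged values, not part of the boot flow.

```python
# Sanity check of the DHCPv4 lease logged by systemd-networkd above:
# the /20 containing 172.31.18.182 also contains the gateway 172.31.16.1.
import ipaddress

iface = ipaddress.ip_interface("172.31.18.182/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                # 172.31.16.0/20
print(gateway in iface.network)     # True
print(iface.network.num_addresses)  # 4096 addresses in the /20 block
```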
Jan 29 10:57:13.537106 systemd[1]: Reached target paths.target - Path Units. Jan 29 10:57:13.538888 systemd[1]: Reached target timers.target - Timer Units. Jan 29 10:57:13.541982 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 10:57:13.546776 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 10:57:13.560405 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 10:57:13.564993 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 10:57:13.568345 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 10:57:13.571445 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 10:57:13.573324 systemd[1]: Reached target basic.target - Basic System. Jan 29 10:57:13.575376 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 10:57:13.575441 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 10:57:13.579369 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 10:57:13.584002 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 10:57:13.597942 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 10:57:13.603204 lvm[1917]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 10:57:13.604003 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 10:57:13.609459 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 10:57:13.612186 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 10:57:13.619149 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 10:57:13.626446 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 10:57:13.634337 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 10:57:13.656457 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 10:57:13.663444 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 10:57:13.678489 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 10:57:13.681365 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 10:57:13.690560 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 10:57:13.693654 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 10:57:13.705307 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 10:57:13.729037 jq[1921]: false Jan 29 10:57:13.737663 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 10:57:13.740036 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 29 10:57:13.761811 update_engine[1930]: I20250129 10:57:13.761584 1930 main.cc:92] Flatcar Update Engine starting Jan 29 10:57:13.780129 (ntainerd)[1937]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 10:57:13.782219 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 10:57:13.782612 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 10:57:13.801171 extend-filesystems[1922]: Found loop4 Jan 29 10:57:13.801171 extend-filesystems[1922]: Found loop5 Jan 29 10:57:13.801171 extend-filesystems[1922]: Found loop6 Jan 29 10:57:13.801171 extend-filesystems[1922]: Found loop7 Jan 29 10:57:13.801171 extend-filesystems[1922]: Found nvme0n1 Jan 29 10:57:13.801171 extend-filesystems[1922]: Found nvme0n1p1 Jan 29 10:57:13.801171 extend-filesystems[1922]: Found nvme0n1p2 Jan 29 10:57:13.816484 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 10:57:13.831437 extend-filesystems[1922]: Found nvme0n1p3 Jan 29 10:57:13.831437 extend-filesystems[1922]: Found usr Jan 29 10:57:13.831437 extend-filesystems[1922]: Found nvme0n1p4 Jan 29 10:57:13.831437 extend-filesystems[1922]: Found nvme0n1p6 Jan 29 10:57:13.831437 extend-filesystems[1922]: Found nvme0n1p7 Jan 29 10:57:13.831437 extend-filesystems[1922]: Found nvme0n1p9 Jan 29 10:57:13.831437 extend-filesystems[1922]: Checking size of /dev/nvme0n1p9 Jan 29 10:57:13.856800 jq[1932]: true Jan 29 10:57:13.871877 dbus-daemon[1920]: [system] SELinux support is enabled Jan 29 10:57:13.873535 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 10:57:13.893561 dbus-daemon[1920]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1790 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 10:57:13.898870 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 10:57:13.898937 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 10:57:13.912862 update_engine[1930]: I20250129 10:57:13.901393 1930 update_check_scheduler.cc:74] Next update check in 4m0s Jan 29 10:57:13.903868 dbus-daemon[1920]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 10:57:13.902285 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 10:57:13.902322 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 10:57:13.904839 systemd[1]: Started update-engine.service - Update Engine. Jan 29 10:57:13.912285 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 10:57:13.916111 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 10:57:13.934423 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 10:57:13.944396 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 29 10:57:13.948292 ntpd[1924]: 29 Jan 10:57:13 ntpd[1924]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:04:55 UTC 2025 (1): Starting Jan 29 10:57:13.948292 ntpd[1924]: 29 Jan 10:57:13 ntpd[1924]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 10:57:13.948292 ntpd[1924]: 29 Jan 10:57:13 ntpd[1924]: ---------------------------------------------------- Jan 29 10:57:13.948292 ntpd[1924]: 29 Jan 10:57:13 ntpd[1924]: ntp-4 is maintained by Network Time Foundation, Jan 29 10:57:13.948292 ntpd[1924]: 29 Jan 10:57:13 ntpd[1924]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 10:57:13.948292 ntpd[1924]: 29 Jan 10:57:13 ntpd[1924]: corporation. Support and training for ntp-4 are Jan 29 10:57:13.948292 ntpd[1924]: 29 Jan 10:57:13 ntpd[1924]: available at https://www.nwtime.org/support Jan 29 10:57:13.948292 ntpd[1924]: 29 Jan 10:57:13 ntpd[1924]: ---------------------------------------------------- Jan 29 10:57:13.943038 ntpd[1924]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:04:55 UTC 2025 (1): Starting Jan 29 10:57:13.943086 ntpd[1924]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 10:57:13.943129 ntpd[1924]: ---------------------------------------------------- Jan 29 10:57:13.943149 ntpd[1924]: ntp-4 is maintained by Network Time Foundation, Jan 29 10:57:13.943168 ntpd[1924]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 10:57:13.943187 ntpd[1924]: corporation. Support and training for ntp-4 are Jan 29 10:57:13.943205 ntpd[1924]: available at https://www.nwtime.org/support Jan 29 10:57:13.943224 ntpd[1924]: ---------------------------------------------------- Jan 29 10:57:13.975520 extend-filesystems[1922]: Resized partition /dev/nvme0n1p9 Jan 29 10:57:13.967793 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 10:57:13.978752 ntpd[1924]: 29 Jan 10:57:13 ntpd[1924]: proto: precision = 0.096 usec (-23) Jan 29 10:57:13.978216 ntpd[1924]: proto: precision = 0.096 usec (-23) Jan 29 10:57:13.982520 ntpd[1924]: basedate set to 2025-01-17 Jan 29 10:57:13.985974 extend-filesystems[1967]: resize2fs 1.47.1 (20-May-2024) Jan 29 10:57:13.993981 ntpd[1924]: 29 Jan 10:57:13 ntpd[1924]: basedate set to 2025-01-17 Jan 29 10:57:13.993981 ntpd[1924]: 29 Jan 10:57:13 ntpd[1924]: gps base set to 2025-01-19 (week 2350) Jan 29 10:57:13.982563 ntpd[1924]: gps base set to 2025-01-19 (week 2350) Jan 29 10:57:13.996510 ntpd[1924]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 10:57:14.007685 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 29 10:57:14.001257 ntpd[1924]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 10:57:14.007869 ntpd[1924]: 29 Jan 10:57:14 ntpd[1924]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 10:57:14.007869 ntpd[1924]: 29 Jan 10:57:14 ntpd[1924]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 10:57:14.007869 ntpd[1924]: 29 Jan 10:57:14 ntpd[1924]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 10:57:14.007869 ntpd[1924]: 29 Jan 10:57:14 ntpd[1924]: Listen normally on 3 eth0 172.31.18.182:123 Jan 29 10:57:14.007869 ntpd[1924]: 29 Jan 10:57:14 ntpd[1924]: Listen normally on 4 lo [::1]:123 Jan 29 10:57:14.007869 ntpd[1924]: 29 Jan 10:57:14 ntpd[1924]: bind(21) AF_INET6 fe80::442:48ff:fef3:b7c3%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 10:57:14.007869 ntpd[1924]: 29 Jan 10:57:14 ntpd[1924]: unable to create socket on eth0 (5) for fe80::442:48ff:fef3:b7c3%2#123 Jan 29 10:57:14.007869 ntpd[1924]: 29 Jan 10:57:14 ntpd[1924]: failed to init interface for address fe80::442:48ff:fef3:b7c3%2 Jan 29 10:57:14.007869 
ntpd[1924]: 29 Jan 10:57:14 ntpd[1924]: Listening on routing socket on fd #21 for interface updates Jan 29 10:57:14.003381 ntpd[1924]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 10:57:14.003455 ntpd[1924]: Listen normally on 3 eth0 172.31.18.182:123 Jan 29 10:57:14.003522 ntpd[1924]: Listen normally on 4 lo [::1]:123 Jan 29 10:57:14.003605 ntpd[1924]: bind(21) AF_INET6 fe80::442:48ff:fef3:b7c3%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 10:57:14.003658 ntpd[1924]: unable to create socket on eth0 (5) for fe80::442:48ff:fef3:b7c3%2#123 Jan 29 10:57:14.003688 ntpd[1924]: failed to init interface for address fe80::442:48ff:fef3:b7c3%2 Jan 29 10:57:14.003745 ntpd[1924]: Listening on routing socket on fd #21 for interface updates Jan 29 10:57:14.033397 jq[1957]: true Jan 29 10:57:14.043672 ntpd[1924]: 29 Jan 10:57:14 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:57:14.043672 ntpd[1924]: 29 Jan 10:57:14 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:57:14.039494 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:57:14.039545 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:57:14.104155 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 29 10:57:14.137149 extend-filesystems[1967]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 29 10:57:14.137149 extend-filesystems[1967]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 10:57:14.137149 extend-filesystems[1967]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 29 10:57:14.155083 extend-filesystems[1922]: Resized filesystem in /dev/nvme0n1p9 Jan 29 10:57:14.143520 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 10:57:14.143873 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 10:57:14.210293 dbus-daemon[1920]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 10:57:14.210859 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 10:57:14.218477 dbus-daemon[1920]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=1963 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 10:57:14.235693 coreos-metadata[1919]: Jan 29 10:57:14.234 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 10:57:14.259889 coreos-metadata[1919]: Jan 29 10:57:14.251 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 29 10:57:14.259889 coreos-metadata[1919]: Jan 29 10:57:14.256 INFO Fetch successful Jan 29 10:57:14.259889 coreos-metadata[1919]: Jan 29 10:57:14.256 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 29 10:57:14.257588 systemd[1]: Starting polkit.service - Authorization Manager... 
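The extend-filesystems/resize2fs entries above report an online grow of /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB. A small worked calculation makes the sizes concrete:

```python
# Sizes implied by the block counts in the resize2fs output above.
BLOCK = 4096          # 4 KiB blocks, as reported by EXT4-fs
GIB = 1024 ** 3

before = 553472 * BLOCK    # blocks reported before the resize
after = 1489915 * BLOCK    # blocks reported after the resize

print(f"{before / GIB:.2f} GiB -> {after / GIB:.2f} GiB")  # 2.11 GiB -> 5.68 GiB
```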
Jan 29 10:57:14.263255 coreos-metadata[1919]: Jan 29 10:57:14.261 INFO Fetch successful Jan 29 10:57:14.263255 coreos-metadata[1919]: Jan 29 10:57:14.261 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 29 10:57:14.265424 coreos-metadata[1919]: Jan 29 10:57:14.264 INFO Fetch successful Jan 29 10:57:14.265424 coreos-metadata[1919]: Jan 29 10:57:14.265 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 29 10:57:14.268441 coreos-metadata[1919]: Jan 29 10:57:14.268 INFO Fetch successful Jan 29 10:57:14.268441 coreos-metadata[1919]: Jan 29 10:57:14.268 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 29 10:57:14.279421 coreos-metadata[1919]: Jan 29 10:57:14.275 INFO Fetch failed with 404: resource not found Jan 29 10:57:14.279421 coreos-metadata[1919]: Jan 29 10:57:14.275 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 29 10:57:14.279421 coreos-metadata[1919]: Jan 29 10:57:14.279 INFO Fetch successful Jan 29 10:57:14.279421 coreos-metadata[1919]: Jan 29 10:57:14.279 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 29 10:57:14.281228 coreos-metadata[1919]: Jan 29 10:57:14.280 INFO Fetch successful Jan 29 10:57:14.281228 coreos-metadata[1919]: Jan 29 10:57:14.281 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 10:57:14.282144 coreos-metadata[1919]: Jan 29 10:57:14.281 INFO Fetch successful Jan 29 10:57:14.282144 coreos-metadata[1919]: Jan 29 10:57:14.281 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 10:57:14.287303 coreos-metadata[1919]: Jan 29 10:57:14.286 INFO Fetch successful Jan 29 10:57:14.287303 coreos-metadata[1919]: Jan 29 10:57:14.286 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 10:57:14.296202 coreos-metadata[1919]: Jan 29 10:57:14.293 INFO Fetch successful Jan 29 10:57:14.316027 systemd-logind[1928]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 10:57:14.317837 systemd-logind[1928]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 29 10:57:14.321328 systemd-logind[1928]: New seat seat0. Jan 29 10:57:14.322986 bash[1999]: Updated "/home/core/.ssh/authorized_keys" Jan 29 10:57:14.329582 polkitd[1995]: Started polkitd version 121 Jan 29 10:57:14.328275 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 10:57:14.334163 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 10:57:14.342426 systemd[1]: Starting sshkeys.service... Jan 29 10:57:14.375595 containerd[1937]: time="2025-01-29T10:57:14.374984300Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 10:57:14.397378 polkitd[1995]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 10:57:14.397488 polkitd[1995]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 10:57:14.403157 polkitd[1995]: Finished loading, compiling and executing 2 rules Jan 29 10:57:14.416976 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 10:57:14.417308 dbus-daemon[1920]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 10:57:14.422685 systemd[1]: Started polkit.service - Authorization Manager. 
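Both Ignition earlier in this log and coreos-metadata here follow the IMDSv2 pattern: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then GETs against the versioned metadata paths with that token attached. The sketch below shows that flow with the standard library; the endpoint and paths are taken from the log, the header names are the documented IMDSv2 ones, and it only works when run on an EC2 instance.

```python
# Minimal sketch of the IMDSv2 flow visible in the Ignition and
# coreos-metadata entries above: PUT for a token, then GET with it.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=21600):
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/{path}",
        headers={"X-aws-ec2-metadata-token": token})
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

token = imds_token()
print(imds_get("meta-data/instance-id", token))
# Paths that do not exist for this instance return 404, as seen above for
# meta-data/ipv6 ("Fetch failed with 404: resource not found").
```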
Jan 29 10:57:14.426754 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 10:57:14.428662 polkitd[1995]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 10:57:14.435132 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1798) Jan 29 10:57:14.448081 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 10:57:14.477843 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 10:57:14.493553 systemd-hostnamed[1963]: Hostname set to (transient) Jan 29 10:57:14.493725 systemd-resolved[1739]: System hostname changed to 'ip-172-31-18-182'. Jan 29 10:57:14.515956 locksmithd[1964]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 10:57:14.611133 containerd[1937]: time="2025-01-29T10:57:14.608783038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:57:14.618551 containerd[1937]: time="2025-01-29T10:57:14.618486310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:57:14.619233 containerd[1937]: time="2025-01-29T10:57:14.619193638Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 10:57:14.619368 containerd[1937]: time="2025-01-29T10:57:14.619339846Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 10:57:14.619755 containerd[1937]: time="2025-01-29T10:57:14.619723510Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 10:57:14.620602 containerd[1937]: time="2025-01-29T10:57:14.620555662Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 10:57:14.620917 containerd[1937]: time="2025-01-29T10:57:14.620872738Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:57:14.624123 containerd[1937]: time="2025-01-29T10:57:14.623171494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:57:14.624123 containerd[1937]: time="2025-01-29T10:57:14.623579830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:57:14.624123 containerd[1937]: time="2025-01-29T10:57:14.623615530Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 10:57:14.624123 containerd[1937]: time="2025-01-29T10:57:14.623647138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:57:14.624123 containerd[1937]: time="2025-01-29T10:57:14.623670886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 10:57:14.624123 containerd[1937]: time="2025-01-29T10:57:14.623834206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:57:14.624716 containerd[1937]: time="2025-01-29T10:57:14.624677890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:57:14.625024 containerd[1937]: time="2025-01-29T10:57:14.624991402Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:57:14.625142 containerd[1937]: time="2025-01-29T10:57:14.625114426Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 10:57:14.625429 containerd[1937]: time="2025-01-29T10:57:14.625397710Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 10:57:14.625876 containerd[1937]: time="2025-01-29T10:57:14.625841134Z" level=info msg="metadata content store policy set" policy=shared Jan 29 10:57:14.646682 containerd[1937]: time="2025-01-29T10:57:14.646627990Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 10:57:14.646985 containerd[1937]: time="2025-01-29T10:57:14.646869790Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 10:57:14.647170 containerd[1937]: time="2025-01-29T10:57:14.647137930Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 10:57:14.647310 containerd[1937]: time="2025-01-29T10:57:14.647281678Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 10:57:14.648221 containerd[1937]: time="2025-01-29T10:57:14.647421202Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 10:57:14.648221 containerd[1937]: time="2025-01-29T10:57:14.647785426Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 10:57:14.648340 containerd[1937]: time="2025-01-29T10:57:14.648275410Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 10:57:14.648564 containerd[1937]: time="2025-01-29T10:57:14.648518986Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 10:57:14.648734 containerd[1937]: time="2025-01-29T10:57:14.648569434Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 10:57:14.648734 containerd[1937]: time="2025-01-29T10:57:14.648606382Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 10:57:14.648734 containerd[1937]: time="2025-01-29T10:57:14.648638338Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 29 10:57:14.648734 containerd[1937]: time="2025-01-29T10:57:14.648668998Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 10:57:14.648734 containerd[1937]: time="2025-01-29T10:57:14.648699730Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 10:57:14.648924 containerd[1937]: time="2025-01-29T10:57:14.648732238Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 10:57:14.648924 containerd[1937]: time="2025-01-29T10:57:14.648765922Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 10:57:14.648924 containerd[1937]: time="2025-01-29T10:57:14.648796006Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 10:57:14.648924 containerd[1937]: time="2025-01-29T10:57:14.648823606Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 10:57:14.648924 containerd[1937]: time="2025-01-29T10:57:14.648850270Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 10:57:14.648924 containerd[1937]: time="2025-01-29T10:57:14.648889606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649195 containerd[1937]: time="2025-01-29T10:57:14.648929170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649195 containerd[1937]: time="2025-01-29T10:57:14.648958702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649195 containerd[1937]: time="2025-01-29T10:57:14.648987970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649195 containerd[1937]: time="2025-01-29T10:57:14.649016122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649195 containerd[1937]: time="2025-01-29T10:57:14.649048990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649195 containerd[1937]: time="2025-01-29T10:57:14.649076998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649195 containerd[1937]: time="2025-01-29T10:57:14.649127602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649195 containerd[1937]: time="2025-01-29T10:57:14.649161790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649800 containerd[1937]: time="2025-01-29T10:57:14.649196398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649800 containerd[1937]: time="2025-01-29T10:57:14.649225930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649800 containerd[1937]: time="2025-01-29T10:57:14.649255498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 29 10:57:14.649800 containerd[1937]: time="2025-01-29T10:57:14.649283398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649800 containerd[1937]: time="2025-01-29T10:57:14.649314082Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 10:57:14.649800 containerd[1937]: time="2025-01-29T10:57:14.649356802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649800 containerd[1937]: time="2025-01-29T10:57:14.649388194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.649800 containerd[1937]: time="2025-01-29T10:57:14.649415410Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 10:57:14.649800 containerd[1937]: time="2025-01-29T10:57:14.649555210Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 10:57:14.649800 containerd[1937]: time="2025-01-29T10:57:14.649599202Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 10:57:14.649800 containerd[1937]: time="2025-01-29T10:57:14.649624222Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 10:57:14.649800 containerd[1937]: time="2025-01-29T10:57:14.649652662Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 10:57:14.649800 containerd[1937]: time="2025-01-29T10:57:14.649677130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 10:57:14.651067 containerd[1937]: time="2025-01-29T10:57:14.649708654Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 10:57:14.651067 containerd[1937]: time="2025-01-29T10:57:14.649731598Z" level=info msg="NRI interface is disabled by configuration." Jan 29 10:57:14.651067 containerd[1937]: time="2025-01-29T10:57:14.649757146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 10:57:14.652785 containerd[1937]: time="2025-01-29T10:57:14.652635790Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 10:57:14.652785 containerd[1937]: time="2025-01-29T10:57:14.652770418Z" level=info msg="Connect containerd service" Jan 29 10:57:14.653326 containerd[1937]: time="2025-01-29T10:57:14.652845346Z" level=info msg="using legacy CRI server" Jan 29 10:57:14.653326 containerd[1937]: time="2025-01-29T10:57:14.652863610Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 10:57:14.653326 containerd[1937]: time="2025-01-29T10:57:14.653143126Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 10:57:14.666187 containerd[1937]: time="2025-01-29T10:57:14.662441278Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 10:57:14.666187 
containerd[1937]: time="2025-01-29T10:57:14.662651998Z" level=info msg="Start subscribing containerd event" Jan 29 10:57:14.666187 containerd[1937]: time="2025-01-29T10:57:14.662721754Z" level=info msg="Start recovering state" Jan 29 10:57:14.666187 containerd[1937]: time="2025-01-29T10:57:14.662847154Z" level=info msg="Start event monitor" Jan 29 10:57:14.666187 containerd[1937]: time="2025-01-29T10:57:14.662871370Z" level=info msg="Start snapshots syncer" Jan 29 10:57:14.666187 containerd[1937]: time="2025-01-29T10:57:14.662908978Z" level=info msg="Start cni network conf syncer for default" Jan 29 10:57:14.666187 containerd[1937]: time="2025-01-29T10:57:14.662931286Z" level=info msg="Start streaming server" Jan 29 10:57:14.666187 containerd[1937]: time="2025-01-29T10:57:14.663071362Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 10:57:14.669449 containerd[1937]: time="2025-01-29T10:57:14.666225322Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 10:57:14.669449 containerd[1937]: time="2025-01-29T10:57:14.666394930Z" level=info msg="containerd successfully booted in 0.299512s" Jan 29 10:57:14.666502 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 10:57:14.743364 coreos-metadata[2029]: Jan 29 10:57:14.743 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 10:57:14.743364 coreos-metadata[2029]: Jan 29 10:57:14.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 10:57:14.743364 coreos-metadata[2029]: Jan 29 10:57:14.743 INFO Fetch successful Jan 29 10:57:14.743364 coreos-metadata[2029]: Jan 29 10:57:14.743 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 10:57:14.747294 coreos-metadata[2029]: Jan 29 10:57:14.747 INFO Fetch successful Jan 29 10:57:14.752962 unknown[2029]: wrote ssh authorized keys file for user: core Jan 29 10:57:14.813836 update-ssh-keys[2109]: Updated "/home/core/.ssh/authorized_keys" Jan 29 10:57:14.820350 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 10:57:14.831196 systemd[1]: Finished sshkeys.service. Jan 29 10:57:14.919236 systemd-networkd[1790]: eth0: Gained IPv6LL Jan 29 10:57:14.925794 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 10:57:14.931671 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 10:57:14.943558 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 10:57:14.960801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:57:14.972584 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 10:57:15.053343 amazon-ssm-agent[2120]: Initializing new seelog logger Jan 29 10:57:15.055106 amazon-ssm-agent[2120]: New Seelog Logger Creation Complete Jan 29 10:57:15.055106 amazon-ssm-agent[2120]: 2025/01/29 10:57:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:57:15.055106 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:57:15.055106 amazon-ssm-agent[2120]: 2025/01/29 10:57:15 processing appconfig overrides Jan 29 10:57:15.055701 amazon-ssm-agent[2120]: 2025/01/29 10:57:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:57:15.055805 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 29 10:57:15.056055 amazon-ssm-agent[2120]: 2025/01/29 10:57:15 processing appconfig overrides Jan 29 10:57:15.056504 amazon-ssm-agent[2120]: 2025/01/29 10:57:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:57:15.056598 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:57:15.056782 amazon-ssm-agent[2120]: 2025/01/29 10:57:15 processing appconfig overrides Jan 29 10:57:15.057388 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO Proxy environment variables: Jan 29 10:57:15.060774 amazon-ssm-agent[2120]: 2025/01/29 10:57:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:57:15.060774 amazon-ssm-agent[2120]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:57:15.060922 amazon-ssm-agent[2120]: 2025/01/29 10:57:15 processing appconfig overrides Jan 29 10:57:15.068695 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 10:57:15.160135 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO https_proxy: Jan 29 10:57:15.259117 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO http_proxy: Jan 29 10:57:15.358333 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO no_proxy: Jan 29 10:57:15.456758 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO Checking if agent identity type OnPrem can be assumed Jan 29 10:57:15.557074 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO Checking if agent identity type EC2 can be assumed Jan 29 10:57:15.658112 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO Agent will take identity from EC2 Jan 29 10:57:15.693189 sshd_keygen[1965]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 10:57:15.758224 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 10:57:15.769159 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 10:57:15.779773 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 10:57:15.817765 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 10:57:15.818505 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 10:57:15.831526 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 10:57:15.856825 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 10:57:15.859219 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 10:57:15.872610 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 10:57:15.884797 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 10:57:15.888778 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 10:57:15.957201 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 10:57:16.056806 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 10:57:16.157185 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 29 10:57:16.250222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:57:16.253347 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 10:57:16.256435 systemd[1]: Startup finished in 1.096s (kernel) + 7.915s (initrd) + 7.588s (userspace) = 16.600s. 
Jan 29 10:57:16.257317 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 10:57:16.265721 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:57:16.337295 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 29 10:57:16.337442 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO [Registrar] Starting registrar module Jan 29 10:57:16.337442 amazon-ssm-agent[2120]: 2025-01-29 10:57:15 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 10:57:16.337442 amazon-ssm-agent[2120]: 2025-01-29 10:57:16 INFO [EC2Identity] EC2 registration was successful. Jan 29 10:57:16.337442 amazon-ssm-agent[2120]: 2025-01-29 10:57:16 INFO [CredentialRefresher] credentialRefresher has started Jan 29 10:57:16.337442 amazon-ssm-agent[2120]: 2025-01-29 10:57:16 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 10:57:16.337442 amazon-ssm-agent[2120]: 2025-01-29 10:57:16 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 10:57:16.356802 amazon-ssm-agent[2120]: 2025-01-29 10:57:16 INFO [CredentialRefresher] Next credential rotation will be in 31.408294306933332 minutes Jan 29 10:57:16.944194 ntpd[1924]: Listen normally on 6 eth0 [fe80::442:48ff:fef3:b7c3%2]:123 Jan 29 10:57:16.945905 ntpd[1924]: 29 Jan 10:57:16 ntpd[1924]: Listen normally on 6 eth0 [fe80::442:48ff:fef3:b7c3%2]:123 Jan 29 10:57:17.007591 kubelet[2160]: E0129 10:57:17.007496 2160 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:57:17.010704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:57:17.011035 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:57:17.011593 systemd[1]: kubelet.service: Consumed 1.302s CPU time. Jan 29 10:57:17.363332 amazon-ssm-agent[2120]: 2025-01-29 10:57:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 10:57:17.466131 amazon-ssm-agent[2120]: 2025-01-29 10:57:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2173) started Jan 29 10:57:17.565257 amazon-ssm-agent[2120]: 2025-01-29 10:57:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 10:57:21.259570 systemd-resolved[1739]: Clock change detected. Flushing caches. Jan 29 10:57:23.727959 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 10:57:23.739964 systemd[1]: Started sshd@0-172.31.18.182:22-139.178.89.65:57028.service - OpenSSH per-connection server daemon (139.178.89.65:57028). Jan 29 10:57:23.933524 sshd[2184]: Accepted publickey for core from 139.178.89.65 port 57028 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:57:23.936830 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:57:23.956849 systemd-logind[1928]: New session 1 of user core. Jan 29 10:57:23.958561 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 29 10:57:23.968942 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 10:57:23.991800 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 10:57:24.000033 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 10:57:24.021091 (systemd)[2188]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 10:57:24.231270 systemd[2188]: Queued start job for default target default.target. Jan 29 10:57:24.243069 systemd[2188]: Created slice app.slice - User Application Slice. Jan 29 10:57:24.243609 systemd[2188]: Reached target paths.target - Paths. Jan 29 10:57:24.243774 systemd[2188]: Reached target timers.target - Timers. Jan 29 10:57:24.246374 systemd[2188]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 10:57:24.266984 systemd[2188]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 10:57:24.267385 systemd[2188]: Reached target sockets.target - Sockets. Jan 29 10:57:24.267424 systemd[2188]: Reached target basic.target - Basic System. Jan 29 10:57:24.267537 systemd[2188]: Reached target default.target - Main User Target. Jan 29 10:57:24.267604 systemd[2188]: Startup finished in 234ms. Jan 29 10:57:24.268147 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 10:57:24.281093 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 10:57:24.439047 systemd[1]: Started sshd@1-172.31.18.182:22-139.178.89.65:57034.service - OpenSSH per-connection server daemon (139.178.89.65:57034). Jan 29 10:57:24.627506 sshd[2199]: Accepted publickey for core from 139.178.89.65 port 57034 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:57:24.629742 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:57:24.638869 systemd-logind[1928]: New session 2 of user core. Jan 29 10:57:24.645764 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 10:57:24.773360 sshd[2201]: Connection closed by 139.178.89.65 port 57034 Jan 29 10:57:24.774196 sshd-session[2199]: pam_unix(sshd:session): session closed for user core Jan 29 10:57:24.780262 systemd-logind[1928]: Session 2 logged out. Waiting for processes to exit. Jan 29 10:57:24.781844 systemd[1]: sshd@1-172.31.18.182:22-139.178.89.65:57034.service: Deactivated successfully. Jan 29 10:57:24.785295 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 10:57:24.786977 systemd-logind[1928]: Removed session 2. Jan 29 10:57:24.820213 systemd[1]: Started sshd@2-172.31.18.182:22-139.178.89.65:57038.service - OpenSSH per-connection server daemon (139.178.89.65:57038). Jan 29 10:57:24.999100 sshd[2206]: Accepted publickey for core from 139.178.89.65 port 57038 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:57:25.001679 sshd-session[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:57:25.008696 systemd-logind[1928]: New session 3 of user core. Jan 29 10:57:25.021740 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 10:57:25.140081 sshd[2208]: Connection closed by 139.178.89.65 port 57038 Jan 29 10:57:25.140916 sshd-session[2206]: pam_unix(sshd:session): session closed for user core Jan 29 10:57:25.146899 systemd[1]: sshd@2-172.31.18.182:22-139.178.89.65:57038.service: Deactivated successfully. Jan 29 10:57:25.150269 systemd[1]: session-3.scope: Deactivated successfully. 
Jan 29 10:57:25.151616 systemd-logind[1928]: Session 3 logged out. Waiting for processes to exit. Jan 29 10:57:25.153614 systemd-logind[1928]: Removed session 3. Jan 29 10:57:25.171768 systemd[1]: Started sshd@3-172.31.18.182:22-139.178.89.65:57042.service - OpenSSH per-connection server daemon (139.178.89.65:57042). Jan 29 10:57:25.366850 sshd[2213]: Accepted publickey for core from 139.178.89.65 port 57042 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:57:25.369607 sshd-session[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:57:25.376681 systemd-logind[1928]: New session 4 of user core. Jan 29 10:57:25.389775 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 10:57:25.518083 sshd[2215]: Connection closed by 139.178.89.65 port 57042 Jan 29 10:57:25.518827 sshd-session[2213]: pam_unix(sshd:session): session closed for user core Jan 29 10:57:25.524577 systemd[1]: sshd@3-172.31.18.182:22-139.178.89.65:57042.service: Deactivated successfully. Jan 29 10:57:25.528150 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 10:57:25.529549 systemd-logind[1928]: Session 4 logged out. Waiting for processes to exit. Jan 29 10:57:25.531307 systemd-logind[1928]: Removed session 4. Jan 29 10:57:25.556060 systemd[1]: Started sshd@4-172.31.18.182:22-139.178.89.65:57050.service - OpenSSH per-connection server daemon (139.178.89.65:57050). Jan 29 10:57:25.745666 sshd[2220]: Accepted publickey for core from 139.178.89.65 port 57050 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:57:25.748057 sshd-session[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:57:25.755278 systemd-logind[1928]: New session 5 of user core. Jan 29 10:57:25.763738 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 10:57:25.876626 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 10:57:25.877236 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:57:25.892610 sudo[2223]: pam_unix(sudo:session): session closed for user root Jan 29 10:57:25.916011 sshd[2222]: Connection closed by 139.178.89.65 port 57050 Jan 29 10:57:25.915830 sshd-session[2220]: pam_unix(sshd:session): session closed for user core Jan 29 10:57:25.920974 systemd[1]: sshd@4-172.31.18.182:22-139.178.89.65:57050.service: Deactivated successfully. Jan 29 10:57:25.923987 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 10:57:25.927769 systemd-logind[1928]: Session 5 logged out. Waiting for processes to exit. Jan 29 10:57:25.930044 systemd-logind[1928]: Removed session 5. Jan 29 10:57:25.952899 systemd[1]: Started sshd@5-172.31.18.182:22-139.178.89.65:57062.service - OpenSSH per-connection server daemon (139.178.89.65:57062). Jan 29 10:57:26.142832 sshd[2228]: Accepted publickey for core from 139.178.89.65 port 57062 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:57:26.145300 sshd-session[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:57:26.152684 systemd-logind[1928]: New session 6 of user core. Jan 29 10:57:26.165726 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 29 10:57:26.268332 sudo[2232]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 10:57:26.268982 sudo[2232]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:57:26.275444 sudo[2232]: pam_unix(sudo:session): session closed for user root Jan 29 10:57:26.285164 sudo[2231]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 10:57:26.285875 sudo[2231]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:57:26.309107 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 10:57:26.356821 augenrules[2254]: No rules Jan 29 10:57:26.359221 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 10:57:26.359703 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 10:57:26.362039 sudo[2231]: pam_unix(sudo:session): session closed for user root Jan 29 10:57:26.384532 sshd[2230]: Connection closed by 139.178.89.65 port 57062 Jan 29 10:57:26.385515 sshd-session[2228]: pam_unix(sshd:session): session closed for user core Jan 29 10:57:26.391439 systemd-logind[1928]: Session 6 logged out. Waiting for processes to exit. Jan 29 10:57:26.393625 systemd[1]: sshd@5-172.31.18.182:22-139.178.89.65:57062.service: Deactivated successfully. Jan 29 10:57:26.397415 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 10:57:26.400623 systemd-logind[1928]: Removed session 6. Jan 29 10:57:26.429198 systemd[1]: Started sshd@6-172.31.18.182:22-139.178.89.65:57074.service - OpenSSH per-connection server daemon (139.178.89.65:57074). Jan 29 10:57:26.607520 sshd[2262]: Accepted publickey for core from 139.178.89.65 port 57074 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:57:26.609921 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:57:26.619566 systemd-logind[1928]: New session 7 of user core. Jan 29 10:57:26.624747 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 10:57:26.726889 sudo[2265]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 10:57:26.728424 sudo[2265]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:57:27.446947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 10:57:27.458979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:57:27.649038 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 10:57:27.649263 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 10:57:27.649954 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:57:27.662322 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:57:27.703747 systemd[1]: Reloading requested from client PID 2307 ('systemctl') (unit session-7.scope)... Jan 29 10:57:27.703782 systemd[1]: Reloading... Jan 29 10:57:27.927526 zram_generator::config[2350]: No configuration found. Jan 29 10:57:28.169538 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:57:28.331796 systemd[1]: Reloading finished in 627 ms. 
Jan 29 10:57:28.417616 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 10:57:28.417803 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 10:57:28.418348 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:57:28.428102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:57:28.738214 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:57:28.756960 (kubelet)[2409]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 10:57:28.837213 kubelet[2409]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:57:28.837213 kubelet[2409]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 10:57:28.837213 kubelet[2409]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:57:28.841544 kubelet[2409]: I0129 10:57:28.839436 2409 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 10:57:29.936570 kubelet[2409]: I0129 10:57:29.936348 2409 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 10:57:29.936570 kubelet[2409]: I0129 10:57:29.936389 2409 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 10:57:29.937185 kubelet[2409]: I0129 10:57:29.936843 2409 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 10:57:29.960708 kubelet[2409]: I0129 10:57:29.960463 2409 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 10:57:29.974949 kubelet[2409]: I0129 10:57:29.974232 2409 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 10:57:29.976342 kubelet[2409]: I0129 10:57:29.976282 2409 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 10:57:29.976770 kubelet[2409]: I0129 10:57:29.976437 2409 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.18.182","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 10:57:29.977013 kubelet[2409]: I0129 10:57:29.976991 2409 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 10:57:29.977137 kubelet[2409]: I0129 10:57:29.977117 2409 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 10:57:29.977473 kubelet[2409]: I0129 10:57:29.977452 2409 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:57:29.978817 kubelet[2409]: I0129 10:57:29.978790 2409 kubelet.go:400] "Attempting to sync node with API server" Jan 29 10:57:29.978947 kubelet[2409]: I0129 10:57:29.978927 2409 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 10:57:29.979275 kubelet[2409]: I0129 10:57:29.979250 2409 kubelet.go:312] "Adding apiserver pod source" Jan 29 10:57:29.980515 kubelet[2409]: E0129 10:57:29.979400 2409 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:29.980515 kubelet[2409]: I0129 10:57:29.979535 2409 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 10:57:29.980515 kubelet[2409]: E0129 10:57:29.979545 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:29.981029 kubelet[2409]: I0129 10:57:29.980994 2409 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 10:57:29.981561 kubelet[2409]: I0129 10:57:29.981537 2409 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 10:57:29.981819 kubelet[2409]: W0129 10:57:29.981799 2409 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 10:57:29.983122 kubelet[2409]: I0129 10:57:29.983089 2409 server.go:1264] "Started kubelet" Jan 29 10:57:29.987327 kubelet[2409]: I0129 10:57:29.987263 2409 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 10:57:29.987577 kubelet[2409]: I0129 10:57:29.987520 2409 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 10:57:29.989541 kubelet[2409]: I0129 10:57:29.989470 2409 server.go:455] "Adding debug handlers to kubelet server" Jan 29 10:57:29.991129 kubelet[2409]: I0129 10:57:29.991001 2409 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 10:57:29.991408 kubelet[2409]: I0129 10:57:29.991369 2409 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 10:57:29.999863 kubelet[2409]: I0129 10:57:29.999804 2409 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 10:57:30.000957 kubelet[2409]: I0129 10:57:30.000820 2409 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 10:57:30.004598 kubelet[2409]: I0129 10:57:30.003560 2409 reconciler.go:26] "Reconciler: start to sync state" Jan 29 10:57:30.006161 kubelet[2409]: E0129 10:57:30.005789 2409 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.18.182.181f249fcf33cd63 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.18.182,UID:172.31.18.182,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.18.182,},FirstTimestamp:2025-01-29 10:57:29.982979427 +0000 UTC m=+1.220030959,LastTimestamp:2025-01-29 10:57:29.982979427 +0000 UTC m=+1.220030959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.18.182,}" Jan 29 10:57:30.006161 kubelet[2409]: W0129 10:57:30.006090 2409 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 10:57:30.006161 kubelet[2409]: E0129 10:57:30.006132 2409 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 10:57:30.008828 kubelet[2409]: W0129 10:57:30.006313 2409 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.18.182" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 10:57:30.008828 kubelet[2409]: E0129 10:57:30.006353 2409 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.18.182" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 10:57:30.015527 kubelet[2409]: E0129 10:57:30.011131 2409 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 10:57:30.015527 kubelet[2409]: I0129 10:57:30.011751 2409 factory.go:221] Registration of the systemd container factory successfully Jan 29 10:57:30.015527 kubelet[2409]: I0129 10:57:30.011891 2409 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 10:57:30.019367 kubelet[2409]: I0129 10:57:30.019320 2409 factory.go:221] Registration of the containerd container factory successfully Jan 29 10:57:30.038987 kubelet[2409]: E0129 10:57:30.038840 2409 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.18.182.181f249fd0e0ffc7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.18.182,UID:172.31.18.182,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.18.182,},FirstTimestamp:2025-01-29 10:57:30.011107271 +0000 UTC m=+1.248158815,LastTimestamp:2025-01-29 10:57:30.011107271 +0000 UTC m=+1.248158815,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.18.182,}" Jan 29 10:57:30.039305 kubelet[2409]: W0129 10:57:30.039274 2409 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 29 10:57:30.039428 kubelet[2409]: E0129 10:57:30.039406 2409 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 29 10:57:30.039983 kubelet[2409]: E0129 10:57:30.039936 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.18.182\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 29 10:57:30.060401 kubelet[2409]: I0129 10:57:30.060191 2409 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 10:57:30.060401 kubelet[2409]: I0129 10:57:30.060222 2409 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 10:57:30.060401 kubelet[2409]: I0129 10:57:30.060252 2409 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:57:30.061163 kubelet[2409]: E0129 10:57:30.060907 2409 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.18.182.181f249fd3ab9f9f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.18.182,UID:172.31.18.182,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.18.182 status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.18.182,},FirstTimestamp:2025-01-29 10:57:30.057940895 +0000 UTC m=+1.294992415,LastTimestamp:2025-01-29 10:57:30.057940895 +0000 UTC m=+1.294992415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.18.182,}" Jan 29 10:57:30.067586 kubelet[2409]: I0129 10:57:30.067538 2409 policy_none.go:49] "None policy: Start" Jan 29 10:57:30.069315 kubelet[2409]: I0129 10:57:30.069174 2409 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 10:57:30.069315 kubelet[2409]: I0129 10:57:30.069259 2409 state_mem.go:35] "Initializing new in-memory state store" Jan 29 10:57:30.090723 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 10:57:30.099530 kubelet[2409]: I0129 10:57:30.099320 2409 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 10:57:30.102870 kubelet[2409]: I0129 10:57:30.102820 2409 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 10:57:30.102987 kubelet[2409]: I0129 10:57:30.102895 2409 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 10:57:30.102987 kubelet[2409]: I0129 10:57:30.102925 2409 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 10:57:30.103113 kubelet[2409]: E0129 10:57:30.102997 2409 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 10:57:30.103678 kubelet[2409]: I0129 10:57:30.103242 2409 kubelet_node_status.go:73] "Attempting to register node" node="172.31.18.182" Jan 29 10:57:30.116920 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 10:57:30.118491 kubelet[2409]: I0129 10:57:30.118426 2409 kubelet_node_status.go:76] "Successfully registered node" node="172.31.18.182" Jan 29 10:57:30.125340 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 10:57:30.135680 kubelet[2409]: I0129 10:57:30.135421 2409 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 10:57:30.135843 kubelet[2409]: I0129 10:57:30.135773 2409 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 10:57:30.136526 kubelet[2409]: I0129 10:57:30.135955 2409 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 10:57:30.141463 kubelet[2409]: E0129 10:57:30.141424 2409 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.18.182\" not found" Jan 29 10:57:30.226968 kubelet[2409]: E0129 10:57:30.226811 2409 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.182\" not found" Jan 29 10:57:30.327589 kubelet[2409]: E0129 10:57:30.327525 2409 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.182\" not found" Jan 29 10:57:30.342139 sudo[2265]: pam_unix(sudo:session): session closed for user root Jan 29 10:57:30.365657 sshd[2264]: Connection closed by 139.178.89.65 port 57074 Jan 29 10:57:30.365528 sshd-session[2262]: pam_unix(sshd:session): session closed for user core Jan 29 10:57:30.370905 systemd-logind[1928]: Session 7 logged out. Waiting for processes to exit. 
Jan 29 10:57:30.372221 systemd[1]: sshd@6-172.31.18.182:22-139.178.89.65:57074.service: Deactivated successfully. Jan 29 10:57:30.376465 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 10:57:30.380433 systemd-logind[1928]: Removed session 7. Jan 29 10:57:30.428237 kubelet[2409]: E0129 10:57:30.428160 2409 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.182\" not found" Jan 29 10:57:30.529333 kubelet[2409]: E0129 10:57:30.529280 2409 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.182\" not found" Jan 29 10:57:30.629940 kubelet[2409]: E0129 10:57:30.629886 2409 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.182\" not found" Jan 29 10:57:30.730592 kubelet[2409]: E0129 10:57:30.730542 2409 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.182\" not found" Jan 29 10:57:30.831335 kubelet[2409]: E0129 10:57:30.831196 2409 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.18.182\" not found" Jan 29 10:57:30.932413 kubelet[2409]: I0129 10:57:30.932344 2409 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 10:57:30.933135 containerd[1937]: time="2025-01-29T10:57:30.932907604Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 10:57:30.933697 kubelet[2409]: I0129 10:57:30.933318 2409 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 10:57:30.939933 kubelet[2409]: I0129 10:57:30.939635 2409 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 10:57:30.939933 kubelet[2409]: W0129 10:57:30.939816 2409 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 10:57:30.939933 kubelet[2409]: W0129 10:57:30.939866 2409 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 10:57:30.980623 kubelet[2409]: I0129 10:57:30.980314 2409 apiserver.go:52] "Watching apiserver" Jan 29 10:57:30.980623 kubelet[2409]: E0129 10:57:30.980575 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:31.009631 kubelet[2409]: I0129 10:57:31.009578 2409 topology_manager.go:215] "Topology Admit Handler" podUID="bb140f78-f754-4fd1-a6db-7a145c56d3a2" podNamespace="calico-system" podName="calico-node-5qkw9" Jan 29 10:57:31.010119 kubelet[2409]: I0129 10:57:31.009945 2409 topology_manager.go:215] "Topology Admit Handler" podUID="d93af062-ea94-445a-8eee-553c377a0330" podNamespace="calico-system" podName="csi-node-driver-2rxwp" Jan 29 10:57:31.010296 kubelet[2409]: I0129 10:57:31.010214 2409 topology_manager.go:215] "Topology Admit Handler" podUID="61a15f34-dfab-42c2-9d9f-33397d96a415" podNamespace="kube-system" podName="kube-proxy-wj76b" Jan 29 10:57:31.011513 kubelet[2409]: E0129 10:57:31.011036 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rxwp" podUID="d93af062-ea94-445a-8eee-553c377a0330" Jan 29 10:57:31.023824 systemd[1]: Created slice kubepods-besteffort-podbb140f78_f754_4fd1_a6db_7a145c56d3a2.slice - libcontainer container kubepods-besteffort-podbb140f78_f754_4fd1_a6db_7a145c56d3a2.slice. Jan 29 10:57:31.045918 systemd[1]: Created slice kubepods-besteffort-pod61a15f34_dfab_42c2_9d9f_33397d96a415.slice - libcontainer container kubepods-besteffort-pod61a15f34_dfab_42c2_9d9f_33397d96a415.slice. Jan 29 10:57:31.102364 kubelet[2409]: I0129 10:57:31.102228 2409 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 10:57:31.111829 kubelet[2409]: I0129 10:57:31.111367 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/61a15f34-dfab-42c2-9d9f-33397d96a415-kube-proxy\") pod \"kube-proxy-wj76b\" (UID: \"61a15f34-dfab-42c2-9d9f-33397d96a415\") " pod="kube-system/kube-proxy-wj76b" Jan 29 10:57:31.111829 kubelet[2409]: I0129 10:57:31.111429 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bb140f78-f754-4fd1-a6db-7a145c56d3a2-var-run-calico\") pod \"calico-node-5qkw9\" (UID: \"bb140f78-f754-4fd1-a6db-7a145c56d3a2\") " pod="calico-system/calico-node-5qkw9" Jan 29 10:57:31.111829 kubelet[2409]: I0129 10:57:31.111468 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bb140f78-f754-4fd1-a6db-7a145c56d3a2-cni-log-dir\") pod \"calico-node-5qkw9\" (UID: \"bb140f78-f754-4fd1-a6db-7a145c56d3a2\") " pod="calico-system/calico-node-5qkw9" Jan 29 10:57:31.111829 kubelet[2409]: I0129 10:57:31.111533 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d93af062-ea94-445a-8eee-553c377a0330-socket-dir\") pod \"csi-node-driver-2rxwp\" (UID: \"d93af062-ea94-445a-8eee-553c377a0330\") " pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:31.111829 kubelet[2409]: I0129 10:57:31.111581 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61a15f34-dfab-42c2-9d9f-33397d96a415-xtables-lock\") pod \"kube-proxy-wj76b\" (UID: \"61a15f34-dfab-42c2-9d9f-33397d96a415\") " pod="kube-system/kube-proxy-wj76b" Jan 29 10:57:31.112174 kubelet[2409]: I0129 10:57:31.111636 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb140f78-f754-4fd1-a6db-7a145c56d3a2-lib-modules\") pod \"calico-node-5qkw9\" (UID: \"bb140f78-f754-4fd1-a6db-7a145c56d3a2\") " pod="calico-system/calico-node-5qkw9" Jan 29 10:57:31.112174 kubelet[2409]: I0129 10:57:31.111674 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bb140f78-f754-4fd1-a6db-7a145c56d3a2-policysync\") pod \"calico-node-5qkw9\" (UID: \"bb140f78-f754-4fd1-a6db-7a145c56d3a2\") " pod="calico-system/calico-node-5qkw9" Jan 29 10:57:31.112174 kubelet[2409]: I0129 10:57:31.111713 2409 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bb140f78-f754-4fd1-a6db-7a145c56d3a2-cni-bin-dir\") pod \"calico-node-5qkw9\" (UID: \"bb140f78-f754-4fd1-a6db-7a145c56d3a2\") " pod="calico-system/calico-node-5qkw9" Jan 29 10:57:31.112174 kubelet[2409]: I0129 10:57:31.111751 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bb140f78-f754-4fd1-a6db-7a145c56d3a2-cni-net-dir\") pod \"calico-node-5qkw9\" (UID: \"bb140f78-f754-4fd1-a6db-7a145c56d3a2\") " pod="calico-system/calico-node-5qkw9" Jan 29 10:57:31.112174 kubelet[2409]: I0129 10:57:31.111792 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d93af062-ea94-445a-8eee-553c377a0330-registration-dir\") pod \"csi-node-driver-2rxwp\" (UID: \"d93af062-ea94-445a-8eee-553c377a0330\") " pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:31.112395 kubelet[2409]: I0129 10:57:31.111856 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d93af062-ea94-445a-8eee-553c377a0330-kubelet-dir\") pod \"csi-node-driver-2rxwp\" (UID: \"d93af062-ea94-445a-8eee-553c377a0330\") " pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:31.112395 kubelet[2409]: I0129 10:57:31.111921 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61a15f34-dfab-42c2-9d9f-33397d96a415-lib-modules\") pod \"kube-proxy-wj76b\" (UID: \"61a15f34-dfab-42c2-9d9f-33397d96a415\") " pod="kube-system/kube-proxy-wj76b" Jan 29 10:57:31.112395 kubelet[2409]: I0129 10:57:31.111970 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb140f78-f754-4fd1-a6db-7a145c56d3a2-xtables-lock\") pod \"calico-node-5qkw9\" (UID: \"bb140f78-f754-4fd1-a6db-7a145c56d3a2\") " pod="calico-system/calico-node-5qkw9" Jan 29 10:57:31.112395 kubelet[2409]: I0129 10:57:31.112016 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bb140f78-f754-4fd1-a6db-7a145c56d3a2-var-lib-calico\") pod \"calico-node-5qkw9\" (UID: \"bb140f78-f754-4fd1-a6db-7a145c56d3a2\") " pod="calico-system/calico-node-5qkw9" Jan 29 10:57:31.112395 kubelet[2409]: I0129 10:57:31.112055 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bb140f78-f754-4fd1-a6db-7a145c56d3a2-flexvol-driver-host\") pod \"calico-node-5qkw9\" (UID: \"bb140f78-f754-4fd1-a6db-7a145c56d3a2\") " pod="calico-system/calico-node-5qkw9" Jan 29 10:57:31.112651 kubelet[2409]: I0129 10:57:31.112107 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56ctt\" (UniqueName: \"kubernetes.io/projected/bb140f78-f754-4fd1-a6db-7a145c56d3a2-kube-api-access-56ctt\") pod \"calico-node-5qkw9\" (UID: \"bb140f78-f754-4fd1-a6db-7a145c56d3a2\") " pod="calico-system/calico-node-5qkw9" Jan 29 10:57:31.112651 kubelet[2409]: I0129 10:57:31.112144 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d93af062-ea94-445a-8eee-553c377a0330-varrun\") pod \"csi-node-driver-2rxwp\" (UID: \"d93af062-ea94-445a-8eee-553c377a0330\") " pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:31.112651 kubelet[2409]: I0129 10:57:31.112180 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp9kx\" (UniqueName: \"kubernetes.io/projected/61a15f34-dfab-42c2-9d9f-33397d96a415-kube-api-access-qp9kx\") pod \"kube-proxy-wj76b\" (UID: \"61a15f34-dfab-42c2-9d9f-33397d96a415\") " pod="kube-system/kube-proxy-wj76b" Jan 29 10:57:31.112651 kubelet[2409]: I0129 10:57:31.112228 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb140f78-f754-4fd1-a6db-7a145c56d3a2-tigera-ca-bundle\") pod \"calico-node-5qkw9\" (UID: \"bb140f78-f754-4fd1-a6db-7a145c56d3a2\") " pod="calico-system/calico-node-5qkw9" Jan 29 10:57:31.112651 kubelet[2409]: I0129 10:57:31.112261 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bb140f78-f754-4fd1-a6db-7a145c56d3a2-node-certs\") pod \"calico-node-5qkw9\" (UID: \"bb140f78-f754-4fd1-a6db-7a145c56d3a2\") " pod="calico-system/calico-node-5qkw9" Jan 29 10:57:31.112867 kubelet[2409]: I0129 10:57:31.112299 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9js5\" (UniqueName: \"kubernetes.io/projected/d93af062-ea94-445a-8eee-553c377a0330-kube-api-access-h9js5\") pod \"csi-node-driver-2rxwp\" (UID: \"d93af062-ea94-445a-8eee-553c377a0330\") " pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:31.217432 kubelet[2409]: E0129 10:57:31.217109 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:31.217432 kubelet[2409]: W0129 10:57:31.217150 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:31.217432 kubelet[2409]: E0129 10:57:31.217214 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:31.224304 kubelet[2409]: E0129 10:57:31.224160 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:31.224304 kubelet[2409]: W0129 10:57:31.224194 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:31.224304 kubelet[2409]: E0129 10:57:31.224224 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:57:31.277240 kubelet[2409]: E0129 10:57:31.274461 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:31.277240 kubelet[2409]: W0129 10:57:31.274578 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:31.277240 kubelet[2409]: E0129 10:57:31.274628 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:31.285232 kubelet[2409]: E0129 10:57:31.285197 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:31.285462 kubelet[2409]: W0129 10:57:31.285435 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:31.285628 kubelet[2409]: E0129 10:57:31.285605 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:31.291562 kubelet[2409]: E0129 10:57:31.290736 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:31.291562 kubelet[2409]: W0129 10:57:31.290773 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:31.291562 kubelet[2409]: E0129 10:57:31.290806 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:57:31.346539 containerd[1937]: time="2025-01-29T10:57:31.346468310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5qkw9,Uid:bb140f78-f754-4fd1-a6db-7a145c56d3a2,Namespace:calico-system,Attempt:0,}" Jan 29 10:57:31.352231 containerd[1937]: time="2025-01-29T10:57:31.352179182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj76b,Uid:61a15f34-dfab-42c2-9d9f-33397d96a415,Namespace:kube-system,Attempt:0,}" Jan 29 10:57:31.911035 containerd[1937]: time="2025-01-29T10:57:31.910916296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:57:31.913954 containerd[1937]: time="2025-01-29T10:57:31.913887088Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:57:31.915529 containerd[1937]: time="2025-01-29T10:57:31.915424084Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 29 10:57:31.916657 containerd[1937]: time="2025-01-29T10:57:31.916598656Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 10:57:31.918122 containerd[1937]: time="2025-01-29T10:57:31.918046828Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:57:31.924980 containerd[1937]: time="2025-01-29T10:57:31.924923872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:57:31.926872 containerd[1937]: time="2025-01-29T10:57:31.926150128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.37727ms" Jan 29 10:57:31.932391 containerd[1937]: time="2025-01-29T10:57:31.932264057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.823047ms" Jan 29 10:57:31.982086 kubelet[2409]: E0129 10:57:31.981285 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:32.166064 containerd[1937]: time="2025-01-29T10:57:32.165194270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:57:32.166064 containerd[1937]: time="2025-01-29T10:57:32.165317114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:57:32.167109 containerd[1937]: time="2025-01-29T10:57:32.165354182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:57:32.167109 containerd[1937]: time="2025-01-29T10:57:32.165571466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:57:32.170098 containerd[1937]: time="2025-01-29T10:57:32.169471046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:57:32.170098 containerd[1937]: time="2025-01-29T10:57:32.169611770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:57:32.170098 containerd[1937]: time="2025-01-29T10:57:32.169648322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:57:32.170098 containerd[1937]: time="2025-01-29T10:57:32.169804334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:57:32.231856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount480888525.mount: Deactivated successfully. Jan 29 10:57:32.285783 systemd[1]: Started cri-containerd-e2c908ea473e78c954a7ffca7b84c6030e2070954050e9477ce81eae026255fc.scope - libcontainer container e2c908ea473e78c954a7ffca7b84c6030e2070954050e9477ce81eae026255fc. Jan 29 10:57:32.296687 systemd[1]: Started cri-containerd-bf3e0bca9c4b594a91576d1ca934705d5f2d0f5959fa7ad168979a6de27e9ea7.scope - libcontainer container bf3e0bca9c4b594a91576d1ca934705d5f2d0f5959fa7ad168979a6de27e9ea7. Jan 29 10:57:32.361538 containerd[1937]: time="2025-01-29T10:57:32.361403487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj76b,Uid:61a15f34-dfab-42c2-9d9f-33397d96a415,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2c908ea473e78c954a7ffca7b84c6030e2070954050e9477ce81eae026255fc\"" Jan 29 10:57:32.368922 containerd[1937]: time="2025-01-29T10:57:32.368854059Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 10:57:32.371887 containerd[1937]: time="2025-01-29T10:57:32.371816211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5qkw9,Uid:bb140f78-f754-4fd1-a6db-7a145c56d3a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf3e0bca9c4b594a91576d1ca934705d5f2d0f5959fa7ad168979a6de27e9ea7\"" Jan 29 10:57:32.981769 kubelet[2409]: E0129 10:57:32.981701 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:33.105090 kubelet[2409]: E0129 10:57:33.104115 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rxwp" podUID="d93af062-ea94-445a-8eee-553c377a0330" Jan 29 10:57:33.651684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2634293643.mount: Deactivated successfully. 
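Earlier in this log (10:57:30.932-30.933) the kubelet records the node's pod CIDR changing from "" to 192.168.1.0/24 and pushes it to the runtime over CRI. As a quick, stdlib-only sketch of what that assignment means for this node, the snippet below just does the CIDR arithmetic on the value copied from the log; how many addresses the IPAM plugin actually hands out to pods is plugin-specific and not shown here.

```python
# Minimal sketch: the pod CIDR assigned to this node, taken verbatim from the
# "Updating Pod CIDR" / "Updating runtime config through cri with podcidr"
# kubelet entries above.  Pure stdlib.
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.1.0/24")

print(f"pod CIDR        : {pod_cidr}")
print(f"total addresses : {pod_cidr.num_addresses}")        # 256
print(f"usable pod IPs  : {pod_cidr.num_addresses - 2}")    # 254 (network/broadcast excluded)
print(f"first few IPs   : {[str(ip) for ip in list(pod_cidr.hosts())[:4]]}")
```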
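The repeated driver-call.go / plugins.go errors (which continue in bursts below) all describe the same probe: the kubelet finds a FlexVolume plugin directory named nodeagent~uds, tries to run /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, finds no executable and therefore gets empty stdout, and then fails to parse that empty string as JSON. For orientation, here is a minimal, hypothetical sketch of what a FlexVolume driver is conventionally expected to print for init; the real uds driver here is presumably the one Calico's flexvol-driver init container installs later in this log, so this noise should stop once that container has run.

```python
#!/usr/bin/env python3
# Hypothetical sketch of the FlexVolume call convention behind the
# driver-call.go errors above: the driver binary is invoked as "<driver> init"
# and must print a JSON status object on stdout.  Empty stdout is exactly what
# produces "unexpected end of JSON input" in the kubelet log.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # "attach": False tells the kubelet this driver has no attach/detach step.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Operations this sketch does not implement (mount, unmount, ...).
    print(json.dumps({"status": "Not supported",
                      "message": f"operation {op!r} not implemented"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```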
Jan 29 10:57:33.982004 kubelet[2409]: E0129 10:57:33.981902 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:34.178824 containerd[1937]: time="2025-01-29T10:57:34.178266664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:34.180237 containerd[1937]: time="2025-01-29T10:57:34.180151444Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662712" Jan 29 10:57:34.182153 containerd[1937]: time="2025-01-29T10:57:34.182075776Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:34.186462 containerd[1937]: time="2025-01-29T10:57:34.185990524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:34.187407 containerd[1937]: time="2025-01-29T10:57:34.187345984Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.818424161s" Jan 29 10:57:34.187526 containerd[1937]: time="2025-01-29T10:57:34.187401052Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 29 10:57:34.190248 containerd[1937]: time="2025-01-29T10:57:34.190185052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 10:57:34.194346 containerd[1937]: time="2025-01-29T10:57:34.194174056Z" level=info msg="CreateContainer within sandbox \"e2c908ea473e78c954a7ffca7b84c6030e2070954050e9477ce81eae026255fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 10:57:34.231072 containerd[1937]: time="2025-01-29T10:57:34.230868088Z" level=info msg="CreateContainer within sandbox \"e2c908ea473e78c954a7ffca7b84c6030e2070954050e9477ce81eae026255fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ae59c6f8badad7ebb73c6612052330a1cd7d1afc2cfbbdfe416721d5ae090bd\"" Jan 29 10:57:34.232615 containerd[1937]: time="2025-01-29T10:57:34.232143292Z" level=info msg="StartContainer for \"2ae59c6f8badad7ebb73c6612052330a1cd7d1afc2cfbbdfe416721d5ae090bd\"" Jan 29 10:57:34.285820 systemd[1]: Started cri-containerd-2ae59c6f8badad7ebb73c6612052330a1cd7d1afc2cfbbdfe416721d5ae090bd.scope - libcontainer container 2ae59c6f8badad7ebb73c6612052330a1cd7d1afc2cfbbdfe416721d5ae090bd. 
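The pull of registry.k8s.io/kube-proxy:v1.30.9 above reports both an image size and a wall-clock duration; a small worked example using only those two logged figures gives the effective pull rate. Treat it as a rough number, since the size containerd reports and the bytes actually transferred over the wire can differ.

```python
# Worked example: effective pull rate for registry.k8s.io/kube-proxy:v1.30.9,
# using only the two figures reported above (size "25661731" bytes,
# pulled "in 1.818424161s").
size_bytes = 25_661_731
duration_s = 1.818424161

mib = size_bytes / (1024 * 1024)
print(f"image size : {mib:.1f} MiB")                 # ~24.5 MiB
print(f"pull time  : {duration_s:.3f} s")
print(f"throughput : {mib / duration_s:.1f} MiB/s")  # ~13.5 MiB/s
```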
Jan 29 10:57:34.350912 containerd[1937]: time="2025-01-29T10:57:34.350825657Z" level=info msg="StartContainer for \"2ae59c6f8badad7ebb73c6612052330a1cd7d1afc2cfbbdfe416721d5ae090bd\" returns successfully" Jan 29 10:57:34.983126 kubelet[2409]: E0129 10:57:34.983047 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:35.103885 kubelet[2409]: E0129 10:57:35.103207 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rxwp" podUID="d93af062-ea94-445a-8eee-553c377a0330" Jan 29 10:57:35.165835 kubelet[2409]: I0129 10:57:35.165733 2409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wj76b" podStartSLOduration=3.342413284 podStartE2EDuration="5.165713657s" podCreationTimestamp="2025-01-29 10:57:30 +0000 UTC" firstStartedPulling="2025-01-29 10:57:32.366682419 +0000 UTC m=+3.603733951" lastFinishedPulling="2025-01-29 10:57:34.189982804 +0000 UTC m=+5.427034324" observedRunningTime="2025-01-29 10:57:35.160244273 +0000 UTC m=+6.397295841" watchObservedRunningTime="2025-01-29 10:57:35.165713657 +0000 UTC m=+6.402765189" Jan 29 10:57:35.227727 kubelet[2409]: E0129 10:57:35.227683 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.227727 kubelet[2409]: W0129 10:57:35.227721 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.227963 kubelet[2409]: E0129 10:57:35.227754 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.228323 kubelet[2409]: E0129 10:57:35.228261 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.228323 kubelet[2409]: W0129 10:57:35.228282 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.228323 kubelet[2409]: E0129 10:57:35.228306 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.228788 kubelet[2409]: E0129 10:57:35.228639 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.228788 kubelet[2409]: W0129 10:57:35.228656 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.228788 kubelet[2409]: E0129 10:57:35.228677 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:57:35.229025 kubelet[2409]: E0129 10:57:35.228954 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.229025 kubelet[2409]: W0129 10:57:35.228970 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.229025 kubelet[2409]: E0129 10:57:35.228989 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.229369 kubelet[2409]: E0129 10:57:35.229277 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.229369 kubelet[2409]: W0129 10:57:35.229293 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.229369 kubelet[2409]: E0129 10:57:35.229312 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.229631 kubelet[2409]: E0129 10:57:35.229603 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.229713 kubelet[2409]: W0129 10:57:35.229632 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.229713 kubelet[2409]: E0129 10:57:35.229654 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.229947 kubelet[2409]: E0129 10:57:35.229930 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.230040 kubelet[2409]: W0129 10:57:35.229945 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.230040 kubelet[2409]: E0129 10:57:35.229964 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.230272 kubelet[2409]: E0129 10:57:35.230247 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.230272 kubelet[2409]: W0129 10:57:35.230264 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.230272 kubelet[2409]: E0129 10:57:35.230285 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:57:35.230693 kubelet[2409]: E0129 10:57:35.230652 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.230693 kubelet[2409]: W0129 10:57:35.230682 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.230908 kubelet[2409]: E0129 10:57:35.230706 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.231083 kubelet[2409]: E0129 10:57:35.231007 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.231083 kubelet[2409]: W0129 10:57:35.231024 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.231083 kubelet[2409]: E0129 10:57:35.231042 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.231471 kubelet[2409]: E0129 10:57:35.231308 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.231471 kubelet[2409]: W0129 10:57:35.231324 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.231471 kubelet[2409]: E0129 10:57:35.231372 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.232102 kubelet[2409]: E0129 10:57:35.231887 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.232102 kubelet[2409]: W0129 10:57:35.231918 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.232102 kubelet[2409]: E0129 10:57:35.231945 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.232441 kubelet[2409]: E0129 10:57:35.232286 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.232441 kubelet[2409]: W0129 10:57:35.232304 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.232441 kubelet[2409]: E0129 10:57:35.232325 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:57:35.232681 kubelet[2409]: E0129 10:57:35.232631 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.232681 kubelet[2409]: W0129 10:57:35.232647 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.232681 kubelet[2409]: E0129 10:57:35.232665 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.233504 kubelet[2409]: E0129 10:57:35.233289 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.233504 kubelet[2409]: W0129 10:57:35.233319 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.233504 kubelet[2409]: E0129 10:57:35.233360 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.234900 kubelet[2409]: E0129 10:57:35.234829 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.235472 kubelet[2409]: W0129 10:57:35.235428 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.235814 kubelet[2409]: E0129 10:57:35.235762 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.238690 kubelet[2409]: E0129 10:57:35.238640 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.238690 kubelet[2409]: W0129 10:57:35.238680 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.238920 kubelet[2409]: E0129 10:57:35.238718 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.239169 kubelet[2409]: E0129 10:57:35.239118 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.239169 kubelet[2409]: W0129 10:57:35.239145 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.239169 kubelet[2409]: E0129 10:57:35.239168 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:57:35.239523 kubelet[2409]: E0129 10:57:35.239452 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.239523 kubelet[2409]: W0129 10:57:35.239493 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.239722 kubelet[2409]: E0129 10:57:35.239540 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.240062 kubelet[2409]: E0129 10:57:35.240029 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.240062 kubelet[2409]: W0129 10:57:35.240057 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.240250 kubelet[2409]: E0129 10:57:35.240081 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.240773 kubelet[2409]: E0129 10:57:35.240741 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.240940 kubelet[2409]: W0129 10:57:35.240771 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.240940 kubelet[2409]: E0129 10:57:35.240799 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.242817 kubelet[2409]: E0129 10:57:35.242779 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.242817 kubelet[2409]: W0129 10:57:35.242814 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.243068 kubelet[2409]: E0129 10:57:35.242863 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.243470 kubelet[2409]: E0129 10:57:35.243440 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.243573 kubelet[2409]: W0129 10:57:35.243468 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.243715 kubelet[2409]: E0129 10:57:35.243646 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:57:35.244742 kubelet[2409]: E0129 10:57:35.244696 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.244742 kubelet[2409]: W0129 10:57:35.244731 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.244945 kubelet[2409]: E0129 10:57:35.244859 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.247769 kubelet[2409]: E0129 10:57:35.247549 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.247769 kubelet[2409]: W0129 10:57:35.247583 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.248226 kubelet[2409]: E0129 10:57:35.248010 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.248546 kubelet[2409]: E0129 10:57:35.248448 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.248546 kubelet[2409]: W0129 10:57:35.248474 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.248962 kubelet[2409]: E0129 10:57:35.248732 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.249544 kubelet[2409]: E0129 10:57:35.249254 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.249544 kubelet[2409]: W0129 10:57:35.249280 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.249544 kubelet[2409]: E0129 10:57:35.249314 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.249967 kubelet[2409]: E0129 10:57:35.249941 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.250182 kubelet[2409]: W0129 10:57:35.250067 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.250301 kubelet[2409]: E0129 10:57:35.250274 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:57:35.251650 kubelet[2409]: E0129 10:57:35.251562 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.251650 kubelet[2409]: W0129 10:57:35.251596 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.252508 kubelet[2409]: E0129 10:57:35.252237 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.252853 kubelet[2409]: E0129 10:57:35.252690 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.252853 kubelet[2409]: W0129 10:57:35.252721 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.252853 kubelet[2409]: E0129 10:57:35.252799 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.254224 kubelet[2409]: E0129 10:57:35.253777 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.254224 kubelet[2409]: W0129 10:57:35.253805 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.254224 kubelet[2409]: E0129 10:57:35.253845 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.254933 kubelet[2409]: E0129 10:57:35.254817 2409 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:57:35.254933 kubelet[2409]: W0129 10:57:35.254846 2409 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:57:35.254933 kubelet[2409]: E0129 10:57:35.254876 2409 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:57:35.362029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2188088421.mount: Deactivated successfully. 
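The pod_startup_latency_tracker entry at 10:57:35.165 reports podStartSLOduration=3.342413284s and podStartE2EDuration=5.165713657s for kube-proxy-wj76b, together with m=+ monotonic offsets for when image pulling started and finished. The sketch below reproduces both figures from those logged values alone; the relationship used (SLO duration equals end-to-end duration minus time spent pulling images) matches the usual definition of the pod-startup SLI, and the arithmetic comes out exactly.

```python
# Worked example: the pod_startup_latency_tracker figures above for
# kube-system/kube-proxy-wj76b.  Every number below is copied from that entry;
# the m=+ values are the kubelet's monotonic clock offsets in seconds.
e2e = 5.165713657                    # podStartE2EDuration
                                     # (= watchObservedRunningTime 10:57:35.165713657
                                     #    minus podCreationTimestamp 10:57:30)
first_started_pulling = 3.603733951  # m=+ offset when image pulling began
last_finished_pulling = 5.427034324  # m=+ offset when the pull finished

pull_window = last_finished_pulling - first_started_pulling  # ~1.823 s pulling
slo_duration = e2e - pull_window                             # pull time excluded

print(f"time spent pulling images : {pull_window:.9f} s")    # 1.823300373
print(f"podStartSLOduration       : {slo_duration:.9f} s")   # 3.342413284, as logged
```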
Jan 29 10:57:35.493571 containerd[1937]: time="2025-01-29T10:57:35.492824250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:35.495000 containerd[1937]: time="2025-01-29T10:57:35.494817774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 29 10:57:35.496816 containerd[1937]: time="2025-01-29T10:57:35.496561038Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:35.500448 containerd[1937]: time="2025-01-29T10:57:35.500368470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:35.501999 containerd[1937]: time="2025-01-29T10:57:35.501801174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.311554106s" Jan 29 10:57:35.501999 containerd[1937]: time="2025-01-29T10:57:35.501852666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 29 10:57:35.506554 containerd[1937]: time="2025-01-29T10:57:35.506464194Z" level=info msg="CreateContainer within sandbox \"bf3e0bca9c4b594a91576d1ca934705d5f2d0f5959fa7ad168979a6de27e9ea7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 10:57:35.535800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3614832105.mount: Deactivated successfully. Jan 29 10:57:35.538644 containerd[1937]: time="2025-01-29T10:57:35.538567350Z" level=info msg="CreateContainer within sandbox \"bf3e0bca9c4b594a91576d1ca934705d5f2d0f5959fa7ad168979a6de27e9ea7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"91ee4b1c47267490123babf917cc79d88b2ad79a212911a75fe33870541e6786\"" Jan 29 10:57:35.539648 containerd[1937]: time="2025-01-29T10:57:35.539581038Z" level=info msg="StartContainer for \"91ee4b1c47267490123babf917cc79d88b2ad79a212911a75fe33870541e6786\"" Jan 29 10:57:35.589808 systemd[1]: Started cri-containerd-91ee4b1c47267490123babf917cc79d88b2ad79a212911a75fe33870541e6786.scope - libcontainer container 91ee4b1c47267490123babf917cc79d88b2ad79a212911a75fe33870541e6786. Jan 29 10:57:35.644922 containerd[1937]: time="2025-01-29T10:57:35.644769979Z" level=info msg="StartContainer for \"91ee4b1c47267490123babf917cc79d88b2ad79a212911a75fe33870541e6786\" returns successfully" Jan 29 10:57:35.661462 systemd[1]: cri-containerd-91ee4b1c47267490123babf917cc79d88b2ad79a212911a75fe33870541e6786.scope: Deactivated successfully. 
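Two systemd naming conventions show up throughout this log: pod cgroup slices such as kubepods-besteffort-podbb140f78_f754_4fd1_a6db_7a145c56d3a2.slice (the pod UID with its dashes rewritten to underscores) and transient mount units such as var-lib-containerd-tmpmounts-containerd\x2dmount2634293643.mount (the mounted path with "/" turned into "-" and literal dashes hex-escaped). The sketch below reproduces both names from the raw values seen in the log; it only implements the escaping rules needed for the characters that actually appear here, not the full systemd-escape algorithm.

```python
# Sketch: the two systemd naming schemes visible in this log.
#  1. Pod cgroup slices: kubepods-<qos>-pod<uid>.slice, with "-" in the pod UID
#     rewritten to "_" (the systemd cgroup driver's convention).
#  2. Transient mount units: the mounted path with "/" turned into "-" and other
#     bytes (including literal "-") hex-escaped as \xNN, mirroring what
#     `systemd-escape --path` produces for the characters seen here.

def pod_slice(qos: str, uid: str) -> str:
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

def mount_unit(path: str) -> str:
    escaped = []
    for ch in path.strip("/"):
        if ch == "/":
            escaped.append("-")
        elif ch.isalnum() or ch in "_.":
            escaped.append(ch)
        else:
            escaped.append(f"\\x{ord(ch):02x}")
    return "".join(escaped) + ".mount"

# Values taken from the log:
print(pod_slice("besteffort", "bb140f78-f754-4fd1-a6db-7a145c56d3a2"))
# -> kubepods-besteffort-podbb140f78_f754_4fd1_a6db_7a145c56d3a2.slice
print(mount_unit("/var/lib/containerd/tmpmounts/containerd-mount2634293643"))
# -> var-lib-containerd-tmpmounts-containerd\x2dmount2634293643.mount
```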
Jan 29 10:57:35.983504 kubelet[2409]: E0129 10:57:35.983420 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:36.060170 containerd[1937]: time="2025-01-29T10:57:36.060012125Z" level=info msg="shim disconnected" id=91ee4b1c47267490123babf917cc79d88b2ad79a212911a75fe33870541e6786 namespace=k8s.io Jan 29 10:57:36.060170 containerd[1937]: time="2025-01-29T10:57:36.060084653Z" level=warning msg="cleaning up after shim disconnected" id=91ee4b1c47267490123babf917cc79d88b2ad79a212911a75fe33870541e6786 namespace=k8s.io Jan 29 10:57:36.060170 containerd[1937]: time="2025-01-29T10:57:36.060104705Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:57:36.138736 containerd[1937]: time="2025-01-29T10:57:36.138578321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 10:57:36.320653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91ee4b1c47267490123babf917cc79d88b2ad79a212911a75fe33870541e6786-rootfs.mount: Deactivated successfully. Jan 29 10:57:36.984099 kubelet[2409]: E0129 10:57:36.984030 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:37.104053 kubelet[2409]: E0129 10:57:37.103897 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rxwp" podUID="d93af062-ea94-445a-8eee-553c377a0330" Jan 29 10:57:37.984626 kubelet[2409]: E0129 10:57:37.984577 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:38.986127 kubelet[2409]: E0129 10:57:38.986075 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:39.104366 kubelet[2409]: E0129 10:57:39.104241 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rxwp" podUID="d93af062-ea94-445a-8eee-553c377a0330" Jan 29 10:57:39.615814 containerd[1937]: time="2025-01-29T10:57:39.615737483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:39.618935 containerd[1937]: time="2025-01-29T10:57:39.618847451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 29 10:57:39.622726 containerd[1937]: time="2025-01-29T10:57:39.622653923Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:39.627270 containerd[1937]: time="2025-01-29T10:57:39.627176483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:39.628688 containerd[1937]: time="2025-01-29T10:57:39.628505555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id 
\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.489842406s" Jan 29 10:57:39.628688 containerd[1937]: time="2025-01-29T10:57:39.628557899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 29 10:57:39.633034 containerd[1937]: time="2025-01-29T10:57:39.632961335Z" level=info msg="CreateContainer within sandbox \"bf3e0bca9c4b594a91576d1ca934705d5f2d0f5959fa7ad168979a6de27e9ea7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 10:57:39.671845 containerd[1937]: time="2025-01-29T10:57:39.671747759Z" level=info msg="CreateContainer within sandbox \"bf3e0bca9c4b594a91576d1ca934705d5f2d0f5959fa7ad168979a6de27e9ea7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3c192a35491ea6a394856ad13b8c18c92b174bbcc627f96481a05b58a33e865a\"" Jan 29 10:57:39.672610 containerd[1937]: time="2025-01-29T10:57:39.672548627Z" level=info msg="StartContainer for \"3c192a35491ea6a394856ad13b8c18c92b174bbcc627f96481a05b58a33e865a\"" Jan 29 10:57:39.731783 systemd[1]: Started cri-containerd-3c192a35491ea6a394856ad13b8c18c92b174bbcc627f96481a05b58a33e865a.scope - libcontainer container 3c192a35491ea6a394856ad13b8c18c92b174bbcc627f96481a05b58a33e865a. Jan 29 10:57:39.796575 containerd[1937]: time="2025-01-29T10:57:39.795810144Z" level=info msg="StartContainer for \"3c192a35491ea6a394856ad13b8c18c92b174bbcc627f96481a05b58a33e865a\" returns successfully" Jan 29 10:57:39.986710 kubelet[2409]: E0129 10:57:39.986630 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:40.987248 kubelet[2409]: E0129 10:57:40.986994 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:41.104659 kubelet[2409]: E0129 10:57:41.104149 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2rxwp" podUID="d93af062-ea94-445a-8eee-553c377a0330" Jan 29 10:57:41.143066 containerd[1937]: time="2025-01-29T10:57:41.142991290Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 10:57:41.148725 systemd[1]: cri-containerd-3c192a35491ea6a394856ad13b8c18c92b174bbcc627f96481a05b58a33e865a.scope: Deactivated successfully. Jan 29 10:57:41.185859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c192a35491ea6a394856ad13b8c18c92b174bbcc627f96481a05b58a33e865a-rootfs.mount: Deactivated successfully. 
Jan 29 10:57:41.195663 kubelet[2409]: I0129 10:57:41.195617 2409 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 10:57:41.968205 containerd[1937]: time="2025-01-29T10:57:41.968062742Z" level=info msg="shim disconnected" id=3c192a35491ea6a394856ad13b8c18c92b174bbcc627f96481a05b58a33e865a namespace=k8s.io Jan 29 10:57:41.968680 containerd[1937]: time="2025-01-29T10:57:41.968224022Z" level=warning msg="cleaning up after shim disconnected" id=3c192a35491ea6a394856ad13b8c18c92b174bbcc627f96481a05b58a33e865a namespace=k8s.io Jan 29 10:57:41.968680 containerd[1937]: time="2025-01-29T10:57:41.968248346Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:57:41.987911 kubelet[2409]: E0129 10:57:41.987807 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:42.175929 containerd[1937]: time="2025-01-29T10:57:42.175785275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 10:57:42.988926 kubelet[2409]: E0129 10:57:42.988865 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:43.113948 systemd[1]: Created slice kubepods-besteffort-podd93af062_ea94_445a_8eee_553c377a0330.slice - libcontainer container kubepods-besteffort-podd93af062_ea94_445a_8eee_553c377a0330.slice. Jan 29 10:57:43.118759 containerd[1937]: time="2025-01-29T10:57:43.118692276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:0,}" Jan 29 10:57:43.247580 containerd[1937]: time="2025-01-29T10:57:43.244701325Z" level=error msg="Failed to destroy network for sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:43.247580 containerd[1937]: time="2025-01-29T10:57:43.245299657Z" level=error msg="encountered an error cleaning up failed sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:43.247580 containerd[1937]: time="2025-01-29T10:57:43.245416021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:43.248213 kubelet[2409]: E0129 10:57:43.245970 2409 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:43.248213 kubelet[2409]: E0129 10:57:43.246054 2409 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:43.248213 kubelet[2409]: E0129 10:57:43.246088 2409 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:43.248400 kubelet[2409]: E0129 10:57:43.246151 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2rxwp" podUID="d93af062-ea94-445a-8eee-553c377a0330" Jan 29 10:57:43.250943 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e-shm.mount: Deactivated successfully. Jan 29 10:57:43.989648 kubelet[2409]: E0129 10:57:43.989581 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:44.029321 kubelet[2409]: I0129 10:57:44.029251 2409 topology_manager.go:215] "Topology Admit Handler" podUID="7ad6d355-1764-4b70-9308-9d135bac54fb" podNamespace="default" podName="nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:44.041548 systemd[1]: Created slice kubepods-besteffort-pod7ad6d355_1764_4b70_9308_9d135bac54fb.slice - libcontainer container kubepods-besteffort-pod7ad6d355_1764_4b70_9308_9d135bac54fb.slice. 
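Every failed RunPodSandbox above and below bottoms out in the same message: stat /var/lib/calico/nodename: no such file or directory, with the built-in hint to check that calico/node is running and has mounted /var/lib/calico/. The Calico CNI plugin reads this node's name from that file, and calico-node writes it once it is up (the calico-node pod does mount a var-lib-calico host path, per the volume list earlier in the log). A small diagnostic sketch along the lines of that hint:

```python
# Sketch: a local check for the failure mode in the sandbox errors above.
# The Calico CNI plugin reads the node name from /var/lib/calico/nodename,
# which calico-node writes after it starts; until then, every pod sandbox that
# needs Calico networking fails with the stat error seen in this log.
import os
from pathlib import Path

NODENAME_FILE = Path("/var/lib/calico/nodename")

if NODENAME_FILE.exists():
    print(f"{NODENAME_FILE}: {NODENAME_FILE.read_text().strip()!r}")
else:
    # The hostname is only a hint for which node to inspect; the kubelet here
    # registers under the IP 172.31.18.182 rather than the hostname.
    print(f"{NODENAME_FILE} is missing -- calico-node has not initialised yet; "
          f"check the calico-system/calico-node pod on this host "
          f"(hostname: {os.uname().nodename!r})")
```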
Jan 29 10:57:44.181826 kubelet[2409]: I0129 10:57:44.180063 2409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e" Jan 29 10:57:44.181976 containerd[1937]: time="2025-01-29T10:57:44.181247413Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\"" Jan 29 10:57:44.181976 containerd[1937]: time="2025-01-29T10:57:44.181580941Z" level=info msg="Ensure that sandbox 15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e in task-service has been cleanup successfully" Jan 29 10:57:44.182497 containerd[1937]: time="2025-01-29T10:57:44.182351473Z" level=info msg="TearDown network for sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" successfully" Jan 29 10:57:44.182618 containerd[1937]: time="2025-01-29T10:57:44.182589565Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" returns successfully" Jan 29 10:57:44.183938 containerd[1937]: time="2025-01-29T10:57:44.183886585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:1,}" Jan 29 10:57:44.185758 systemd[1]: run-netns-cni\x2d11967278\x2d1731\x2d36e4\x2deea0\x2d38ed238fac6c.mount: Deactivated successfully. Jan 29 10:57:44.201471 kubelet[2409]: I0129 10:57:44.201334 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84tfs\" (UniqueName: \"kubernetes.io/projected/7ad6d355-1764-4b70-9308-9d135bac54fb-kube-api-access-84tfs\") pod \"nginx-deployment-85f456d6dd-c77sv\" (UID: \"7ad6d355-1764-4b70-9308-9d135bac54fb\") " pod="default/nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:44.353557 containerd[1937]: time="2025-01-29T10:57:44.352024850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:0,}" Jan 29 10:57:44.517686 containerd[1937]: time="2025-01-29T10:57:44.517151847Z" level=error msg="Failed to destroy network for sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:44.519064 containerd[1937]: time="2025-01-29T10:57:44.519003831Z" level=error msg="encountered an error cleaning up failed sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:44.520571 containerd[1937]: time="2025-01-29T10:57:44.519839115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:44.520700 kubelet[2409]: E0129 10:57:44.520142 2409 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:44.520700 kubelet[2409]: E0129 10:57:44.520215 2409 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:44.520700 kubelet[2409]: E0129 10:57:44.520248 2409 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:44.521601 kubelet[2409]: E0129 10:57:44.520331 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2rxwp" podUID="d93af062-ea94-445a-8eee-553c377a0330" Jan 29 10:57:44.575957 containerd[1937]: time="2025-01-29T10:57:44.575628315Z" level=error msg="Failed to destroy network for sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:44.577705 containerd[1937]: time="2025-01-29T10:57:44.577438443Z" level=error msg="encountered an error cleaning up failed sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:44.577705 containerd[1937]: time="2025-01-29T10:57:44.577603215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:44.578812 
kubelet[2409]: E0129 10:57:44.577910 2409 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:44.578812 kubelet[2409]: E0129 10:57:44.577984 2409 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:44.578812 kubelet[2409]: E0129 10:57:44.578038 2409 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:44.579009 kubelet[2409]: E0129 10:57:44.578116 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-c77sv_default(7ad6d355-1764-4b70-9308-9d135bac54fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-c77sv_default(7ad6d355-1764-4b70-9308-9d135bac54fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-c77sv" podUID="7ad6d355-1764-4b70-9308-9d135bac54fb" Jan 29 10:57:44.846836 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 29 10:57:44.990704 kubelet[2409]: E0129 10:57:44.990600 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:45.189990 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a-shm.mount: Deactivated successfully. Jan 29 10:57:45.192838 kubelet[2409]: I0129 10:57:45.191232 2409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae" Jan 29 10:57:45.190207 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae-shm.mount: Deactivated successfully. 
Jan 29 10:57:45.196341 containerd[1937]: time="2025-01-29T10:57:45.195952538Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\"" Jan 29 10:57:45.197037 containerd[1937]: time="2025-01-29T10:57:45.196770182Z" level=info msg="Ensure that sandbox 8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae in task-service has been cleanup successfully" Jan 29 10:57:45.200643 containerd[1937]: time="2025-01-29T10:57:45.200593802Z" level=info msg="TearDown network for sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" successfully" Jan 29 10:57:45.202267 containerd[1937]: time="2025-01-29T10:57:45.201328754Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" returns successfully" Jan 29 10:57:45.202161 systemd[1]: run-netns-cni\x2db3f7568a\x2d2ad2\x2d583b\x2d139a\x2ddb74eadd4492.mount: Deactivated successfully. Jan 29 10:57:45.203369 kubelet[2409]: I0129 10:57:45.202814 2409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a" Jan 29 10:57:45.205866 containerd[1937]: time="2025-01-29T10:57:45.205805210Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\"" Jan 29 10:57:45.206833 containerd[1937]: time="2025-01-29T10:57:45.206794178Z" level=info msg="TearDown network for sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" successfully" Jan 29 10:57:45.207097 containerd[1937]: time="2025-01-29T10:57:45.205934942Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\"" Jan 29 10:57:45.207169 containerd[1937]: time="2025-01-29T10:57:45.207101882Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" returns successfully" Jan 29 10:57:45.209007 containerd[1937]: time="2025-01-29T10:57:45.207890114Z" level=info msg="Ensure that sandbox bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a in task-service has been cleanup successfully" Jan 29 10:57:45.209007 containerd[1937]: time="2025-01-29T10:57:45.208061990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:2,}" Jan 29 10:57:45.209007 containerd[1937]: time="2025-01-29T10:57:45.208276958Z" level=info msg="TearDown network for sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" successfully" Jan 29 10:57:45.209007 containerd[1937]: time="2025-01-29T10:57:45.208317842Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" returns successfully" Jan 29 10:57:45.210223 containerd[1937]: time="2025-01-29T10:57:45.209569046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:1,}" Jan 29 10:57:45.213448 systemd[1]: run-netns-cni\x2df3ea8400\x2d1ba6\x2d6415\x2d8ff9\x2d3b585b9e1d11.mount: Deactivated successfully. 
Jan 29 10:57:45.432213 containerd[1937]: time="2025-01-29T10:57:45.432018484Z" level=error msg="Failed to destroy network for sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:45.433544 containerd[1937]: time="2025-01-29T10:57:45.433452784Z" level=error msg="encountered an error cleaning up failed sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:45.434293 containerd[1937]: time="2025-01-29T10:57:45.434204824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:45.435190 kubelet[2409]: E0129 10:57:45.435132 2409 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:45.435776 kubelet[2409]: E0129 10:57:45.435214 2409 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:45.435776 kubelet[2409]: E0129 10:57:45.435257 2409 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:45.435776 kubelet[2409]: E0129 10:57:45.435327 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-c77sv_default(7ad6d355-1764-4b70-9308-9d135bac54fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-c77sv_default(7ad6d355-1764-4b70-9308-9d135bac54fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-85f456d6dd-c77sv" podUID="7ad6d355-1764-4b70-9308-9d135bac54fb" Jan 29 10:57:45.451780 containerd[1937]: time="2025-01-29T10:57:45.450054700Z" level=error msg="Failed to destroy network for sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:45.452183 containerd[1937]: time="2025-01-29T10:57:45.451931272Z" level=error msg="encountered an error cleaning up failed sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:45.452183 containerd[1937]: time="2025-01-29T10:57:45.452041264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:45.452885 kubelet[2409]: E0129 10:57:45.452597 2409 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:45.453001 kubelet[2409]: E0129 10:57:45.452899 2409 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:45.453001 kubelet[2409]: E0129 10:57:45.452943 2409 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:45.453161 kubelet[2409]: E0129 10:57:45.453038 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2rxwp" podUID="d93af062-ea94-445a-8eee-553c377a0330" Jan 29 10:57:45.991085 kubelet[2409]: E0129 10:57:45.991005 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:46.188343 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835-shm.mount: Deactivated successfully. Jan 29 10:57:46.213863 kubelet[2409]: I0129 10:57:46.212823 2409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835" Jan 29 10:57:46.214897 containerd[1937]: time="2025-01-29T10:57:46.214804983Z" level=info msg="StopPodSandbox for \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\"" Jan 29 10:57:46.215146 containerd[1937]: time="2025-01-29T10:57:46.215089647Z" level=info msg="Ensure that sandbox 952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835 in task-service has been cleanup successfully" Jan 29 10:57:46.219456 systemd[1]: run-netns-cni\x2db4e7aa39\x2da454\x2d0201\x2d6047\x2d29cc1a98db2b.mount: Deactivated successfully. Jan 29 10:57:46.222611 containerd[1937]: time="2025-01-29T10:57:46.222562911Z" level=info msg="TearDown network for sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" successfully" Jan 29 10:57:46.223437 containerd[1937]: time="2025-01-29T10:57:46.223396803Z" level=info msg="StopPodSandbox for \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" returns successfully" Jan 29 10:57:46.224825 containerd[1937]: time="2025-01-29T10:57:46.224600884Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\"" Jan 29 10:57:46.226508 kubelet[2409]: I0129 10:57:46.225018 2409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48" Jan 29 10:57:46.228058 containerd[1937]: time="2025-01-29T10:57:46.228007756Z" level=info msg="TearDown network for sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" successfully" Jan 29 10:57:46.228246 containerd[1937]: time="2025-01-29T10:57:46.228218836Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" returns successfully" Jan 29 10:57:46.229999 containerd[1937]: time="2025-01-29T10:57:46.229938208Z" level=info msg="StopPodSandbox for \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\"" Jan 29 10:57:46.230272 containerd[1937]: time="2025-01-29T10:57:46.230228452Z" level=info msg="Ensure that sandbox 30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48 in task-service has been cleanup successfully" Jan 29 10:57:46.230527 containerd[1937]: time="2025-01-29T10:57:46.229944796Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\"" Jan 29 10:57:46.232749 containerd[1937]: time="2025-01-29T10:57:46.232704280Z" level=info msg="TearDown network for sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" successfully" Jan 29 10:57:46.233630 containerd[1937]: time="2025-01-29T10:57:46.233573476Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" returns successfully" Jan 29 10:57:46.234469 systemd[1]: 
run-netns-cni\x2dbbc4dbc8\x2d7b4b\x2d8999\x2d0f6c\x2d662059ca804a.mount: Deactivated successfully. Jan 29 10:57:46.234712 containerd[1937]: time="2025-01-29T10:57:46.233133832Z" level=info msg="TearDown network for sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" successfully" Jan 29 10:57:46.235839 containerd[1937]: time="2025-01-29T10:57:46.235182460Z" level=info msg="StopPodSandbox for \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" returns successfully" Jan 29 10:57:46.237382 containerd[1937]: time="2025-01-29T10:57:46.237039496Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\"" Jan 29 10:57:46.237382 containerd[1937]: time="2025-01-29T10:57:46.237215464Z" level=info msg="TearDown network for sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" successfully" Jan 29 10:57:46.237382 containerd[1937]: time="2025-01-29T10:57:46.237264088Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" returns successfully" Jan 29 10:57:46.237382 containerd[1937]: time="2025-01-29T10:57:46.237046480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:3,}" Jan 29 10:57:46.240858 containerd[1937]: time="2025-01-29T10:57:46.240319792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:2,}" Jan 29 10:57:46.478265 containerd[1937]: time="2025-01-29T10:57:46.478023053Z" level=error msg="Failed to destroy network for sandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:46.479503 containerd[1937]: time="2025-01-29T10:57:46.479229521Z" level=error msg="encountered an error cleaning up failed sandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:46.479503 containerd[1937]: time="2025-01-29T10:57:46.479331461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:46.479872 kubelet[2409]: E0129 10:57:46.479808 2409 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:46.479953 kubelet[2409]: E0129 10:57:46.479904 2409 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:46.480007 kubelet[2409]: E0129 10:57:46.479941 2409 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:46.480080 kubelet[2409]: E0129 10:57:46.480007 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2rxwp" podUID="d93af062-ea94-445a-8eee-553c377a0330" Jan 29 10:57:46.483809 containerd[1937]: time="2025-01-29T10:57:46.483679361Z" level=error msg="Failed to destroy network for sandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:46.484779 containerd[1937]: time="2025-01-29T10:57:46.484531877Z" level=error msg="encountered an error cleaning up failed sandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:46.484779 containerd[1937]: time="2025-01-29T10:57:46.484639565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:46.485035 kubelet[2409]: E0129 10:57:46.484973 2409 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:46.485237 kubelet[2409]: E0129 10:57:46.485169 2409 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:46.485394 kubelet[2409]: E0129 10:57:46.485343 2409 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:46.486543 kubelet[2409]: E0129 10:57:46.485600 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-c77sv_default(7ad6d355-1764-4b70-9308-9d135bac54fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-c77sv_default(7ad6d355-1764-4b70-9308-9d135bac54fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-c77sv" podUID="7ad6d355-1764-4b70-9308-9d135bac54fb" Jan 29 10:57:46.992087 kubelet[2409]: E0129 10:57:46.992025 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:47.187829 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890-shm.mount: Deactivated successfully. Jan 29 10:57:47.188007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa-shm.mount: Deactivated successfully. 
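By this point the journal has recorded the same create, fail, tear down, retry cycle several times for both pods, with the Attempt counter in the PodSandboxMetadata climbing on each pass. A quick way to quantify that churn from a capture like this is to count the failed RunPodSandbox entries per pod; the helper below is hypothetical, not a kubelet or containerd tool, and assumes the journal text is piped to it on standard input.

package main

// sandboxfailures: a hypothetical helper that tallies failed RunPodSandbox
// attempts per pod from journal text read on stdin. Illustrative only.

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the containerd error entries above and captures the pod name from
// the PodSandboxMetadata, e.g. "csi-node-driver-2rxwp".
var failedRun = regexp.MustCompile(`RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),.*failed, error`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are long
	for sc.Scan() {
		if m := failedRun.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
	for pod, n := range counts {
		fmt.Printf("%-40s %d failed sandbox attempts\n", pod, n)
	}
}

Fed the portion of the journal shown here, it would report csi-node-driver-2rxwp and nginx-deployment-85f456d6dd-c77sv with their respective failure counts.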
Jan 29 10:57:47.232201 kubelet[2409]: I0129 10:57:47.232144 2409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890" Jan 29 10:57:47.234395 containerd[1937]: time="2025-01-29T10:57:47.234333893Z" level=info msg="StopPodSandbox for \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\"" Jan 29 10:57:47.236242 containerd[1937]: time="2025-01-29T10:57:47.236178413Z" level=info msg="Ensure that sandbox 1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890 in task-service has been cleanup successfully" Jan 29 10:57:47.239276 containerd[1937]: time="2025-01-29T10:57:47.239152337Z" level=info msg="TearDown network for sandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\" successfully" Jan 29 10:57:47.239276 containerd[1937]: time="2025-01-29T10:57:47.239203529Z" level=info msg="StopPodSandbox for \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\" returns successfully" Jan 29 10:57:47.242743 containerd[1937]: time="2025-01-29T10:57:47.242057153Z" level=info msg="StopPodSandbox for \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\"" Jan 29 10:57:47.242743 containerd[1937]: time="2025-01-29T10:57:47.242221625Z" level=info msg="TearDown network for sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" successfully" Jan 29 10:57:47.242743 containerd[1937]: time="2025-01-29T10:57:47.242243777Z" level=info msg="StopPodSandbox for \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" returns successfully" Jan 29 10:57:47.240358 systemd[1]: run-netns-cni\x2da386c8d6\x2da238\x2d4eba\x2d2c89\x2d23cd7a36be05.mount: Deactivated successfully. Jan 29 10:57:47.244506 containerd[1937]: time="2025-01-29T10:57:47.244159217Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\"" Jan 29 10:57:47.245986 kubelet[2409]: I0129 10:57:47.245358 2409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa" Jan 29 10:57:47.246572 containerd[1937]: time="2025-01-29T10:57:47.246317825Z" level=info msg="TearDown network for sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" successfully" Jan 29 10:57:47.246572 containerd[1937]: time="2025-01-29T10:57:47.246539969Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" returns successfully" Jan 29 10:57:47.248071 containerd[1937]: time="2025-01-29T10:57:47.248012957Z" level=info msg="StopPodSandbox for \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\"" Jan 29 10:57:47.248317 containerd[1937]: time="2025-01-29T10:57:47.248189465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:3,}" Jan 29 10:57:47.248767 containerd[1937]: time="2025-01-29T10:57:47.248722145Z" level=info msg="Ensure that sandbox 48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa in task-service has been cleanup successfully" Jan 29 10:57:47.250308 containerd[1937]: time="2025-01-29T10:57:47.250225253Z" level=info msg="TearDown network for sandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\" successfully" Jan 29 10:57:47.250308 containerd[1937]: time="2025-01-29T10:57:47.250275353Z" level=info msg="StopPodSandbox for 
\"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\" returns successfully" Jan 29 10:57:47.254183 systemd[1]: run-netns-cni\x2d59ebe523\x2d17ec\x2d7cf1\x2d9054\x2d22a3e2588263.mount: Deactivated successfully. Jan 29 10:57:47.257243 containerd[1937]: time="2025-01-29T10:57:47.256947557Z" level=info msg="StopPodSandbox for \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\"" Jan 29 10:57:47.258245 containerd[1937]: time="2025-01-29T10:57:47.257758457Z" level=info msg="TearDown network for sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" successfully" Jan 29 10:57:47.258245 containerd[1937]: time="2025-01-29T10:57:47.257795009Z" level=info msg="StopPodSandbox for \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" returns successfully" Jan 29 10:57:47.258618 containerd[1937]: time="2025-01-29T10:57:47.258468977Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\"" Jan 29 10:57:47.258728 containerd[1937]: time="2025-01-29T10:57:47.258685601Z" level=info msg="TearDown network for sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" successfully" Jan 29 10:57:47.258728 containerd[1937]: time="2025-01-29T10:57:47.258710729Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" returns successfully" Jan 29 10:57:47.259815 containerd[1937]: time="2025-01-29T10:57:47.259378997Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\"" Jan 29 10:57:47.259815 containerd[1937]: time="2025-01-29T10:57:47.259549745Z" level=info msg="TearDown network for sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" successfully" Jan 29 10:57:47.259815 containerd[1937]: time="2025-01-29T10:57:47.259574513Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" returns successfully" Jan 29 10:57:47.260796 containerd[1937]: time="2025-01-29T10:57:47.260737073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:4,}" Jan 29 10:57:47.511175 containerd[1937]: time="2025-01-29T10:57:47.511025994Z" level=error msg="Failed to destroy network for sandbox \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:47.513761 containerd[1937]: time="2025-01-29T10:57:47.513412074Z" level=error msg="encountered an error cleaning up failed sandbox \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:47.513761 containerd[1937]: time="2025-01-29T10:57:47.513539850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:47.514052 kubelet[2409]: E0129 10:57:47.513989 2409 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:47.514134 kubelet[2409]: E0129 10:57:47.514074 2409 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:47.514134 kubelet[2409]: E0129 10:57:47.514109 2409 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:47.514252 kubelet[2409]: E0129 10:57:47.514173 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-c77sv_default(7ad6d355-1764-4b70-9308-9d135bac54fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-c77sv_default(7ad6d355-1764-4b70-9308-9d135bac54fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-c77sv" podUID="7ad6d355-1764-4b70-9308-9d135bac54fb" Jan 29 10:57:47.526214 containerd[1937]: time="2025-01-29T10:57:47.525840798Z" level=error msg="Failed to destroy network for sandbox \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:47.527537 containerd[1937]: time="2025-01-29T10:57:47.527257758Z" level=error msg="encountered an error cleaning up failed sandbox \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:47.527537 containerd[1937]: time="2025-01-29T10:57:47.527356338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:47.528366 kubelet[2409]: E0129 10:57:47.527686 2409 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:47.528366 kubelet[2409]: E0129 10:57:47.527773 2409 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:47.528366 kubelet[2409]: E0129 10:57:47.527813 2409 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:47.528606 kubelet[2409]: E0129 10:57:47.527890 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2rxwp" podUID="d93af062-ea94-445a-8eee-553c377a0330" Jan 29 10:57:47.992531 kubelet[2409]: E0129 10:57:47.992298 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:48.186762 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce-shm.mount: Deactivated successfully. Jan 29 10:57:48.186953 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73-shm.mount: Deactivated successfully. 
Jan 29 10:57:48.258324 kubelet[2409]: I0129 10:57:48.257557 2409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73" Jan 29 10:57:48.260914 containerd[1937]: time="2025-01-29T10:57:48.260598294Z" level=info msg="StopPodSandbox for \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\"" Jan 29 10:57:48.261051 containerd[1937]: time="2025-01-29T10:57:48.260926674Z" level=info msg="Ensure that sandbox 0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73 in task-service has been cleanup successfully" Jan 29 10:57:48.267148 containerd[1937]: time="2025-01-29T10:57:48.266618226Z" level=info msg="TearDown network for sandbox \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\" successfully" Jan 29 10:57:48.267148 containerd[1937]: time="2025-01-29T10:57:48.266674314Z" level=info msg="StopPodSandbox for \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\" returns successfully" Jan 29 10:57:48.269549 containerd[1937]: time="2025-01-29T10:57:48.267108918Z" level=info msg="StopPodSandbox for \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\"" Jan 29 10:57:48.267699 systemd[1]: run-netns-cni\x2d793b1bc9\x2dccf4\x2deb3a\x2d561e\x2dbe5819d1476f.mount: Deactivated successfully. Jan 29 10:57:48.272193 containerd[1937]: time="2025-01-29T10:57:48.271323066Z" level=info msg="TearDown network for sandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\" successfully" Jan 29 10:57:48.272193 containerd[1937]: time="2025-01-29T10:57:48.271372602Z" level=info msg="StopPodSandbox for \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\" returns successfully" Jan 29 10:57:48.274934 containerd[1937]: time="2025-01-29T10:57:48.272996226Z" level=info msg="StopPodSandbox for \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\"" Jan 29 10:57:48.274934 containerd[1937]: time="2025-01-29T10:57:48.274015374Z" level=info msg="TearDown network for sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" successfully" Jan 29 10:57:48.274934 containerd[1937]: time="2025-01-29T10:57:48.274053126Z" level=info msg="StopPodSandbox for \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" returns successfully" Jan 29 10:57:48.276010 containerd[1937]: time="2025-01-29T10:57:48.275572650Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\"" Jan 29 10:57:48.276010 containerd[1937]: time="2025-01-29T10:57:48.275731374Z" level=info msg="TearDown network for sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" successfully" Jan 29 10:57:48.276010 containerd[1937]: time="2025-01-29T10:57:48.275755782Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" returns successfully" Jan 29 10:57:48.278246 containerd[1937]: time="2025-01-29T10:57:48.277851966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:4,}" Jan 29 10:57:48.279901 kubelet[2409]: I0129 10:57:48.279425 2409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce" Jan 29 10:57:48.281178 containerd[1937]: time="2025-01-29T10:57:48.281084526Z" level=info msg="StopPodSandbox for 
\"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\"" Jan 29 10:57:48.281469 containerd[1937]: time="2025-01-29T10:57:48.281418786Z" level=info msg="Ensure that sandbox fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce in task-service has been cleanup successfully" Jan 29 10:57:48.288289 containerd[1937]: time="2025-01-29T10:57:48.285087606Z" level=info msg="TearDown network for sandbox \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\" successfully" Jan 29 10:57:48.288289 containerd[1937]: time="2025-01-29T10:57:48.285171270Z" level=info msg="StopPodSandbox for \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\" returns successfully" Jan 29 10:57:48.288289 containerd[1937]: time="2025-01-29T10:57:48.286853358Z" level=info msg="StopPodSandbox for \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\"" Jan 29 10:57:48.288289 containerd[1937]: time="2025-01-29T10:57:48.287073642Z" level=info msg="TearDown network for sandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\" successfully" Jan 29 10:57:48.288289 containerd[1937]: time="2025-01-29T10:57:48.287430498Z" level=info msg="StopPodSandbox for \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\" returns successfully" Jan 29 10:57:48.286541 systemd[1]: run-netns-cni\x2d84b916f3\x2da86f\x2d8709\x2d72a5\x2db8bc5340cba4.mount: Deactivated successfully. Jan 29 10:57:48.291686 containerd[1937]: time="2025-01-29T10:57:48.291436674Z" level=info msg="StopPodSandbox for \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\"" Jan 29 10:57:48.292379 containerd[1937]: time="2025-01-29T10:57:48.292285842Z" level=info msg="TearDown network for sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" successfully" Jan 29 10:57:48.292379 containerd[1937]: time="2025-01-29T10:57:48.292325586Z" level=info msg="StopPodSandbox for \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" returns successfully" Jan 29 10:57:48.296269 containerd[1937]: time="2025-01-29T10:57:48.295962294Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\"" Jan 29 10:57:48.296269 containerd[1937]: time="2025-01-29T10:57:48.296134098Z" level=info msg="TearDown network for sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" successfully" Jan 29 10:57:48.296269 containerd[1937]: time="2025-01-29T10:57:48.296157138Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" returns successfully" Jan 29 10:57:48.297871 containerd[1937]: time="2025-01-29T10:57:48.297625446Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\"" Jan 29 10:57:48.297871 containerd[1937]: time="2025-01-29T10:57:48.297797034Z" level=info msg="TearDown network for sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" successfully" Jan 29 10:57:48.297871 containerd[1937]: time="2025-01-29T10:57:48.297824130Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" returns successfully" Jan 29 10:57:48.300440 containerd[1937]: time="2025-01-29T10:57:48.300387594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:5,}" Jan 29 10:57:48.568334 containerd[1937]: time="2025-01-29T10:57:48.567702367Z" 
level=error msg="Failed to destroy network for sandbox \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:48.570949 containerd[1937]: time="2025-01-29T10:57:48.570679555Z" level=error msg="encountered an error cleaning up failed sandbox \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:48.570949 containerd[1937]: time="2025-01-29T10:57:48.570931507Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:48.573545 kubelet[2409]: E0129 10:57:48.571923 2409 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:48.573545 kubelet[2409]: E0129 10:57:48.572010 2409 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:48.573545 kubelet[2409]: E0129 10:57:48.572048 2409 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:48.573818 kubelet[2409]: E0129 10:57:48.572116 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-c77sv_default(7ad6d355-1764-4b70-9308-9d135bac54fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-c77sv_default(7ad6d355-1764-4b70-9308-9d135bac54fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-c77sv" 
podUID="7ad6d355-1764-4b70-9308-9d135bac54fb" Jan 29 10:57:48.587830 containerd[1937]: time="2025-01-29T10:57:48.587581411Z" level=error msg="Failed to destroy network for sandbox \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:48.589435 containerd[1937]: time="2025-01-29T10:57:48.589319323Z" level=error msg="encountered an error cleaning up failed sandbox \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:48.589666 containerd[1937]: time="2025-01-29T10:57:48.589535383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:48.590759 kubelet[2409]: E0129 10:57:48.590686 2409 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:48.590903 kubelet[2409]: E0129 10:57:48.590775 2409 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:48.590903 kubelet[2409]: E0129 10:57:48.590817 2409 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:48.590903 kubelet[2409]: E0129 10:57:48.590883 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-2rxwp" podUID="d93af062-ea94-445a-8eee-553c377a0330" Jan 29 10:57:48.992610 kubelet[2409]: E0129 10:57:48.992521 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:49.187510 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb-shm.mount: Deactivated successfully. Jan 29 10:57:49.188553 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2-shm.mount: Deactivated successfully. Jan 29 10:57:49.290886 kubelet[2409]: I0129 10:57:49.290729 2409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2" Jan 29 10:57:49.296310 containerd[1937]: time="2025-01-29T10:57:49.292307227Z" level=info msg="StopPodSandbox for \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\"" Jan 29 10:57:49.296310 containerd[1937]: time="2025-01-29T10:57:49.292636147Z" level=info msg="Ensure that sandbox 26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2 in task-service has been cleanup successfully" Jan 29 10:57:49.296310 containerd[1937]: time="2025-01-29T10:57:49.292929031Z" level=info msg="TearDown network for sandbox \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\" successfully" Jan 29 10:57:49.296310 containerd[1937]: time="2025-01-29T10:57:49.292956655Z" level=info msg="StopPodSandbox for \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\" returns successfully" Jan 29 10:57:49.295682 systemd[1]: run-netns-cni\x2d6b876ca5\x2db7cf\x2d6b6c\x2d0443\x2d03067f6666f9.mount: Deactivated successfully. 
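Every failed attempt above stops at the same check: the Calico CNI plugin cannot stat /var/lib/calico/nodename, so the sandbox add fails, kubelet tears the sandbox down again, and retries with the Attempt counter incremented. That file is written by the calico/node container once it is running, which is exactly what the error text suggests checking. Below is a minimal sketch of the check implied by the error — illustrative Go, not Calico's actual source; only the file path and the error wording are taken from the log, the constant and function names are invented.

```go
// nodename_check.go - sketch of the lookup the CNI plugin needs before it can
// program networking for a pod; names here are invented for illustration.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // written by calico/node at startup

func nodenameFromFile() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		if os.IsNotExist(err) {
			// This is the condition behind every "failed (add)" / "failed (delete)" line above.
			return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
		}
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodenameFromFile()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
```

Until calico-node (still being pulled at this point in the log) starts and writes that file, every CNI ADD and DEL on this node fails the same way.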
Jan 29 10:57:49.299356 containerd[1937]: time="2025-01-29T10:57:49.298351507Z" level=info msg="StopPodSandbox for \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\"" Jan 29 10:57:49.299356 containerd[1937]: time="2025-01-29T10:57:49.298568587Z" level=info msg="TearDown network for sandbox \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\" successfully" Jan 29 10:57:49.299356 containerd[1937]: time="2025-01-29T10:57:49.298593379Z" level=info msg="StopPodSandbox for \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\" returns successfully" Jan 29 10:57:49.300621 containerd[1937]: time="2025-01-29T10:57:49.300392035Z" level=info msg="StopPodSandbox for \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\"" Jan 29 10:57:49.300621 containerd[1937]: time="2025-01-29T10:57:49.300588547Z" level=info msg="TearDown network for sandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\" successfully" Jan 29 10:57:49.300621 containerd[1937]: time="2025-01-29T10:57:49.300615931Z" level=info msg="StopPodSandbox for \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\" returns successfully" Jan 29 10:57:49.301420 containerd[1937]: time="2025-01-29T10:57:49.301228123Z" level=info msg="StopPodSandbox for \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\"" Jan 29 10:57:49.301420 containerd[1937]: time="2025-01-29T10:57:49.301403995Z" level=info msg="TearDown network for sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" successfully" Jan 29 10:57:49.301787 containerd[1937]: time="2025-01-29T10:57:49.301427263Z" level=info msg="StopPodSandbox for \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" returns successfully" Jan 29 10:57:49.303442 containerd[1937]: time="2025-01-29T10:57:49.303063019Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\"" Jan 29 10:57:49.303442 containerd[1937]: time="2025-01-29T10:57:49.303206527Z" level=info msg="TearDown network for sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" successfully" Jan 29 10:57:49.303442 containerd[1937]: time="2025-01-29T10:57:49.303229363Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" returns successfully" Jan 29 10:57:49.304949 containerd[1937]: time="2025-01-29T10:57:49.304730011Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\"" Jan 29 10:57:49.304949 containerd[1937]: time="2025-01-29T10:57:49.304932475Z" level=info msg="TearDown network for sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" successfully" Jan 29 10:57:49.305102 containerd[1937]: time="2025-01-29T10:57:49.304958383Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" returns successfully" Jan 29 10:57:49.306243 containerd[1937]: time="2025-01-29T10:57:49.306052963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:6,}" Jan 29 10:57:49.307305 kubelet[2409]: I0129 10:57:49.307262 2409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb" Jan 29 10:57:49.308587 containerd[1937]: time="2025-01-29T10:57:49.308125183Z" level=info msg="StopPodSandbox for 
\"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\"" Jan 29 10:57:49.308587 containerd[1937]: time="2025-01-29T10:57:49.308447515Z" level=info msg="Ensure that sandbox f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb in task-service has been cleanup successfully" Jan 29 10:57:49.309131 containerd[1937]: time="2025-01-29T10:57:49.308769895Z" level=info msg="TearDown network for sandbox \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\" successfully" Jan 29 10:57:49.309131 containerd[1937]: time="2025-01-29T10:57:49.308796835Z" level=info msg="StopPodSandbox for \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\" returns successfully" Jan 29 10:57:49.312177 systemd[1]: run-netns-cni\x2db1a9e0f6\x2d6911\x2d88ce\x2dedd8\x2d7ef23d65f5b5.mount: Deactivated successfully. Jan 29 10:57:49.315618 containerd[1937]: time="2025-01-29T10:57:49.312954307Z" level=info msg="StopPodSandbox for \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\"" Jan 29 10:57:49.315618 containerd[1937]: time="2025-01-29T10:57:49.315242851Z" level=info msg="TearDown network for sandbox \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\" successfully" Jan 29 10:57:49.315618 containerd[1937]: time="2025-01-29T10:57:49.315282115Z" level=info msg="StopPodSandbox for \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\" returns successfully" Jan 29 10:57:49.318021 containerd[1937]: time="2025-01-29T10:57:49.317768311Z" level=info msg="StopPodSandbox for \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\"" Jan 29 10:57:49.318235 containerd[1937]: time="2025-01-29T10:57:49.318195811Z" level=info msg="TearDown network for sandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\" successfully" Jan 29 10:57:49.318293 containerd[1937]: time="2025-01-29T10:57:49.318235387Z" level=info msg="StopPodSandbox for \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\" returns successfully" Jan 29 10:57:49.319212 containerd[1937]: time="2025-01-29T10:57:49.319111483Z" level=info msg="StopPodSandbox for \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\"" Jan 29 10:57:49.319326 containerd[1937]: time="2025-01-29T10:57:49.319280143Z" level=info msg="TearDown network for sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" successfully" Jan 29 10:57:49.319326 containerd[1937]: time="2025-01-29T10:57:49.319303291Z" level=info msg="StopPodSandbox for \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" returns successfully" Jan 29 10:57:49.321030 containerd[1937]: time="2025-01-29T10:57:49.320982379Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\"" Jan 29 10:57:49.321596 containerd[1937]: time="2025-01-29T10:57:49.321554047Z" level=info msg="TearDown network for sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" successfully" Jan 29 10:57:49.321706 containerd[1937]: time="2025-01-29T10:57:49.321592555Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" returns successfully" Jan 29 10:57:49.323859 containerd[1937]: time="2025-01-29T10:57:49.323789815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:5,}" Jan 29 10:57:49.522518 containerd[1937]: time="2025-01-29T10:57:49.522400244Z" 
level=error msg="Failed to destroy network for sandbox \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:49.524070 containerd[1937]: time="2025-01-29T10:57:49.523077848Z" level=error msg="encountered an error cleaning up failed sandbox \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:49.524070 containerd[1937]: time="2025-01-29T10:57:49.523194248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:49.524280 kubelet[2409]: E0129 10:57:49.523473 2409 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:49.524280 kubelet[2409]: E0129 10:57:49.523684 2409 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:49.524280 kubelet[2409]: E0129 10:57:49.523770 2409 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2rxwp" Jan 29 10:57:49.524464 kubelet[2409]: E0129 10:57:49.523880 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2rxwp_calico-system(d93af062-ea94-445a-8eee-553c377a0330)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2rxwp" podUID="d93af062-ea94-445a-8eee-553c377a0330" Jan 29 10:57:49.543524 
containerd[1937]: time="2025-01-29T10:57:49.541642100Z" level=error msg="Failed to destroy network for sandbox \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:49.544218 containerd[1937]: time="2025-01-29T10:57:49.544106396Z" level=error msg="encountered an error cleaning up failed sandbox \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:49.544323 containerd[1937]: time="2025-01-29T10:57:49.544209596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:49.544958 kubelet[2409]: E0129 10:57:49.544551 2409 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:57:49.544958 kubelet[2409]: E0129 10:57:49.544624 2409 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:49.544958 kubelet[2409]: E0129 10:57:49.544657 2409 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-c77sv" Jan 29 10:57:49.545251 kubelet[2409]: E0129 10:57:49.544748 2409 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-c77sv_default(7ad6d355-1764-4b70-9308-9d135bac54fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-c77sv_default(7ad6d355-1764-4b70-9308-9d135bac54fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-85f456d6dd-c77sv" podUID="7ad6d355-1764-4b70-9308-9d135bac54fb" Jan 29 10:57:49.550284 containerd[1937]: time="2025-01-29T10:57:49.550210508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:49.551603 containerd[1937]: time="2025-01-29T10:57:49.551517164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 10:57:49.553404 containerd[1937]: time="2025-01-29T10:57:49.553329440Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:49.557621 containerd[1937]: time="2025-01-29T10:57:49.557550956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:49.559170 containerd[1937]: time="2025-01-29T10:57:49.558977588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 7.383123193s" Jan 29 10:57:49.559170 containerd[1937]: time="2025-01-29T10:57:49.559032344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 10:57:49.572760 containerd[1937]: time="2025-01-29T10:57:49.572691176Z" level=info msg="CreateContainer within sandbox \"bf3e0bca9c4b594a91576d1ca934705d5f2d0f5959fa7ad168979a6de27e9ea7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 10:57:49.595643 containerd[1937]: time="2025-01-29T10:57:49.595561160Z" level=info msg="CreateContainer within sandbox \"bf3e0bca9c4b594a91576d1ca934705d5f2d0f5959fa7ad168979a6de27e9ea7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"059f96b3cc4d0e846c65bc86f65922ad9d24d90aa58cacc9b4d511c63089a842\"" Jan 29 10:57:49.596377 containerd[1937]: time="2025-01-29T10:57:49.596298512Z" level=info msg="StartContainer for \"059f96b3cc4d0e846c65bc86f65922ad9d24d90aa58cacc9b4d511c63089a842\"" Jan 29 10:57:49.638795 systemd[1]: Started cri-containerd-059f96b3cc4d0e846c65bc86f65922ad9d24d90aa58cacc9b4d511c63089a842.scope - libcontainer container 059f96b3cc4d0e846c65bc86f65922ad9d24d90aa58cacc9b4d511c63089a842. Jan 29 10:57:49.700988 containerd[1937]: time="2025-01-29T10:57:49.700903941Z" level=info msg="StartContainer for \"059f96b3cc4d0e846c65bc86f65922ad9d24d90aa58cacc9b4d511c63089a842\" returns successfully" Jan 29 10:57:49.823214 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 10:57:49.823370 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
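Here the node's own calico-node container finally comes up: the ghcr.io/flatcar/calico/node:v3.29.1 image (about 137 MB) finishes pulling after roughly 7.4 s, containerd creates the calico-node container inside its sandbox, StartContainer returns successfully, and the kernel loads the WireGuard module Calico can use for encrypted traffic. The log records this through the CRI flow (PullImage, CreateContainer, StartContainer driven by kubelet). Purely as an analogy, the same pull → create → start sequence looks like the sketch below when written against containerd's public Go client; the image reference is taken from the log, the container ID and snapshot name are invented, and kubelet itself talks to containerd over the CRI gRPC API rather than this client.

```go
// pull_create_start.go - rough illustration only, not the code path in this log.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer
	container, err := client.NewContainer(ctx, "calico-node-example",
		containerd.WithNewSnapshot("calico-node-example-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```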
Jan 29 10:57:49.980110 kubelet[2409]: E0129 10:57:49.980016 2409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:49.993536 kubelet[2409]: E0129 10:57:49.993372 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:50.197734 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0-shm.mount: Deactivated successfully. Jan 29 10:57:50.198110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1157892712.mount: Deactivated successfully. Jan 29 10:57:50.315846 kubelet[2409]: I0129 10:57:50.315695 2409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06" Jan 29 10:57:50.318364 containerd[1937]: time="2025-01-29T10:57:50.317842268Z" level=info msg="StopPodSandbox for \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\"" Jan 29 10:57:50.319196 containerd[1937]: time="2025-01-29T10:57:50.318859916Z" level=info msg="Ensure that sandbox abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06 in task-service has been cleanup successfully" Jan 29 10:57:50.321531 containerd[1937]: time="2025-01-29T10:57:50.319660196Z" level=info msg="TearDown network for sandbox \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\" successfully" Jan 29 10:57:50.321531 containerd[1937]: time="2025-01-29T10:57:50.319697912Z" level=info msg="StopPodSandbox for \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\" returns successfully" Jan 29 10:57:50.323599 containerd[1937]: time="2025-01-29T10:57:50.322828136Z" level=info msg="StopPodSandbox for \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\"" Jan 29 10:57:50.323599 containerd[1937]: time="2025-01-29T10:57:50.323021864Z" level=info msg="TearDown network for sandbox \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\" successfully" Jan 29 10:57:50.323599 containerd[1937]: time="2025-01-29T10:57:50.323050448Z" level=info msg="StopPodSandbox for \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\" returns successfully" Jan 29 10:57:50.324301 systemd[1]: run-netns-cni\x2d32a09d06\x2d74f0\x2d640b\x2d333c\x2d56542818b5c3.mount: Deactivated successfully. 
Jan 29 10:57:50.327978 containerd[1937]: time="2025-01-29T10:57:50.325596704Z" level=info msg="StopPodSandbox for \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\"" Jan 29 10:57:50.327978 containerd[1937]: time="2025-01-29T10:57:50.325754060Z" level=info msg="TearDown network for sandbox \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\" successfully" Jan 29 10:57:50.327978 containerd[1937]: time="2025-01-29T10:57:50.325776500Z" level=info msg="StopPodSandbox for \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\" returns successfully" Jan 29 10:57:50.328692 containerd[1937]: time="2025-01-29T10:57:50.328248404Z" level=info msg="StopPodSandbox for \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\"" Jan 29 10:57:50.329052 containerd[1937]: time="2025-01-29T10:57:50.328472888Z" level=info msg="TearDown network for sandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\" successfully" Jan 29 10:57:50.329052 containerd[1937]: time="2025-01-29T10:57:50.328810232Z" level=info msg="StopPodSandbox for \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\" returns successfully" Jan 29 10:57:50.330320 containerd[1937]: time="2025-01-29T10:57:50.330222020Z" level=info msg="StopPodSandbox for \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\"" Jan 29 10:57:50.331052 containerd[1937]: time="2025-01-29T10:57:50.330891728Z" level=info msg="TearDown network for sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" successfully" Jan 29 10:57:50.331052 containerd[1937]: time="2025-01-29T10:57:50.330938768Z" level=info msg="StopPodSandbox for \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" returns successfully" Jan 29 10:57:50.332892 containerd[1937]: time="2025-01-29T10:57:50.332627192Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\"" Jan 29 10:57:50.333010 containerd[1937]: time="2025-01-29T10:57:50.332935880Z" level=info msg="TearDown network for sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" successfully" Jan 29 10:57:50.333010 containerd[1937]: time="2025-01-29T10:57:50.332962628Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" returns successfully" Jan 29 10:57:50.335967 containerd[1937]: time="2025-01-29T10:57:50.335415440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:6,}" Jan 29 10:57:50.345089 kubelet[2409]: I0129 10:57:50.345052 2409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0" Jan 29 10:57:50.346964 containerd[1937]: time="2025-01-29T10:57:50.346694276Z" level=info msg="StopPodSandbox for \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\"" Jan 29 10:57:50.350598 containerd[1937]: time="2025-01-29T10:57:50.347555660Z" level=info msg="Ensure that sandbox 3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0 in task-service has been cleanup successfully" Jan 29 10:57:50.350598 containerd[1937]: time="2025-01-29T10:57:50.348087548Z" level=info msg="TearDown network for sandbox \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\" successfully" Jan 29 10:57:50.350598 containerd[1937]: time="2025-01-29T10:57:50.348147440Z" level=info 
msg="StopPodSandbox for \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\" returns successfully" Jan 29 10:57:50.351851 containerd[1937]: time="2025-01-29T10:57:50.351218456Z" level=info msg="StopPodSandbox for \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\"" Jan 29 10:57:50.354288 systemd[1]: run-netns-cni\x2d3a71340e\x2d86c3\x2d0671\x2d489c\x2dfe5c4063be8e.mount: Deactivated successfully. Jan 29 10:57:50.355451 containerd[1937]: time="2025-01-29T10:57:50.354374684Z" level=info msg="TearDown network for sandbox \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\" successfully" Jan 29 10:57:50.355451 containerd[1937]: time="2025-01-29T10:57:50.354415016Z" level=info msg="StopPodSandbox for \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\" returns successfully" Jan 29 10:57:50.357648 containerd[1937]: time="2025-01-29T10:57:50.357418928Z" level=info msg="StopPodSandbox for \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\"" Jan 29 10:57:50.357648 containerd[1937]: time="2025-01-29T10:57:50.357617612Z" level=info msg="TearDown network for sandbox \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\" successfully" Jan 29 10:57:50.357648 containerd[1937]: time="2025-01-29T10:57:50.357645428Z" level=info msg="StopPodSandbox for \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\" returns successfully" Jan 29 10:57:50.358913 containerd[1937]: time="2025-01-29T10:57:50.358775072Z" level=info msg="StopPodSandbox for \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\"" Jan 29 10:57:50.360019 containerd[1937]: time="2025-01-29T10:57:50.359774924Z" level=info msg="TearDown network for sandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\" successfully" Jan 29 10:57:50.360019 containerd[1937]: time="2025-01-29T10:57:50.359819492Z" level=info msg="StopPodSandbox for \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\" returns successfully" Jan 29 10:57:50.361390 containerd[1937]: time="2025-01-29T10:57:50.361334444Z" level=info msg="StopPodSandbox for \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\"" Jan 29 10:57:50.361581 containerd[1937]: time="2025-01-29T10:57:50.361545764Z" level=info msg="TearDown network for sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" successfully" Jan 29 10:57:50.361641 containerd[1937]: time="2025-01-29T10:57:50.361582844Z" level=info msg="StopPodSandbox for \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" returns successfully" Jan 29 10:57:50.362555 containerd[1937]: time="2025-01-29T10:57:50.362117948Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\"" Jan 29 10:57:50.362555 containerd[1937]: time="2025-01-29T10:57:50.362266760Z" level=info msg="TearDown network for sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" successfully" Jan 29 10:57:50.362555 containerd[1937]: time="2025-01-29T10:57:50.362290940Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" returns successfully" Jan 29 10:57:50.367547 containerd[1937]: time="2025-01-29T10:57:50.366069224Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\"" Jan 29 10:57:50.367899 containerd[1937]: time="2025-01-29T10:57:50.367853492Z" level=info msg="TearDown network for sandbox 
\"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" successfully" Jan 29 10:57:50.377303 containerd[1937]: time="2025-01-29T10:57:50.371585288Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" returns successfully" Jan 29 10:57:50.381185 containerd[1937]: time="2025-01-29T10:57:50.380612180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:7,}" Jan 29 10:57:50.386354 kubelet[2409]: I0129 10:57:50.386226 2409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5qkw9" podStartSLOduration=3.201442455 podStartE2EDuration="20.38620268s" podCreationTimestamp="2025-01-29 10:57:30 +0000 UTC" firstStartedPulling="2025-01-29 10:57:32.375911139 +0000 UTC m=+3.612962659" lastFinishedPulling="2025-01-29 10:57:49.560671352 +0000 UTC m=+20.797722884" observedRunningTime="2025-01-29 10:57:50.385594556 +0000 UTC m=+21.622646100" watchObservedRunningTime="2025-01-29 10:57:50.38620268 +0000 UTC m=+21.623254224" Jan 29 10:57:50.688010 (udev-worker)[3345]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:57:50.689977 systemd-networkd[1790]: cali8b7d8d321b7: Link UP Jan 29 10:57:50.690536 systemd-networkd[1790]: cali8b7d8d321b7: Gained carrier Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.444 [INFO][3376] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.485 [INFO][3376] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-eth0 nginx-deployment-85f456d6dd- default 7ad6d355-1764-4b70-9308-9d135bac54fb 1024 0 2025-01-29 10:57:43 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.18.182 nginx-deployment-85f456d6dd-c77sv eth0 default [] [] [kns.default ksa.default.default] cali8b7d8d321b7 [] []}} ContainerID="f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-c77sv" WorkloadEndpoint="172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-" Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.485 [INFO][3376] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-c77sv" WorkloadEndpoint="172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-eth0" Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.552 [INFO][3418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" HandleID="k8s-pod-network.f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" Workload="172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-eth0" Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.580 [INFO][3418] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" HandleID="k8s-pod-network.f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" Workload="172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0x400004d0f0), Attrs:map[string]string{"namespace":"default", "node":"172.31.18.182", "pod":"nginx-deployment-85f456d6dd-c77sv", "timestamp":"2025-01-29 10:57:50.552352677 +0000 UTC"}, Hostname:"172.31.18.182", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.580 [INFO][3418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.580 [INFO][3418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.580 [INFO][3418] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.18.182' Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.584 [INFO][3418] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" host="172.31.18.182" Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.595 [INFO][3418] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.18.182" Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.627 [INFO][3418] ipam/ipam.go 489: Trying affinity for 192.168.90.192/26 host="172.31.18.182" Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.634 [INFO][3418] ipam/ipam.go 155: Attempting to load block cidr=192.168.90.192/26 host="172.31.18.182" Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.642 [INFO][3418] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="172.31.18.182" Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.642 [INFO][3418] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" host="172.31.18.182" Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.645 [INFO][3418] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.656 [INFO][3418] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" host="172.31.18.182" Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.672 [INFO][3418] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.90.193/26] block=192.168.90.192/26 handle="k8s-pod-network.f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" host="172.31.18.182" Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.673 [INFO][3418] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.90.193/26] handle="k8s-pod-network.f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" host="172.31.18.182" Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.673 [INFO][3418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 10:57:50.715774 containerd[1937]: 2025-01-29 10:57:50.673 [INFO][3418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.193/26] IPv6=[] ContainerID="f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" HandleID="k8s-pod-network.f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" Workload="172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-eth0" Jan 29 10:57:50.719018 containerd[1937]: 2025-01-29 10:57:50.678 [INFO][3376] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-c77sv" WorkloadEndpoint="172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"7ad6d355-1764-4b70-9308-9d135bac54fb", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.182", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-c77sv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali8b7d8d321b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:57:50.719018 containerd[1937]: 2025-01-29 10:57:50.678 [INFO][3376] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.90.193/32] ContainerID="f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-c77sv" WorkloadEndpoint="172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-eth0" Jan 29 10:57:50.719018 containerd[1937]: 2025-01-29 10:57:50.678 [INFO][3376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b7d8d321b7 ContainerID="f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-c77sv" WorkloadEndpoint="172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-eth0" Jan 29 10:57:50.719018 containerd[1937]: 2025-01-29 10:57:50.691 [INFO][3376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-c77sv" WorkloadEndpoint="172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-eth0" Jan 29 10:57:50.719018 containerd[1937]: 2025-01-29 10:57:50.694 [INFO][3376] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-c77sv" 
WorkloadEndpoint="172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"7ad6d355-1764-4b70-9308-9d135bac54fb", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 57, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.182", ContainerID:"f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed", Pod:"nginx-deployment-85f456d6dd-c77sv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali8b7d8d321b7", MAC:"de:85:f1:c4:93:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:57:50.719018 containerd[1937]: 2025-01-29 10:57:50.713 [INFO][3376] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed" Namespace="default" Pod="nginx-deployment-85f456d6dd-c77sv" WorkloadEndpoint="172.31.18.182-k8s-nginx--deployment--85f456d6dd--c77sv-eth0" Jan 29 10:57:50.757087 containerd[1937]: time="2025-01-29T10:57:50.756324550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:57:50.757087 containerd[1937]: time="2025-01-29T10:57:50.756428110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:57:50.757087 containerd[1937]: time="2025-01-29T10:57:50.756452050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:57:50.757087 containerd[1937]: time="2025-01-29T10:57:50.756662338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:57:50.765239 (udev-worker)[3343]: Network interface NamePolicy= disabled on kernel command line. 
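The [INFO][3418] ipam/ lines above walk through Calico's block-affinity IPAM for the nginx pod: acquire the host-wide IPAM lock, look up the block affine to host 172.31.18.182 (192.168.90.192/26), load it, claim one address from it (192.168.90.193), create a handle named after the new container ID, write the block back, and release the lock. A compressed sketch of that flow follows, with invented names and an in-memory block standing in for the Calico datastore.

```go
// ipam_sketch.go - mirrors the logged sequence of IPAM steps; not Calico's API.
package main

import "fmt"

type block struct {
	cidr string
	free []string // addresses not yet claimed in this block
}

// assignFromAffineBlock: take the host-wide lock, use the block affine to this
// host, claim one address, record it under the given handle, release the lock.
func assignFromAffineBlock(host, handle string, affine *block) (string, error) {
	fmt.Println("About to acquire host-wide IPAM lock.") // serialises assignments on this host
	defer fmt.Println("Released host-wide IPAM lock.")

	fmt.Printf("Trying affinity for %s host=%q\n", affine.cidr, host)
	if len(affine.free) == 0 {
		return "", fmt.Errorf("no free addresses in affine block %s", affine.cidr)
	}
	ip := affine.free[0]
	affine.free = affine.free[1:]
	fmt.Printf("Writing block in order to claim IPs block=%s handle=%q\n", affine.cidr, handle)
	return ip, nil
}

func main() {
	b := &block{cidr: "192.168.90.192/26", free: []string{"192.168.90.193", "192.168.90.194"}}
	ip, err := assignFromAffineBlock("172.31.18.182", "k8s-pod-network.<container-id>", b)
	if err != nil {
		panic(err)
	}
	fmt.Println("Auto-assigned:", ip) // 192.168.90.193, as in the log
}
```

The second CNI ADD just below repeats the same steps for csi-node-driver-2rxwp and receives the next free address in the block, 192.168.90.194.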
Jan 29 10:57:50.765564 systemd-networkd[1790]: calide2da342552: Link UP Jan 29 10:57:50.767635 systemd-networkd[1790]: calide2da342552: Gained carrier Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.490 [INFO][3395] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.533 [INFO][3395] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.18.182-k8s-csi--node--driver--2rxwp-eth0 csi-node-driver- calico-system d93af062-ea94-445a-8eee-553c377a0330 835 0 2025-01-29 10:57:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.18.182 csi-node-driver-2rxwp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calide2da342552 [] []}} ContainerID="543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" Namespace="calico-system" Pod="csi-node-driver-2rxwp" WorkloadEndpoint="172.31.18.182-k8s-csi--node--driver--2rxwp-" Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.533 [INFO][3395] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" Namespace="calico-system" Pod="csi-node-driver-2rxwp" WorkloadEndpoint="172.31.18.182-k8s-csi--node--driver--2rxwp-eth0" Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.603 [INFO][3425] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" HandleID="k8s-pod-network.543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" Workload="172.31.18.182-k8s-csi--node--driver--2rxwp-eth0" Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.661 [INFO][3425] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" HandleID="k8s-pod-network.543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" Workload="172.31.18.182-k8s-csi--node--driver--2rxwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c910), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.18.182", "pod":"csi-node-driver-2rxwp", "timestamp":"2025-01-29 10:57:50.603619317 +0000 UTC"}, Hostname:"172.31.18.182", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.661 [INFO][3425] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.673 [INFO][3425] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.673 [INFO][3425] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.18.182' Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.676 [INFO][3425] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" host="172.31.18.182" Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.694 [INFO][3425] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.18.182" Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.706 [INFO][3425] ipam/ipam.go 489: Trying affinity for 192.168.90.192/26 host="172.31.18.182" Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.717 [INFO][3425] ipam/ipam.go 155: Attempting to load block cidr=192.168.90.192/26 host="172.31.18.182" Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.724 [INFO][3425] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="172.31.18.182" Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.724 [INFO][3425] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" host="172.31.18.182" Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.727 [INFO][3425] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.738 [INFO][3425] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" host="172.31.18.182" Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.754 [INFO][3425] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.90.194/26] block=192.168.90.192/26 handle="k8s-pod-network.543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" host="172.31.18.182" Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.755 [INFO][3425] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.90.194/26] handle="k8s-pod-network.543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" host="172.31.18.182" Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.755 [INFO][3425] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 10:57:50.801550 containerd[1937]: 2025-01-29 10:57:50.755 [INFO][3425] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.194/26] IPv6=[] ContainerID="543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" HandleID="k8s-pod-network.543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" Workload="172.31.18.182-k8s-csi--node--driver--2rxwp-eth0" Jan 29 10:57:50.805083 containerd[1937]: 2025-01-29 10:57:50.760 [INFO][3395] cni-plugin/k8s.go 386: Populated endpoint ContainerID="543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" Namespace="calico-system" Pod="csi-node-driver-2rxwp" WorkloadEndpoint="172.31.18.182-k8s-csi--node--driver--2rxwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.182-k8s-csi--node--driver--2rxwp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d93af062-ea94-445a-8eee-553c377a0330", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.182", ContainerID:"", Pod:"csi-node-driver-2rxwp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide2da342552", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:57:50.805083 containerd[1937]: 2025-01-29 10:57:50.760 [INFO][3395] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.90.194/32] ContainerID="543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" Namespace="calico-system" Pod="csi-node-driver-2rxwp" WorkloadEndpoint="172.31.18.182-k8s-csi--node--driver--2rxwp-eth0" Jan 29 10:57:50.805083 containerd[1937]: 2025-01-29 10:57:50.760 [INFO][3395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide2da342552 ContainerID="543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" Namespace="calico-system" Pod="csi-node-driver-2rxwp" WorkloadEndpoint="172.31.18.182-k8s-csi--node--driver--2rxwp-eth0" Jan 29 10:57:50.805083 containerd[1937]: 2025-01-29 10:57:50.769 [INFO][3395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" Namespace="calico-system" Pod="csi-node-driver-2rxwp" WorkloadEndpoint="172.31.18.182-k8s-csi--node--driver--2rxwp-eth0" Jan 29 10:57:50.805083 containerd[1937]: 2025-01-29 10:57:50.771 [INFO][3395] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" Namespace="calico-system" Pod="csi-node-driver-2rxwp" 
WorkloadEndpoint="172.31.18.182-k8s-csi--node--driver--2rxwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.182-k8s-csi--node--driver--2rxwp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d93af062-ea94-445a-8eee-553c377a0330", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 57, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.182", ContainerID:"543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e", Pod:"csi-node-driver-2rxwp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calide2da342552", MAC:"f6:f4:89:ff:73:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:57:50.805083 containerd[1937]: 2025-01-29 10:57:50.798 [INFO][3395] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e" Namespace="calico-system" Pod="csi-node-driver-2rxwp" WorkloadEndpoint="172.31.18.182-k8s-csi--node--driver--2rxwp-eth0" Jan 29 10:57:50.802860 systemd[1]: Started cri-containerd-f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed.scope - libcontainer container f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed. Jan 29 10:57:50.872956 containerd[1937]: time="2025-01-29T10:57:50.872713835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:57:50.873245 containerd[1937]: time="2025-01-29T10:57:50.873195791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:57:50.873404 containerd[1937]: time="2025-01-29T10:57:50.873240599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:57:50.873556 containerd[1937]: time="2025-01-29T10:57:50.873429923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:57:50.889760 containerd[1937]: time="2025-01-29T10:57:50.889706003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-c77sv,Uid:7ad6d355-1764-4b70-9308-9d135bac54fb,Namespace:default,Attempt:6,} returns sandbox id \"f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed\"" Jan 29 10:57:50.895724 containerd[1937]: time="2025-01-29T10:57:50.895505231Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 10:57:50.908810 systemd[1]: Started cri-containerd-543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e.scope - libcontainer container 543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e. Jan 29 10:57:50.955634 containerd[1937]: time="2025-01-29T10:57:50.954260387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2rxwp,Uid:d93af062-ea94-445a-8eee-553c377a0330,Namespace:calico-system,Attempt:7,} returns sandbox id \"543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e\"" Jan 29 10:57:50.994041 kubelet[2409]: E0129 10:57:50.993996 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:51.861538 kernel: bpftool[3671]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 10:57:51.907282 systemd-networkd[1790]: cali8b7d8d321b7: Gained IPv6LL Jan 29 10:57:51.995936 kubelet[2409]: E0129 10:57:51.995630 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:52.035475 systemd-networkd[1790]: calide2da342552: Gained IPv6LL Jan 29 10:57:52.270113 systemd-networkd[1790]: vxlan.calico: Link UP Jan 29 10:57:52.271775 systemd-networkd[1790]: vxlan.calico: Gained carrier Jan 29 10:57:52.319311 (udev-worker)[3342]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:57:52.996650 kubelet[2409]: E0129 10:57:52.996591 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:53.571679 systemd-networkd[1790]: vxlan.calico: Gained IPv6LL Jan 29 10:57:53.997320 kubelet[2409]: E0129 10:57:53.997233 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:54.796874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount90529796.mount: Deactivated successfully. 
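The Calico IPAM entries above show node 172.31.18.182 holding the affine block 192.168.90.192/26 and handing the csi-node-driver-2rxwp pod 192.168.90.194 as a /32. A minimal Python sketch (standard library only, not Calico code) that checks the reported address sits inside the reported block:

```python
# Minimal sketch: verify that the pod IP the Calico IPAM plugin reported above
# (192.168.90.194, installed as a /32) falls inside the node's affine block
# 192.168.90.192/26 that ipam.go logged for host 172.31.18.182.
import ipaddress

block = ipaddress.ip_network("192.168.90.192/26")
pod_ip = ipaddress.ip_address("192.168.90.194")

print(pod_ip in block)           # True: the /32 comes out of the node's block
print(block.num_addresses)       # 64 addresses in a /26 block
print(block[0], "-", block[-1])  # 192.168.90.192 - 192.168.90.255
```

The same block later supplies 192.168.90.195 to nfs-server-provisioner-0 further down in this journal.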
Jan 29 10:57:54.997507 kubelet[2409]: E0129 10:57:54.997443 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:55.998512 kubelet[2409]: E0129 10:57:55.998204 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:56.187548 containerd[1937]: time="2025-01-29T10:57:56.187446157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:56.189678 containerd[1937]: time="2025-01-29T10:57:56.189595657Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67680490" Jan 29 10:57:56.191171 containerd[1937]: time="2025-01-29T10:57:56.191078365Z" level=info msg="ImageCreate event name:\"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:56.198155 containerd[1937]: time="2025-01-29T10:57:56.198097381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:56.200135 containerd[1937]: time="2025-01-29T10:57:56.199922425Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 5.30435801s" Jan 29 10:57:56.200135 containerd[1937]: time="2025-01-29T10:57:56.199977697Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 29 10:57:56.203062 containerd[1937]: time="2025-01-29T10:57:56.202771609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 10:57:56.206030 containerd[1937]: time="2025-01-29T10:57:56.205979437Z" level=info msg="CreateContainer within sandbox \"f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 10:57:56.230877 containerd[1937]: time="2025-01-29T10:57:56.230815489Z" level=info msg="CreateContainer within sandbox \"f69bd28c2cd15a5f78aba23675f1eee612b8e49b199af13d8a635e24193005ed\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f9e26b66f8ce33de30eb3643106674646bc5375da28fb208f2b5921e74f2c7d9\"" Jan 29 10:57:56.231732 containerd[1937]: time="2025-01-29T10:57:56.231675073Z" level=info msg="StartContainer for \"f9e26b66f8ce33de30eb3643106674646bc5375da28fb208f2b5921e74f2c7d9\"" Jan 29 10:57:56.259238 ntpd[1924]: Listen normally on 7 vxlan.calico 192.168.90.192:123 Jan 29 10:57:56.260910 ntpd[1924]: 29 Jan 10:57:56 ntpd[1924]: Listen normally on 7 vxlan.calico 192.168.90.192:123 Jan 29 10:57:56.260910 ntpd[1924]: 29 Jan 10:57:56 ntpd[1924]: Listen normally on 8 cali8b7d8d321b7 [fe80::ecee:eeff:feee:eeee%3]:123 Jan 29 10:57:56.260910 ntpd[1924]: 29 Jan 10:57:56 ntpd[1924]: Listen normally on 9 calide2da342552 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 29 10:57:56.260910 ntpd[1924]: 29 Jan 10:57:56 ntpd[1924]: Listen normally on 10 vxlan.calico [fe80::646f:4dff:fea6:f8e2%5]:123 Jan 29 10:57:56.259358 ntpd[1924]: Listen normally on 8 cali8b7d8d321b7 
[fe80::ecee:eeff:feee:eeee%3]:123 Jan 29 10:57:56.259437 ntpd[1924]: Listen normally on 9 calide2da342552 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 29 10:57:56.259585 ntpd[1924]: Listen normally on 10 vxlan.calico [fe80::646f:4dff:fea6:f8e2%5]:123 Jan 29 10:57:56.287825 systemd[1]: Started cri-containerd-f9e26b66f8ce33de30eb3643106674646bc5375da28fb208f2b5921e74f2c7d9.scope - libcontainer container f9e26b66f8ce33de30eb3643106674646bc5375da28fb208f2b5921e74f2c7d9. Jan 29 10:57:56.336819 containerd[1937]: time="2025-01-29T10:57:56.336636254Z" level=info msg="StartContainer for \"f9e26b66f8ce33de30eb3643106674646bc5375da28fb208f2b5921e74f2c7d9\" returns successfully" Jan 29 10:57:56.998813 kubelet[2409]: E0129 10:57:56.998743 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:57.470830 containerd[1937]: time="2025-01-29T10:57:57.470669871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:57.472411 containerd[1937]: time="2025-01-29T10:57:57.472335975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 10:57:57.474095 containerd[1937]: time="2025-01-29T10:57:57.474039459Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:57.480069 containerd[1937]: time="2025-01-29T10:57:57.479984643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:57.481738 containerd[1937]: time="2025-01-29T10:57:57.481535955Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.278706374s" Jan 29 10:57:57.481738 containerd[1937]: time="2025-01-29T10:57:57.481596123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 10:57:57.485878 containerd[1937]: time="2025-01-29T10:57:57.485621163Z" level=info msg="CreateContainer within sandbox \"543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 10:57:57.518044 containerd[1937]: time="2025-01-29T10:57:57.517969720Z" level=info msg="CreateContainer within sandbox \"543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fff911f5077ccc60656fdf4ef8dff2338b77e9afd468c44c0a202beae09cdb45\"" Jan 29 10:57:57.519019 containerd[1937]: time="2025-01-29T10:57:57.518958520Z" level=info msg="StartContainer for \"fff911f5077ccc60656fdf4ef8dff2338b77e9afd468c44c0a202beae09cdb45\"" Jan 29 10:57:57.570786 systemd[1]: Started cri-containerd-fff911f5077ccc60656fdf4ef8dff2338b77e9afd468c44c0a202beae09cdb45.scope - libcontainer container fff911f5077ccc60656fdf4ef8dff2338b77e9afd468c44c0a202beae09cdb45. 
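The containerd entries above report two completed pulls with a size and a wall-clock duration: ghcr.io/flatcar/nginx:latest (67680368 bytes in 5.30435801s) and ghcr.io/flatcar/calico/csi:v3.29.1 (8834384 bytes in 1.278706374s). A rough sketch turning those reported figures into approximate throughput; the byte counts and durations are copied from the log, while the MiB/s values are only indicative, since download, decompression, and unpacking all overlap inside the reported duration:

```python
# Rough arithmetic on the pull figures containerd reported above; the byte counts
# and durations are taken from the log, the MiB/s values are only indicative.
pulls = {
    "ghcr.io/flatcar/nginx:latest":       (67680368, 5.30435801),
    "ghcr.io/flatcar/calico/csi:v3.29.1": (8834384, 1.278706374),
}

for image, (size_bytes, seconds) in pulls.items():
    mib_per_s = size_bytes / seconds / (1024 * 1024)
    print(f"{image}: {size_bytes} bytes in {seconds:.3f}s ~ {mib_per_s:.1f} MiB/s")
```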
Jan 29 10:57:57.626596 containerd[1937]: time="2025-01-29T10:57:57.626515108Z" level=info msg="StartContainer for \"fff911f5077ccc60656fdf4ef8dff2338b77e9afd468c44c0a202beae09cdb45\" returns successfully" Jan 29 10:57:57.631625 containerd[1937]: time="2025-01-29T10:57:57.631550836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 10:57:57.999141 kubelet[2409]: E0129 10:57:57.999073 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:58.974831 update_engine[1930]: I20250129 10:57:58.973825 1930 update_attempter.cc:509] Updating boot flags... Jan 29 10:57:59.000535 kubelet[2409]: E0129 10:57:58.999621 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:59.093844 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3892) Jan 29 10:57:59.439533 containerd[1937]: time="2025-01-29T10:57:59.438768605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:59.444596 containerd[1937]: time="2025-01-29T10:57:59.444361181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 10:57:59.453547 containerd[1937]: time="2025-01-29T10:57:59.453152753Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:59.471034 containerd[1937]: time="2025-01-29T10:57:59.470779265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:59.477071 containerd[1937]: time="2025-01-29T10:57:59.476954321Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.845211485s" Jan 29 10:57:59.477887 containerd[1937]: time="2025-01-29T10:57:59.477082241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 10:57:59.487154 containerd[1937]: time="2025-01-29T10:57:59.486739301Z" level=info msg="CreateContainer within sandbox \"543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 10:57:59.497640 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3894) Jan 29 10:57:59.519280 containerd[1937]: time="2025-01-29T10:57:59.519198690Z" level=info msg="CreateContainer within sandbox \"543d9fc943f069d38d4907aa32742eec49d56e116605e0a72a1f1d7c6ea5557e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e3fc0833f973d7c695b113438040cb92cd1b408d90f5024856cddcc88da6c197\"" Jan 29 10:57:59.520140 containerd[1937]: 
time="2025-01-29T10:57:59.520088574Z" level=info msg="StartContainer for \"e3fc0833f973d7c695b113438040cb92cd1b408d90f5024856cddcc88da6c197\"" Jan 29 10:57:59.601852 systemd[1]: Started cri-containerd-e3fc0833f973d7c695b113438040cb92cd1b408d90f5024856cddcc88da6c197.scope - libcontainer container e3fc0833f973d7c695b113438040cb92cd1b408d90f5024856cddcc88da6c197. Jan 29 10:57:59.748532 containerd[1937]: time="2025-01-29T10:57:59.743980027Z" level=info msg="StartContainer for \"e3fc0833f973d7c695b113438040cb92cd1b408d90f5024856cddcc88da6c197\" returns successfully" Jan 29 10:58:00.000212 kubelet[2409]: E0129 10:58:00.000053 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:00.156444 kubelet[2409]: I0129 10:58:00.156311 2409 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 10:58:00.156444 kubelet[2409]: I0129 10:58:00.156362 2409 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 10:58:00.463773 kubelet[2409]: I0129 10:58:00.463452 2409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-c77sv" podStartSLOduration=12.15561816 podStartE2EDuration="17.46343175s" podCreationTimestamp="2025-01-29 10:57:43 +0000 UTC" firstStartedPulling="2025-01-29 10:57:50.894647627 +0000 UTC m=+22.131699159" lastFinishedPulling="2025-01-29 10:57:56.202461229 +0000 UTC m=+27.439512749" observedRunningTime="2025-01-29 10:57:56.429439058 +0000 UTC m=+27.666490602" watchObservedRunningTime="2025-01-29 10:58:00.46343175 +0000 UTC m=+31.700483330" Jan 29 10:58:01.000607 kubelet[2409]: E0129 10:58:01.000540 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:02.001167 kubelet[2409]: E0129 10:58:02.001092 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:03.001644 kubelet[2409]: E0129 10:58:03.001578 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:04.002610 kubelet[2409]: E0129 10:58:04.002552 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:05.003669 kubelet[2409]: E0129 10:58:05.003600 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:05.992506 kubelet[2409]: I0129 10:58:05.992381 2409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2rxwp" podStartSLOduration=27.464940676 podStartE2EDuration="35.992357162s" podCreationTimestamp="2025-01-29 10:57:30 +0000 UTC" firstStartedPulling="2025-01-29 10:57:50.956685107 +0000 UTC m=+22.193736639" lastFinishedPulling="2025-01-29 10:57:59.484101593 +0000 UTC m=+30.721153125" observedRunningTime="2025-01-29 10:58:00.46337175 +0000 UTC m=+31.700423282" watchObservedRunningTime="2025-01-29 10:58:05.992357162 +0000 UTC m=+37.229408718" Jan 29 10:58:05.992782 kubelet[2409]: I0129 10:58:05.992750 2409 topology_manager.go:215] "Topology Admit Handler" podUID="09b34da9-5112-499a-a903-6d0d587e01cf" podNamespace="default" podName="nfs-server-provisioner-0" Jan 
29 10:58:06.002643 systemd[1]: Created slice kubepods-besteffort-pod09b34da9_5112_499a_a903_6d0d587e01cf.slice - libcontainer container kubepods-besteffort-pod09b34da9_5112_499a_a903_6d0d587e01cf.slice. Jan 29 10:58:06.004551 kubelet[2409]: E0129 10:58:06.003979 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:06.149452 kubelet[2409]: I0129 10:58:06.149245 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/09b34da9-5112-499a-a903-6d0d587e01cf-data\") pod \"nfs-server-provisioner-0\" (UID: \"09b34da9-5112-499a-a903-6d0d587e01cf\") " pod="default/nfs-server-provisioner-0" Jan 29 10:58:06.149452 kubelet[2409]: I0129 10:58:06.149322 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk8bf\" (UniqueName: \"kubernetes.io/projected/09b34da9-5112-499a-a903-6d0d587e01cf-kube-api-access-mk8bf\") pod \"nfs-server-provisioner-0\" (UID: \"09b34da9-5112-499a-a903-6d0d587e01cf\") " pod="default/nfs-server-provisioner-0" Jan 29 10:58:06.308864 containerd[1937]: time="2025-01-29T10:58:06.308621915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:09b34da9-5112-499a-a903-6d0d587e01cf,Namespace:default,Attempt:0,}" Jan 29 10:58:06.522578 systemd-networkd[1790]: cali60e51b789ff: Link UP Jan 29 10:58:06.524146 systemd-networkd[1790]: cali60e51b789ff: Gained carrier Jan 29 10:58:06.529417 (udev-worker)[4129]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.399 [INFO][4111] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.18.182-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 09b34da9-5112-499a-a903-6d0d587e01cf 1203 0 2025-01-29 10:58:05 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.18.182 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.182-k8s-nfs--server--provisioner--0-" Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.399 [INFO][4111] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.182-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.448 [INFO][4122] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" HandleID="k8s-pod-network.aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" Workload="172.31.18.182-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.469 [INFO][4122] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" HandleID="k8s-pod-network.aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" Workload="172.31.18.182-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c740), Attrs:map[string]string{"namespace":"default", "node":"172.31.18.182", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 10:58:06.44876814 +0000 UTC"}, Hostname:"172.31.18.182", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.470 [INFO][4122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.470 [INFO][4122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.470 [INFO][4122] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.18.182' Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.472 [INFO][4122] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" host="172.31.18.182" Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.478 [INFO][4122] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.18.182" Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.485 [INFO][4122] ipam/ipam.go 489: Trying affinity for 192.168.90.192/26 host="172.31.18.182" Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.487 [INFO][4122] ipam/ipam.go 155: Attempting to load block cidr=192.168.90.192/26 host="172.31.18.182" Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.490 [INFO][4122] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="172.31.18.182" Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.491 [INFO][4122] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" host="172.31.18.182" Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.493 [INFO][4122] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.501 [INFO][4122] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" host="172.31.18.182" Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.514 [INFO][4122] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.90.195/26] block=192.168.90.192/26 handle="k8s-pod-network.aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" host="172.31.18.182" Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.514 [INFO][4122] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.90.195/26] handle="k8s-pod-network.aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" host="172.31.18.182" Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.514 [INFO][4122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 10:58:06.547073 containerd[1937]: 2025-01-29 10:58:06.514 [INFO][4122] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.195/26] IPv6=[] ContainerID="aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" HandleID="k8s-pod-network.aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" Workload="172.31.18.182-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:58:06.548426 containerd[1937]: 2025-01-29 10:58:06.518 [INFO][4111] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.182-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.182-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"09b34da9-5112-499a-a903-6d0d587e01cf", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 58, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.182", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.90.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:58:06.548426 containerd[1937]: 2025-01-29 10:58:06.518 [INFO][4111] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.90.195/32] ContainerID="aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.182-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:58:06.548426 containerd[1937]: 2025-01-29 10:58:06.518 [INFO][4111] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.182-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:58:06.548426 containerd[1937]: 2025-01-29 10:58:06.523 [INFO][4111] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.182-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:58:06.550430 containerd[1937]: 2025-01-29 10:58:06.525 [INFO][4111] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.182-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.182-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"09b34da9-5112-499a-a903-6d0d587e01cf", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 58, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.182", ContainerID:"aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", 
IPNetworks:[]string{"192.168.90.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"aa:3f:02:0d:b0:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:58:06.550430 containerd[1937]: 2025-01-29 10:58:06.540 [INFO][4111] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.18.182-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:58:06.591653 containerd[1937]: time="2025-01-29T10:58:06.591377305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:58:06.591934 containerd[1937]: time="2025-01-29T10:58:06.591856933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:58:06.592110 containerd[1937]: time="2025-01-29T10:58:06.592057189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:58:06.592424 containerd[1937]: time="2025-01-29T10:58:06.592375081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:58:06.635812 systemd[1]: Started cri-containerd-aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af.scope - libcontainer container aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af. 
Jan 29 10:58:06.696605 containerd[1937]: time="2025-01-29T10:58:06.696429373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:09b34da9-5112-499a-a903-6d0d587e01cf,Namespace:default,Attempt:0,} returns sandbox id \"aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af\"" Jan 29 10:58:06.700006 containerd[1937]: time="2025-01-29T10:58:06.699548389Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 10:58:07.005205 kubelet[2409]: E0129 10:58:07.005133 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:08.006229 kubelet[2409]: E0129 10:58:08.006085 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:08.483827 systemd-networkd[1790]: cali60e51b789ff: Gained IPv6LL Jan 29 10:58:09.007142 kubelet[2409]: E0129 10:58:09.007089 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:09.595106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3193185859.mount: Deactivated successfully. Jan 29 10:58:09.980308 kubelet[2409]: E0129 10:58:09.980047 2409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:10.009146 kubelet[2409]: E0129 10:58:10.008222 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:10.036458 systemd[1]: run-containerd-runc-k8s.io-059f96b3cc4d0e846c65bc86f65922ad9d24d90aa58cacc9b4d511c63089a842-runc.n1raDY.mount: Deactivated successfully. Jan 29 10:58:11.009224 kubelet[2409]: E0129 10:58:11.009142 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:11.259323 ntpd[1924]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 29 10:58:11.261019 ntpd[1924]: 29 Jan 10:58:11 ntpd[1924]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 29 10:58:12.009530 kubelet[2409]: E0129 10:58:12.009428 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:12.683549 containerd[1937]: time="2025-01-29T10:58:12.682618963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:58:12.685082 containerd[1937]: time="2025-01-29T10:58:12.684996211Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Jan 29 10:58:12.687144 containerd[1937]: time="2025-01-29T10:58:12.687048859Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:58:12.692519 containerd[1937]: time="2025-01-29T10:58:12.692419159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:58:12.694662 containerd[1937]: time="2025-01-29T10:58:12.694406143Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.994800178s" Jan 29 10:58:12.694662 containerd[1937]: time="2025-01-29T10:58:12.694466287Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 29 10:58:12.698994 containerd[1937]: time="2025-01-29T10:58:12.698918083Z" level=info msg="CreateContainer within sandbox \"aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 10:58:12.722649 containerd[1937]: time="2025-01-29T10:58:12.722569975Z" level=info msg="CreateContainer within sandbox \"aa73d2febd043cce4592490fef9bbbf79e338908bf4daa80d09388af544d49af\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a402fa77fb24765be3a71c72bcc017bf5fe89bf62a468d68f9c6d0afe954e2cc\"" Jan 29 10:58:12.724943 containerd[1937]: time="2025-01-29T10:58:12.723237751Z" level=info msg="StartContainer for \"a402fa77fb24765be3a71c72bcc017bf5fe89bf62a468d68f9c6d0afe954e2cc\"" Jan 29 10:58:12.776828 systemd[1]: Started cri-containerd-a402fa77fb24765be3a71c72bcc017bf5fe89bf62a468d68f9c6d0afe954e2cc.scope - libcontainer container a402fa77fb24765be3a71c72bcc017bf5fe89bf62a468d68f9c6d0afe954e2cc. Jan 29 10:58:12.841813 containerd[1937]: time="2025-01-29T10:58:12.841622396Z" level=info msg="StartContainer for \"a402fa77fb24765be3a71c72bcc017bf5fe89bf62a468d68f9c6d0afe954e2cc\" returns successfully" Jan 29 10:58:13.009983 kubelet[2409]: E0129 10:58:13.009921 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:13.501887 kubelet[2409]: I0129 10:58:13.501814 2409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.5044267810000003 podStartE2EDuration="8.501792079s" podCreationTimestamp="2025-01-29 10:58:05 +0000 UTC" firstStartedPulling="2025-01-29 10:58:06.698786353 +0000 UTC m=+37.935837885" lastFinishedPulling="2025-01-29 10:58:12.696151651 +0000 UTC m=+43.933203183" observedRunningTime="2025-01-29 10:58:13.501558559 +0000 UTC m=+44.738610115" watchObservedRunningTime="2025-01-29 10:58:13.501792079 +0000 UTC m=+44.738843623" Jan 29 10:58:14.010600 kubelet[2409]: E0129 10:58:14.010534 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:15.010773 kubelet[2409]: E0129 10:58:15.010691 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:16.011598 kubelet[2409]: E0129 10:58:16.011533 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:17.012628 kubelet[2409]: E0129 10:58:17.012569 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:18.012979 kubelet[2409]: E0129 10:58:18.012915 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:19.013535 kubelet[2409]: E0129 
10:58:19.013429 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:20.014434 kubelet[2409]: E0129 10:58:20.014358 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:21.014596 kubelet[2409]: E0129 10:58:21.014519 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:22.014872 kubelet[2409]: E0129 10:58:22.014799 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:23.015654 kubelet[2409]: E0129 10:58:23.015595 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:24.016034 kubelet[2409]: E0129 10:58:24.015959 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:25.016438 kubelet[2409]: E0129 10:58:25.016369 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:26.016623 kubelet[2409]: E0129 10:58:26.016553 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:27.017275 kubelet[2409]: E0129 10:58:27.017210 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:28.017690 kubelet[2409]: E0129 10:58:28.017619 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:29.018438 kubelet[2409]: E0129 10:58:29.018376 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:29.980340 kubelet[2409]: E0129 10:58:29.980278 2409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:30.011656 containerd[1937]: time="2025-01-29T10:58:30.011599629Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\"" Jan 29 10:58:30.012583 containerd[1937]: time="2025-01-29T10:58:30.011778405Z" level=info msg="TearDown network for sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" successfully" Jan 29 10:58:30.012583 containerd[1937]: time="2025-01-29T10:58:30.011801913Z" level=info msg="StopPodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" returns successfully" Jan 29 10:58:30.013264 containerd[1937]: time="2025-01-29T10:58:30.013119129Z" level=info msg="RemovePodSandbox for \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\"" Jan 29 10:58:30.013264 containerd[1937]: time="2025-01-29T10:58:30.013186209Z" level=info msg="Forcibly stopping sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\"" Jan 29 10:58:30.013473 containerd[1937]: time="2025-01-29T10:58:30.013321245Z" level=info msg="TearDown network for sandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" successfully" Jan 29 10:58:30.017901 containerd[1937]: time="2025-01-29T10:58:30.017842137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\": an error occurred 
when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:58:30.018155 containerd[1937]: time="2025-01-29T10:58:30.017928561Z" level=info msg="RemovePodSandbox \"15f7ffedb0ee8bae0dc8d6abd9a4f27ecb9ed9c0320b71a296b9dc031aa4d54e\" returns successfully" Jan 29 10:58:30.018529 kubelet[2409]: E0129 10:58:30.018471 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:30.020093 containerd[1937]: time="2025-01-29T10:58:30.018767721Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\"" Jan 29 10:58:30.020093 containerd[1937]: time="2025-01-29T10:58:30.018927969Z" level=info msg="TearDown network for sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" successfully" Jan 29 10:58:30.020093 containerd[1937]: time="2025-01-29T10:58:30.018953229Z" level=info msg="StopPodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" returns successfully" Jan 29 10:58:30.020093 containerd[1937]: time="2025-01-29T10:58:30.019619997Z" level=info msg="RemovePodSandbox for \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\"" Jan 29 10:58:30.020093 containerd[1937]: time="2025-01-29T10:58:30.019660101Z" level=info msg="Forcibly stopping sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\"" Jan 29 10:58:30.020093 containerd[1937]: time="2025-01-29T10:58:30.019818993Z" level=info msg="TearDown network for sandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" successfully" Jan 29 10:58:30.024682 containerd[1937]: time="2025-01-29T10:58:30.024579609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
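The kubelet pod_startup_latency_tracker entries in this journal (for nginx-deployment-85f456d6dd-c77sv, csi-node-driver-2rxwp and, a little earlier, nfs-server-provisioner-0) quote a podStartE2EDuration alongside podCreationTimestamp and observedRunningTime. A sketch re-deriving the figure for nfs-server-provisioner-0 from the two wall-clock timestamps in that entry; kubelet itself uses the monotonic clock (the m=+... offsets), so this only approximately reproduces the reported 8.501792079s:

```python
# Re-derive the nfs-server-provisioner-0 end-to-end startup duration from the
# podCreationTimestamp (2025-01-29 10:58:05 UTC) and observedRunningTime
# (2025-01-29 10:58:13.501558559 UTC, truncated here to microseconds) quoted in
# the pod_startup_latency_tracker entry above.
from datetime import datetime, timezone

created = datetime(2025, 1, 29, 10, 58, 5, tzinfo=timezone.utc)
running = datetime(2025, 1, 29, 10, 58, 13, 501558, tzinfo=timezone.utc)

print((running - created).total_seconds())  # ~8.501558 s, vs. reported 8.501792079s
```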
Jan 29 10:58:30.024682 containerd[1937]: time="2025-01-29T10:58:30.024655785Z" level=info msg="RemovePodSandbox \"8f3f30365f968489745b5a1f552fdee27dd1c512ba80a6f76930272f5ba9afae\" returns successfully" Jan 29 10:58:30.025608 containerd[1937]: time="2025-01-29T10:58:30.025473957Z" level=info msg="StopPodSandbox for \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\"" Jan 29 10:58:30.025754 containerd[1937]: time="2025-01-29T10:58:30.025660749Z" level=info msg="TearDown network for sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" successfully" Jan 29 10:58:30.025754 containerd[1937]: time="2025-01-29T10:58:30.025683189Z" level=info msg="StopPodSandbox for \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" returns successfully" Jan 29 10:58:30.026328 containerd[1937]: time="2025-01-29T10:58:30.026284509Z" level=info msg="RemovePodSandbox for \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\"" Jan 29 10:58:30.026411 containerd[1937]: time="2025-01-29T10:58:30.026340237Z" level=info msg="Forcibly stopping sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\"" Jan 29 10:58:30.026547 containerd[1937]: time="2025-01-29T10:58:30.026506977Z" level=info msg="TearDown network for sandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" successfully" Jan 29 10:58:30.030685 containerd[1937]: time="2025-01-29T10:58:30.030614493Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:58:30.030812 containerd[1937]: time="2025-01-29T10:58:30.030697881Z" level=info msg="RemovePodSandbox \"952508b2e82f854f7f6c43699f8e522bbe4579227b5252f3648f292efbeac835\" returns successfully" Jan 29 10:58:30.031496 containerd[1937]: time="2025-01-29T10:58:30.031443357Z" level=info msg="StopPodSandbox for \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\"" Jan 29 10:58:30.031667 containerd[1937]: time="2025-01-29T10:58:30.031631697Z" level=info msg="TearDown network for sandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\" successfully" Jan 29 10:58:30.031753 containerd[1937]: time="2025-01-29T10:58:30.031663881Z" level=info msg="StopPodSandbox for \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\" returns successfully" Jan 29 10:58:30.033533 containerd[1937]: time="2025-01-29T10:58:30.032123793Z" level=info msg="RemovePodSandbox for \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\"" Jan 29 10:58:30.033533 containerd[1937]: time="2025-01-29T10:58:30.032167461Z" level=info msg="Forcibly stopping sandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\"" Jan 29 10:58:30.033533 containerd[1937]: time="2025-01-29T10:58:30.032289585Z" level=info msg="TearDown network for sandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\" successfully" Jan 29 10:58:30.037220 containerd[1937]: time="2025-01-29T10:58:30.037164897Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:58:30.037544 containerd[1937]: time="2025-01-29T10:58:30.037471557Z" level=info msg="RemovePodSandbox \"48fc6c7d80f4263fc02d59daf82db28871f1065d37fd89fb1c95a1826c1f6eaa\" returns successfully" Jan 29 10:58:30.038472 containerd[1937]: time="2025-01-29T10:58:30.038435805Z" level=info msg="StopPodSandbox for \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\"" Jan 29 10:58:30.038775 containerd[1937]: time="2025-01-29T10:58:30.038746245Z" level=info msg="TearDown network for sandbox \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\" successfully" Jan 29 10:58:30.038884 containerd[1937]: time="2025-01-29T10:58:30.038857749Z" level=info msg="StopPodSandbox for \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\" returns successfully" Jan 29 10:58:30.039739 containerd[1937]: time="2025-01-29T10:58:30.039693597Z" level=info msg="RemovePodSandbox for \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\"" Jan 29 10:58:30.040101 containerd[1937]: time="2025-01-29T10:58:30.039888621Z" level=info msg="Forcibly stopping sandbox \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\"" Jan 29 10:58:30.040101 containerd[1937]: time="2025-01-29T10:58:30.040018665Z" level=info msg="TearDown network for sandbox \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\" successfully" Jan 29 10:58:30.044980 containerd[1937]: time="2025-01-29T10:58:30.044781213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:58:30.044980 containerd[1937]: time="2025-01-29T10:58:30.044853561Z" level=info msg="RemovePodSandbox \"fe6345410a8a34f3665279221ed1d63788ec5cbfa6f9b8470c57870b02a163ce\" returns successfully" Jan 29 10:58:30.046202 containerd[1937]: time="2025-01-29T10:58:30.045909873Z" level=info msg="StopPodSandbox for \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\"" Jan 29 10:58:30.046202 containerd[1937]: time="2025-01-29T10:58:30.046067517Z" level=info msg="TearDown network for sandbox \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\" successfully" Jan 29 10:58:30.046202 containerd[1937]: time="2025-01-29T10:58:30.046087953Z" level=info msg="StopPodSandbox for \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\" returns successfully" Jan 29 10:58:30.047539 containerd[1937]: time="2025-01-29T10:58:30.047027121Z" level=info msg="RemovePodSandbox for \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\"" Jan 29 10:58:30.047539 containerd[1937]: time="2025-01-29T10:58:30.047082945Z" level=info msg="Forcibly stopping sandbox \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\"" Jan 29 10:58:30.047539 containerd[1937]: time="2025-01-29T10:58:30.047203113Z" level=info msg="TearDown network for sandbox \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\" successfully" Jan 29 10:58:30.051567 containerd[1937]: time="2025-01-29T10:58:30.051463065Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:58:30.051681 containerd[1937]: time="2025-01-29T10:58:30.051624681Z" level=info msg="RemovePodSandbox \"26efa8d5dc9b4453d83852cf606edb02dfdb69f1af6ca7aa1b41caf085a429d2\" returns successfully" Jan 29 10:58:30.052289 containerd[1937]: time="2025-01-29T10:58:30.052251501Z" level=info msg="StopPodSandbox for \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\"" Jan 29 10:58:30.052764 containerd[1937]: time="2025-01-29T10:58:30.052626897Z" level=info msg="TearDown network for sandbox \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\" successfully" Jan 29 10:58:30.052764 containerd[1937]: time="2025-01-29T10:58:30.052655865Z" level=info msg="StopPodSandbox for \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\" returns successfully" Jan 29 10:58:30.053194 containerd[1937]: time="2025-01-29T10:58:30.053150241Z" level=info msg="RemovePodSandbox for \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\"" Jan 29 10:58:30.053270 containerd[1937]: time="2025-01-29T10:58:30.053198997Z" level=info msg="Forcibly stopping sandbox \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\"" Jan 29 10:58:30.053354 containerd[1937]: time="2025-01-29T10:58:30.053318829Z" level=info msg="TearDown network for sandbox \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\" successfully" Jan 29 10:58:30.058336 containerd[1937]: time="2025-01-29T10:58:30.058272105Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:58:30.058854 containerd[1937]: time="2025-01-29T10:58:30.058352709Z" level=info msg="RemovePodSandbox \"3983855039260ca8b3ea12a4e1a25df565a19223bf4012a33aced926323ebbd0\" returns successfully" Jan 29 10:58:30.059597 containerd[1937]: time="2025-01-29T10:58:30.059154933Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\"" Jan 29 10:58:30.059597 containerd[1937]: time="2025-01-29T10:58:30.059310345Z" level=info msg="TearDown network for sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" successfully" Jan 29 10:58:30.059597 containerd[1937]: time="2025-01-29T10:58:30.059331681Z" level=info msg="StopPodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" returns successfully" Jan 29 10:58:30.060189 containerd[1937]: time="2025-01-29T10:58:30.060133245Z" level=info msg="RemovePodSandbox for \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\"" Jan 29 10:58:30.060291 containerd[1937]: time="2025-01-29T10:58:30.060195249Z" level=info msg="Forcibly stopping sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\"" Jan 29 10:58:30.060367 containerd[1937]: time="2025-01-29T10:58:30.060328269Z" level=info msg="TearDown network for sandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" successfully" Jan 29 10:58:30.065164 containerd[1937]: time="2025-01-29T10:58:30.065058525Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:58:30.065164 containerd[1937]: time="2025-01-29T10:58:30.065150637Z" level=info msg="RemovePodSandbox \"bfbf1a21dde4d94fc7c4dbdedc6ea4c5a837a764d44a7fa690c8b63655d58c3a\" returns successfully"
Jan 29 10:58:30.066507 containerd[1937]: time="2025-01-29T10:58:30.065993733Z" level=info msg="StopPodSandbox for \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\""
Jan 29 10:58:30.066507 containerd[1937]: time="2025-01-29T10:58:30.066150201Z" level=info msg="TearDown network for sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" successfully"
Jan 29 10:58:30.066507 containerd[1937]: time="2025-01-29T10:58:30.066174177Z" level=info msg="StopPodSandbox for \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" returns successfully"
Jan 29 10:58:30.067215 containerd[1937]: time="2025-01-29T10:58:30.067155801Z" level=info msg="RemovePodSandbox for \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\""
Jan 29 10:58:30.067317 containerd[1937]: time="2025-01-29T10:58:30.067228125Z" level=info msg="Forcibly stopping sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\""
Jan 29 10:58:30.067432 containerd[1937]: time="2025-01-29T10:58:30.067358853Z" level=info msg="TearDown network for sandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" successfully"
Jan 29 10:58:30.071729 containerd[1937]: time="2025-01-29T10:58:30.071662593Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:58:30.071995 containerd[1937]: time="2025-01-29T10:58:30.071740185Z" level=info msg="RemovePodSandbox \"30159b8ee03c832775ff8dc32cf5a15ce5742bf5035e70fbcd044e44d08dcc48\" returns successfully"
Jan 29 10:58:30.072985 containerd[1937]: time="2025-01-29T10:58:30.072725517Z" level=info msg="StopPodSandbox for \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\""
Jan 29 10:58:30.072985 containerd[1937]: time="2025-01-29T10:58:30.072882573Z" level=info msg="TearDown network for sandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\" successfully"
Jan 29 10:58:30.072985 containerd[1937]: time="2025-01-29T10:58:30.072905577Z" level=info msg="StopPodSandbox for \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\" returns successfully"
Jan 29 10:58:30.073490 containerd[1937]: time="2025-01-29T10:58:30.073430217Z" level=info msg="RemovePodSandbox for \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\""
Jan 29 10:58:30.073578 containerd[1937]: time="2025-01-29T10:58:30.073508577Z" level=info msg="Forcibly stopping sandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\""
Jan 29 10:58:30.073885 containerd[1937]: time="2025-01-29T10:58:30.073846461Z" level=info msg="TearDown network for sandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\" successfully"
Jan 29 10:58:30.078493 containerd[1937]: time="2025-01-29T10:58:30.078407769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:58:30.078624 containerd[1937]: time="2025-01-29T10:58:30.078506997Z" level=info msg="RemovePodSandbox \"1cde8198eb9fdfb7ca621b3b5f0b298e4ca867bdca1fbefbc59219be764de890\" returns successfully"
Jan 29 10:58:30.079528 containerd[1937]: time="2025-01-29T10:58:30.079165533Z" level=info msg="StopPodSandbox for \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\""
Jan 29 10:58:30.079678 containerd[1937]: time="2025-01-29T10:58:30.079585497Z" level=info msg="TearDown network for sandbox \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\" successfully"
Jan 29 10:58:30.079678 containerd[1937]: time="2025-01-29T10:58:30.079610961Z" level=info msg="StopPodSandbox for \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\" returns successfully"
Jan 29 10:58:30.080100 containerd[1937]: time="2025-01-29T10:58:30.080044773Z" level=info msg="RemovePodSandbox for \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\""
Jan 29 10:58:30.080185 containerd[1937]: time="2025-01-29T10:58:30.080096109Z" level=info msg="Forcibly stopping sandbox \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\""
Jan 29 10:58:30.080305 containerd[1937]: time="2025-01-29T10:58:30.080238501Z" level=info msg="TearDown network for sandbox \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\" successfully"
Jan 29 10:58:30.085096 containerd[1937]: time="2025-01-29T10:58:30.085025769Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:58:30.085243 containerd[1937]: time="2025-01-29T10:58:30.085108101Z" level=info msg="RemovePodSandbox \"0e758d722cd53ef3e6636117c0f1e8e7bb045fcf1731c1d6fccd6a280265ae73\" returns successfully"
Jan 29 10:58:30.085871 containerd[1937]: time="2025-01-29T10:58:30.085832589Z" level=info msg="StopPodSandbox for \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\""
Jan 29 10:58:30.086309 containerd[1937]: time="2025-01-29T10:58:30.086167209Z" level=info msg="TearDown network for sandbox \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\" successfully"
Jan 29 10:58:30.086309 containerd[1937]: time="2025-01-29T10:58:30.086195865Z" level=info msg="StopPodSandbox for \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\" returns successfully"
Jan 29 10:58:30.087074 containerd[1937]: time="2025-01-29T10:58:30.087027105Z" level=info msg="RemovePodSandbox for \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\""
Jan 29 10:58:30.087199 containerd[1937]: time="2025-01-29T10:58:30.087083145Z" level=info msg="Forcibly stopping sandbox \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\""
Jan 29 10:58:30.087260 containerd[1937]: time="2025-01-29T10:58:30.087210957Z" level=info msg="TearDown network for sandbox \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\" successfully"
Jan 29 10:58:30.093873 containerd[1937]: time="2025-01-29T10:58:30.093808977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:58:30.094004 containerd[1937]: time="2025-01-29T10:58:30.093894429Z" level=info msg="RemovePodSandbox \"f58d9e94013b5ec8a399f6327067310d32fcb258f5efbc14373bdc154a59f3bb\" returns successfully"
Jan 29 10:58:30.094988 containerd[1937]: time="2025-01-29T10:58:30.094663941Z" level=info msg="StopPodSandbox for \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\""
Jan 29 10:58:30.094988 containerd[1937]: time="2025-01-29T10:58:30.094817925Z" level=info msg="TearDown network for sandbox \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\" successfully"
Jan 29 10:58:30.094988 containerd[1937]: time="2025-01-29T10:58:30.094843305Z" level=info msg="StopPodSandbox for \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\" returns successfully"
Jan 29 10:58:30.095546 containerd[1937]: time="2025-01-29T10:58:30.095472369Z" level=info msg="RemovePodSandbox for \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\""
Jan 29 10:58:30.096504 containerd[1937]: time="2025-01-29T10:58:30.095668581Z" level=info msg="Forcibly stopping sandbox \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\""
Jan 29 10:58:30.096504 containerd[1937]: time="2025-01-29T10:58:30.095794725Z" level=info msg="TearDown network for sandbox \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\" successfully"
Jan 29 10:58:30.115708 containerd[1937]: time="2025-01-29T10:58:30.115633930Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:58:30.115867 containerd[1937]: time="2025-01-29T10:58:30.115748674Z" level=info msg="RemovePodSandbox \"abcd3a17b850f78d466a231c797e7a25d1efc766349fbd954af30ef370884e06\" returns successfully"
Jan 29 10:58:31.018961 kubelet[2409]: E0129 10:58:31.018883 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:32.020064 kubelet[2409]: E0129 10:58:32.019992 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:33.020805 kubelet[2409]: E0129 10:58:33.020723 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:34.021799 kubelet[2409]: E0129 10:58:34.021734 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:35.022182 kubelet[2409]: E0129 10:58:35.022114 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:36.023167 kubelet[2409]: E0129 10:58:36.023101 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:37.024078 kubelet[2409]: E0129 10:58:37.024016 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:37.904852 kubelet[2409]: I0129 10:58:37.904782 2409 topology_manager.go:215] "Topology Admit Handler" podUID="4e61e6be-09ae-47f7-8ae4-60c616051e10" podNamespace="default" podName="test-pod-1"
Jan 29 10:58:37.917073 systemd[1]: Created slice kubepods-besteffort-pod4e61e6be_09ae_47f7_8ae4_60c616051e10.slice - libcontainer container kubepods-besteffort-pod4e61e6be_09ae_47f7_8ae4_60c616051e10.slice.
Jan 29 10:58:38.025196 kubelet[2409]: E0129 10:58:38.025147 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:38.042767 kubelet[2409]: I0129 10:58:38.042387 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-33e2e8da-8426-434f-97eb-9e5f21bffab4\" (UniqueName: \"kubernetes.io/nfs/4e61e6be-09ae-47f7-8ae4-60c616051e10-pvc-33e2e8da-8426-434f-97eb-9e5f21bffab4\") pod \"test-pod-1\" (UID: \"4e61e6be-09ae-47f7-8ae4-60c616051e10\") " pod="default/test-pod-1"
Jan 29 10:58:38.042767 kubelet[2409]: I0129 10:58:38.042445 2409 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9mfw\" (UniqueName: \"kubernetes.io/projected/4e61e6be-09ae-47f7-8ae4-60c616051e10-kube-api-access-j9mfw\") pod \"test-pod-1\" (UID: \"4e61e6be-09ae-47f7-8ae4-60c616051e10\") " pod="default/test-pod-1"
Jan 29 10:58:38.179597 kernel: FS-Cache: Loaded
Jan 29 10:58:38.222987 kernel: RPC: Registered named UNIX socket transport module.
Jan 29 10:58:38.223112 kernel: RPC: Registered udp transport module.
Jan 29 10:58:38.223147 kernel: RPC: Registered tcp transport module.
Jan 29 10:58:38.223752 kernel: RPC: Registered tcp-with-tls transport module.
Jan 29 10:58:38.224771 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 29 10:58:38.562184 kernel: NFS: Registering the id_resolver key type
Jan 29 10:58:38.562386 kernel: Key type id_resolver registered
Jan 29 10:58:38.562446 kernel: Key type id_legacy registered
Jan 29 10:58:38.597020 nfsidmap[4350]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 29 10:58:38.603630 nfsidmap[4351]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Jan 29 10:58:38.823151 containerd[1937]: time="2025-01-29T10:58:38.822905241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4e61e6be-09ae-47f7-8ae4-60c616051e10,Namespace:default,Attempt:0,}"
Jan 29 10:58:39.017287 systemd-networkd[1790]: cali5ec59c6bf6e: Link UP
Jan 29 10:58:39.018113 (udev-worker)[4348]: Network interface NamePolicy= disabled on kernel command line.
Jan 29 10:58:39.018849 systemd-networkd[1790]: cali5ec59c6bf6e: Gained carrier
Jan 29 10:58:39.025714 kubelet[2409]: E0129 10:58:39.025647 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.905 [INFO][4352] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.18.182-k8s-test--pod--1-eth0 default 4e61e6be-09ae-47f7-8ae4-60c616051e10 1321 0 2025-01-29 10:58:06 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.18.182 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.182-k8s-test--pod--1-"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.906 [INFO][4352] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.182-k8s-test--pod--1-eth0"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.953 [INFO][4364] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" HandleID="k8s-pod-network.0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" Workload="172.31.18.182-k8s-test--pod--1-eth0"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.971 [INFO][4364] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" HandleID="k8s-pod-network.0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" Workload="172.31.18.182-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400040a750), Attrs:map[string]string{"namespace":"default", "node":"172.31.18.182", "pod":"test-pod-1", "timestamp":"2025-01-29 10:58:38.953107905 +0000 UTC"}, Hostname:"172.31.18.182", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.971 [INFO][4364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.971 [INFO][4364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.971 [INFO][4364] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.18.182'
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.974 [INFO][4364] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" host="172.31.18.182"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.979 [INFO][4364] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.18.182"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.985 [INFO][4364] ipam/ipam.go 489: Trying affinity for 192.168.90.192/26 host="172.31.18.182"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.987 [INFO][4364] ipam/ipam.go 155: Attempting to load block cidr=192.168.90.192/26 host="172.31.18.182"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.991 [INFO][4364] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="172.31.18.182"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.991 [INFO][4364] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" host="172.31.18.182"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.993 [INFO][4364] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:38.998 [INFO][4364] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" host="172.31.18.182"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:39.010 [INFO][4364] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.90.196/26] block=192.168.90.192/26 handle="k8s-pod-network.0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" host="172.31.18.182"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:39.010 [INFO][4364] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.90.196/26] handle="k8s-pod-network.0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" host="172.31.18.182"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:39.010 [INFO][4364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:39.010 [INFO][4364] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.196/26] IPv6=[] ContainerID="0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" HandleID="k8s-pod-network.0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" Workload="172.31.18.182-k8s-test--pod--1-eth0"
Jan 29 10:58:39.039890 containerd[1937]: 2025-01-29 10:58:39.013 [INFO][4352] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.182-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.182-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4e61e6be-09ae-47f7-8ae4-60c616051e10", ResourceVersion:"1321", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 58, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.182", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 10:58:39.041653 containerd[1937]: 2025-01-29 10:58:39.013 [INFO][4352] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.90.196/32] ContainerID="0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.182-k8s-test--pod--1-eth0"
Jan 29 10:58:39.041653 containerd[1937]: 2025-01-29 10:58:39.013 [INFO][4352] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.182-k8s-test--pod--1-eth0"
Jan 29 10:58:39.041653 containerd[1937]: 2025-01-29 10:58:39.019 [INFO][4352] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.182-k8s-test--pod--1-eth0"
Jan 29 10:58:39.041653 containerd[1937]: 2025-01-29 10:58:39.020 [INFO][4352] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.182-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.18.182-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"4e61e6be-09ae-47f7-8ae4-60c616051e10", ResourceVersion:"1321", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 58, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.18.182", ContainerID:"0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"e2:80:e8:8d:50:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 10:58:39.041653 containerd[1937]: 2025-01-29 10:58:39.031 [INFO][4352] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.18.182-k8s-test--pod--1-eth0"
Jan 29 10:58:39.081259 containerd[1937]: time="2025-01-29T10:58:39.080910222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 10:58:39.081259 containerd[1937]: time="2025-01-29T10:58:39.081022266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 10:58:39.081259 containerd[1937]: time="2025-01-29T10:58:39.081059982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:58:39.083070 containerd[1937]: time="2025-01-29T10:58:39.082778382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 10:58:39.112797 systemd[1]: Started cri-containerd-0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108.scope - libcontainer container 0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108.
Jan 29 10:58:39.177763 containerd[1937]: time="2025-01-29T10:58:39.177698719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4e61e6be-09ae-47f7-8ae4-60c616051e10,Namespace:default,Attempt:0,} returns sandbox id \"0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108\""
Jan 29 10:58:39.181979 containerd[1937]: time="2025-01-29T10:58:39.181926823Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 10:58:39.523594 containerd[1937]: time="2025-01-29T10:58:39.522724784Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:58:39.525145 containerd[1937]: time="2025-01-29T10:58:39.524581976Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 29 10:58:39.530132 containerd[1937]: time="2025-01-29T10:58:39.529964324Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 347.977837ms"
Jan 29 10:58:39.530132 containerd[1937]: time="2025-01-29T10:58:39.530018312Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\""
Jan 29 10:58:39.534034 containerd[1937]: time="2025-01-29T10:58:39.533710256Z" level=info msg="CreateContainer within sandbox \"0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 29 10:58:39.560738 containerd[1937]: time="2025-01-29T10:58:39.560668940Z" level=info msg="CreateContainer within sandbox \"0fc960b559685e3f07eff9c62d82bfe7f6415e5164cbfff9ee77b467f48a2108\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"458268fa22a52c8b7e084e500d2ea39b2c3fd9a97bdaa107e76b36c8f1207626\""
Jan 29 10:58:39.561837 containerd[1937]: time="2025-01-29T10:58:39.561703796Z" level=info msg="StartContainer for \"458268fa22a52c8b7e084e500d2ea39b2c3fd9a97bdaa107e76b36c8f1207626\""
Jan 29 10:58:39.616780 systemd[1]: Started cri-containerd-458268fa22a52c8b7e084e500d2ea39b2c3fd9a97bdaa107e76b36c8f1207626.scope - libcontainer container 458268fa22a52c8b7e084e500d2ea39b2c3fd9a97bdaa107e76b36c8f1207626.
Jan 29 10:58:39.661690 containerd[1937]: time="2025-01-29T10:58:39.661628217Z" level=info msg="StartContainer for \"458268fa22a52c8b7e084e500d2ea39b2c3fd9a97bdaa107e76b36c8f1207626\" returns successfully"
Jan 29 10:58:40.026375 kubelet[2409]: E0129 10:58:40.026299 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:40.482933 systemd-networkd[1790]: cali5ec59c6bf6e: Gained IPv6LL
Jan 29 10:58:40.576279 kubelet[2409]: I0129 10:58:40.576187 2409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=34.225773444 podStartE2EDuration="34.576165969s" podCreationTimestamp="2025-01-29 10:58:06 +0000 UTC" firstStartedPulling="2025-01-29 10:58:39.180959791 +0000 UTC m=+70.418011311" lastFinishedPulling="2025-01-29 10:58:39.531352316 +0000 UTC m=+70.768403836" observedRunningTime="2025-01-29 10:58:40.575939445 +0000 UTC m=+71.812991001" watchObservedRunningTime="2025-01-29 10:58:40.576165969 +0000 UTC m=+71.813217501"
Jan 29 10:58:41.026568 kubelet[2409]: E0129 10:58:41.026461 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:42.027070 kubelet[2409]: E0129 10:58:42.026972 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:43.027209 kubelet[2409]: E0129 10:58:43.027139 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:43.259259 ntpd[1924]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Jan 29 10:58:43.259809 ntpd[1924]: 29 Jan 10:58:43 ntpd[1924]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Jan 29 10:58:44.027468 kubelet[2409]: E0129 10:58:44.027408 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:45.028073 kubelet[2409]: E0129 10:58:45.028004 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:46.028501 kubelet[2409]: E0129 10:58:46.028411 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:47.028631 kubelet[2409]: E0129 10:58:47.028567 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:48.029720 kubelet[2409]: E0129 10:58:48.029653 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:49.030399 kubelet[2409]: E0129 10:58:49.030329 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:49.979908 kubelet[2409]: E0129 10:58:49.979857 2409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:50.031412 kubelet[2409]: E0129 10:58:50.031346 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:51.031849 kubelet[2409]: E0129 10:58:51.031785 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:52.032766 kubelet[2409]: E0129 10:58:52.032706 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:53.032904 kubelet[2409]: E0129 10:58:53.032840 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:54.033829 kubelet[2409]: E0129 10:58:54.033760 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:55.034915 kubelet[2409]: E0129 10:58:55.034839 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:56.035517 kubelet[2409]: E0129 10:58:56.035448 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:57.036165 kubelet[2409]: E0129 10:58:57.036096 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:58.037173 kubelet[2409]: E0129 10:58:58.037109 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:58:59.037902 kubelet[2409]: E0129 10:58:59.037843 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:00.038064 kubelet[2409]: E0129 10:59:00.038006 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:01.038561 kubelet[2409]: E0129 10:59:01.038510 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:01.933459 kubelet[2409]: E0129 10:59:01.933400 2409 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.182?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 10:59:02.038896 kubelet[2409]: E0129 10:59:02.038831 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:03.039223 kubelet[2409]: E0129 10:59:03.039130 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:04.039703 kubelet[2409]: E0129 10:59:04.039641 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:05.039895 kubelet[2409]: E0129 10:59:05.039818 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:06.040425 kubelet[2409]: E0129 10:59:06.040364 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:07.040897 kubelet[2409]: E0129 10:59:07.040834 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:08.041895 kubelet[2409]: E0129 10:59:08.041830 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:09.042366 kubelet[2409]: E0129 10:59:09.042311 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:09.979636 kubelet[2409]: E0129 10:59:09.979539 2409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:10.043197 kubelet[2409]: E0129 10:59:10.043136 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:11.044023 kubelet[2409]: E0129 10:59:11.043959 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:11.934400 kubelet[2409]: E0129 10:59:11.934315 2409 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.182?timeout=10s\": context deadline exceeded"
Jan 29 10:59:12.044602 kubelet[2409]: E0129 10:59:12.044535 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:13.045283 kubelet[2409]: E0129 10:59:13.045221 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:14.045848 kubelet[2409]: E0129 10:59:14.045774 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:15.046318 kubelet[2409]: E0129 10:59:15.046254 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:16.047423 kubelet[2409]: E0129 10:59:16.047362 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:17.047977 kubelet[2409]: E0129 10:59:17.047915 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:18.048554 kubelet[2409]: E0129 10:59:18.048466 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:19.049505 kubelet[2409]: E0129 10:59:19.049432 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:20.050653 kubelet[2409]: E0129 10:59:20.050566 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:21.051790 kubelet[2409]: E0129 10:59:21.051728 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:21.935446 kubelet[2409]: E0129 10:59:21.935363 2409 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.182?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 10:59:22.052565 kubelet[2409]: E0129 10:59:22.052501 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:23.053541 kubelet[2409]: E0129 10:59:23.053458 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:24.054355 kubelet[2409]: E0129 10:59:24.054307 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:25.056030 kubelet[2409]: E0129 10:59:25.055964 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:26.056625 kubelet[2409]: E0129 10:59:26.056549 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:27.056795 kubelet[2409]: E0129 10:59:27.056733 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:28.057811 kubelet[2409]: E0129 10:59:28.057746 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:29.058362 kubelet[2409]: E0129 10:59:29.058297 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:29.979586 kubelet[2409]: E0129 10:59:29.979535 2409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:30.058831 kubelet[2409]: E0129 10:59:30.058769 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:31.059496 kubelet[2409]: E0129 10:59:31.059422 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:31.936796 kubelet[2409]: E0129 10:59:31.936578 2409 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.182?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 10:59:32.060066 kubelet[2409]: E0129 10:59:32.059992 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:33.060297 kubelet[2409]: E0129 10:59:33.060213 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:34.060524 kubelet[2409]: E0129 10:59:34.060442 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:35.060978 kubelet[2409]: E0129 10:59:35.060915 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:36.061398 kubelet[2409]: E0129 10:59:36.061334 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:37.062270 kubelet[2409]: E0129 10:59:37.062206 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:38.062976 kubelet[2409]: E0129 10:59:38.062914 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:39.063717 kubelet[2409]: E0129 10:59:39.063651 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:40.064355 kubelet[2409]: E0129 10:59:40.064279 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:41.065347 kubelet[2409]: E0129 10:59:41.065287 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:41.937162 kubelet[2409]: E0129 10:59:41.937079 2409 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.182?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 29 10:59:41.937162 kubelet[2409]: I0129 10:59:41.937155 2409 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 29 10:59:42.066393 kubelet[2409]: E0129 10:59:42.066334 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:43.067060 kubelet[2409]: E0129 10:59:43.067000 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:44.068127 kubelet[2409]: E0129 10:59:44.068064 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:45.068523 kubelet[2409]: E0129 10:59:45.068426 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:46.069335 kubelet[2409]: E0129 10:59:46.069279 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:47.069548 kubelet[2409]: E0129 10:59:47.069453 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:48.070570 kubelet[2409]: E0129 10:59:48.070508 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:49.071020 kubelet[2409]: E0129 10:59:49.070961 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:49.980304 kubelet[2409]: E0129 10:59:49.980242 2409 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:50.072140 kubelet[2409]: E0129 10:59:50.072100 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:51.072711 kubelet[2409]: E0129 10:59:51.072639 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:51.657520 kubelet[2409]: E0129 10:59:51.654224 2409 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.149:6443/api/v1/namespaces/calico-system/events\": unexpected EOF" event=<
Jan 29 10:59:51.657520 kubelet[2409]: &Event{ObjectMeta:{calico-node-5qkw9.181f24be18f89089 calico-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-node-5qkw9,UID:bb140f78-f754-4fd1-a6db-7a145c56d3a2,APIVersion:v1,ResourceVersion:805,FieldPath:spec.containers{calico-node},},Reason:Unhealthy,Message:Readiness probe failed: 2025-01-29 10:59:40.063 [INFO][331] node/health.go 202: Number of node(s) with BGP peering established = 0
Jan 29 10:59:51.657520 kubelet[2409]: calico/node is not ready: BIRD is not ready: BGP not established with 172.31.16.149
Jan 29 10:59:51.657520 kubelet[2409]: ,Source:EventSource{Component:kubelet,Host:172.31.18.182,},FirstTimestamp:2025-01-29 10:59:40.069630089 +0000 UTC m=+131.306681633,LastTimestamp:2025-01-29 10:59:40.069630089 +0000 UTC m=+131.306681633,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.18.182,}
Jan 29 10:59:51.657520 kubelet[2409]: >
Jan 29 10:59:52.073734 kubelet[2409]: E0129 10:59:52.073655 2409 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 10:59:52.656376 kubelet[2409]: E0129 10:59:52.656301 2409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.18.182?timeout=10s\": context deadline exceeded - error from a previous attempt: unexpected EOF" interval="200ms"