Jan 29 10:55:48.163320 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 29 10:55:48.163366 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:37:00 -00 2025 Jan 29 10:55:48.163392 kernel: KASLR disabled due to lack of seed Jan 29 10:55:48.163408 kernel: efi: EFI v2.7 by EDK II Jan 29 10:55:48.163424 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98 Jan 29 10:55:48.163439 kernel: secureboot: Secure boot disabled Jan 29 10:55:48.163456 kernel: ACPI: Early table checksum verification disabled Jan 29 10:55:48.163471 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 29 10:55:48.163487 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 29 10:55:48.163502 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 29 10:55:48.163522 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jan 29 10:55:48.163537 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 29 10:55:48.163553 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 29 10:55:48.163568 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 29 10:55:48.163586 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 29 10:55:48.163607 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 29 10:55:48.163624 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 29 10:55:48.163640 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 29 10:55:48.163656 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 29 10:55:48.163672 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 29 10:55:48.163688 kernel: printk: bootconsole [uart0] enabled Jan 29 10:55:48.163704 kernel: NUMA: Failed to initialise from firmware Jan 29 10:55:48.163721 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 29 10:55:48.163738 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jan 29 10:55:48.163754 kernel: Zone ranges: Jan 29 10:55:48.163770 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 29 10:55:48.163791 kernel: DMA32 empty Jan 29 10:55:48.163808 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 29 10:55:48.163824 kernel: Movable zone start for each node Jan 29 10:55:48.163840 kernel: Early memory node ranges Jan 29 10:55:48.163856 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 29 10:55:48.163872 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 29 10:55:48.163889 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 29 10:55:48.163905 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 29 10:55:48.163921 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 29 10:55:48.163938 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 29 10:55:48.163954 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 29 10:55:48.163970 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 29 10:55:48.163991 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Jan 29 10:55:48.164009 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jan 29 10:55:48.164053 kernel: psci: probing for conduit method from ACPI. Jan 29 10:55:48.164073 kernel: psci: PSCIv1.0 detected in firmware. Jan 29 10:55:48.164092 kernel: psci: Using standard PSCI v0.2 function IDs Jan 29 10:55:48.164116 kernel: psci: Trusted OS migration not required Jan 29 10:55:48.164133 kernel: psci: SMC Calling Convention v1.1 Jan 29 10:55:48.164170 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 29 10:55:48.164218 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 29 10:55:48.164239 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 29 10:55:48.164256 kernel: Detected PIPT I-cache on CPU0 Jan 29 10:55:48.164274 kernel: CPU features: detected: GIC system register CPU interface Jan 29 10:55:48.164291 kernel: CPU features: detected: Spectre-v2 Jan 29 10:55:48.164308 kernel: CPU features: detected: Spectre-v3a Jan 29 10:55:48.164325 kernel: CPU features: detected: Spectre-BHB Jan 29 10:55:48.164342 kernel: CPU features: detected: ARM erratum 1742098 Jan 29 10:55:48.164372 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 29 10:55:48.164398 kernel: alternatives: applying boot alternatives Jan 29 10:55:48.164417 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e Jan 29 10:55:48.164436 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 10:55:48.164453 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 10:55:48.164471 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 10:55:48.164488 kernel: Fallback order for Node 0: 0 Jan 29 10:55:48.164505 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jan 29 10:55:48.164522 kernel: Policy zone: Normal Jan 29 10:55:48.164540 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 10:55:48.164557 kernel: software IO TLB: area num 2. Jan 29 10:55:48.166266 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jan 29 10:55:48.166290 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved) Jan 29 10:55:48.166308 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 10:55:48.166325 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 10:55:48.166344 kernel: rcu: RCU event tracing is enabled. Jan 29 10:55:48.166362 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 10:55:48.166380 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 10:55:48.166398 kernel: Tracing variant of Tasks RCU enabled. Jan 29 10:55:48.166415 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 29 10:55:48.166432 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 10:55:48.166449 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 29 10:55:48.166476 kernel: GICv3: 96 SPIs implemented Jan 29 10:55:48.166501 kernel: GICv3: 0 Extended SPIs implemented Jan 29 10:55:48.166518 kernel: Root IRQ handler: gic_handle_irq Jan 29 10:55:48.166535 kernel: GICv3: GICv3 features: 16 PPIs Jan 29 10:55:48.166554 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 29 10:55:48.166573 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 29 10:55:48.166592 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jan 29 10:55:48.166611 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jan 29 10:55:48.166629 kernel: GICv3: using LPI property table @0x00000004000d0000 Jan 29 10:55:48.166647 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 29 10:55:48.166665 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jan 29 10:55:48.166683 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 10:55:48.166705 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 29 10:55:48.166724 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 29 10:55:48.166741 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 29 10:55:48.166759 kernel: Console: colour dummy device 80x25 Jan 29 10:55:48.166778 kernel: printk: console [tty1] enabled Jan 29 10:55:48.166797 kernel: ACPI: Core revision 20230628 Jan 29 10:55:48.166815 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 29 10:55:48.166834 kernel: pid_max: default: 32768 minimum: 301 Jan 29 10:55:48.166852 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 10:55:48.166870 kernel: landlock: Up and running. Jan 29 10:55:48.166892 kernel: SELinux: Initializing. Jan 29 10:55:48.166910 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 10:55:48.166927 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 10:55:48.166945 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 10:55:48.166962 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 10:55:48.166980 kernel: rcu: Hierarchical SRCU implementation. Jan 29 10:55:48.166998 kernel: rcu: Max phase no-delay instances is 400. Jan 29 10:55:48.167015 kernel: Platform MSI: ITS@0x10080000 domain created Jan 29 10:55:48.167037 kernel: PCI/MSI: ITS@0x10080000 domain created Jan 29 10:55:48.167054 kernel: Remapping and enabling EFI services. Jan 29 10:55:48.167071 kernel: smp: Bringing up secondary CPUs ... Jan 29 10:55:48.167089 kernel: Detected PIPT I-cache on CPU1 Jan 29 10:55:48.167106 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 29 10:55:48.167124 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jan 29 10:55:48.167142 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 29 10:55:48.168264 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 10:55:48.168290 kernel: SMP: Total of 2 processors activated. 
Jan 29 10:55:48.168308 kernel: CPU features: detected: 32-bit EL0 Support Jan 29 10:55:48.168334 kernel: CPU features: detected: 32-bit EL1 Support Jan 29 10:55:48.168352 kernel: CPU features: detected: CRC32 instructions Jan 29 10:55:48.168383 kernel: CPU: All CPU(s) started at EL1 Jan 29 10:55:48.168406 kernel: alternatives: applying system-wide alternatives Jan 29 10:55:48.168424 kernel: devtmpfs: initialized Jan 29 10:55:48.168442 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 10:55:48.168461 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 10:55:48.168479 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 10:55:48.168497 kernel: SMBIOS 3.0.0 present. Jan 29 10:55:48.168520 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 29 10:55:48.168538 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 10:55:48.168556 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 29 10:55:48.168574 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 29 10:55:48.168593 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 29 10:55:48.168611 kernel: audit: initializing netlink subsys (disabled) Jan 29 10:55:48.168629 kernel: audit: type=2000 audit(0.220:1): state=initialized audit_enabled=0 res=1 Jan 29 10:55:48.168651 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 10:55:48.168669 kernel: cpuidle: using governor menu Jan 29 10:55:48.168687 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 29 10:55:48.168706 kernel: ASID allocator initialised with 65536 entries Jan 29 10:55:48.168724 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 10:55:48.168742 kernel: Serial: AMBA PL011 UART driver Jan 29 10:55:48.168760 kernel: Modules: 17440 pages in range for non-PLT usage Jan 29 10:55:48.168779 kernel: Modules: 508960 pages in range for PLT usage Jan 29 10:55:48.168797 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 10:55:48.168819 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 10:55:48.168837 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 29 10:55:48.168855 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 29 10:55:48.168874 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 10:55:48.168892 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 10:55:48.168910 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 29 10:55:48.168928 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 29 10:55:48.168946 kernel: ACPI: Added _OSI(Module Device) Jan 29 10:55:48.168964 kernel: ACPI: Added _OSI(Processor Device) Jan 29 10:55:48.168986 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 10:55:48.169005 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 10:55:48.169023 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 10:55:48.169041 kernel: ACPI: Interpreter enabled Jan 29 10:55:48.169059 kernel: ACPI: Using GIC for interrupt routing Jan 29 10:55:48.169077 kernel: ACPI: MCFG table detected, 1 entries Jan 29 10:55:48.169095 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jan 29 10:55:48.170513 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 10:55:48.170789 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Jan 29 10:55:48.171284 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 29 10:55:48.172488 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jan 29 10:55:48.172718 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jan 29 10:55:48.172744 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 29 10:55:48.172763 kernel: acpiphp: Slot [1] registered Jan 29 10:55:48.172782 kernel: acpiphp: Slot [2] registered Jan 29 10:55:48.172800 kernel: acpiphp: Slot [3] registered Jan 29 10:55:48.172827 kernel: acpiphp: Slot [4] registered Jan 29 10:55:48.172846 kernel: acpiphp: Slot [5] registered Jan 29 10:55:48.172865 kernel: acpiphp: Slot [6] registered Jan 29 10:55:48.172883 kernel: acpiphp: Slot [7] registered Jan 29 10:55:48.172901 kernel: acpiphp: Slot [8] registered Jan 29 10:55:48.172919 kernel: acpiphp: Slot [9] registered Jan 29 10:55:48.172937 kernel: acpiphp: Slot [10] registered Jan 29 10:55:48.172955 kernel: acpiphp: Slot [11] registered Jan 29 10:55:48.172974 kernel: acpiphp: Slot [12] registered Jan 29 10:55:48.172992 kernel: acpiphp: Slot [13] registered Jan 29 10:55:48.173014 kernel: acpiphp: Slot [14] registered Jan 29 10:55:48.173032 kernel: acpiphp: Slot [15] registered Jan 29 10:55:48.173051 kernel: acpiphp: Slot [16] registered Jan 29 10:55:48.173069 kernel: acpiphp: Slot [17] registered Jan 29 10:55:48.173088 kernel: acpiphp: Slot [18] registered Jan 29 10:55:48.173107 kernel: acpiphp: Slot [19] registered Jan 29 10:55:48.173125 kernel: acpiphp: Slot [20] registered Jan 29 10:55:48.173143 kernel: acpiphp: Slot [21] registered Jan 29 10:55:48.173205 kernel: acpiphp: Slot [22] registered Jan 29 10:55:48.173232 kernel: acpiphp: Slot [23] registered Jan 29 10:55:48.173251 kernel: acpiphp: Slot [24] registered Jan 29 10:55:48.173269 kernel: acpiphp: Slot [25] registered Jan 29 10:55:48.173287 kernel: acpiphp: Slot [26] registered Jan 29 10:55:48.173305 kernel: acpiphp: Slot [27] registered Jan 29 10:55:48.173323 kernel: acpiphp: Slot [28] registered Jan 29 10:55:48.173341 kernel: acpiphp: Slot [29] registered Jan 29 10:55:48.173359 kernel: acpiphp: Slot [30] registered Jan 29 10:55:48.173377 kernel: acpiphp: Slot [31] registered Jan 29 10:55:48.173394 kernel: PCI host bridge to bus 0000:00 Jan 29 10:55:48.173603 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 29 10:55:48.173816 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 29 10:55:48.174048 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 29 10:55:48.175669 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jan 29 10:55:48.175947 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jan 29 10:55:48.178349 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jan 29 10:55:48.178578 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jan 29 10:55:48.178795 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 29 10:55:48.179001 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jan 29 10:55:48.180308 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 10:55:48.180566 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 29 10:55:48.180775 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jan 29 10:55:48.180985 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Jan 29 10:55:48.181464 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jan 29 10:55:48.181682 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 10:55:48.183479 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jan 29 10:55:48.183701 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jan 29 10:55:48.183908 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jan 29 10:55:48.184107 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jan 29 10:55:48.185397 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jan 29 10:55:48.185604 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 29 10:55:48.185815 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 29 10:55:48.186001 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 29 10:55:48.186026 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 29 10:55:48.186046 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 29 10:55:48.186065 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 29 10:55:48.186083 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 29 10:55:48.186102 kernel: iommu: Default domain type: Translated Jan 29 10:55:48.186128 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 29 10:55:48.186147 kernel: efivars: Registered efivars operations Jan 29 10:55:48.187240 kernel: vgaarb: loaded Jan 29 10:55:48.187261 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 29 10:55:48.187279 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 10:55:48.187298 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 10:55:48.187316 kernel: pnp: PnP ACPI init Jan 29 10:55:48.187566 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 29 10:55:48.187601 kernel: pnp: PnP ACPI: found 1 devices Jan 29 10:55:48.187620 kernel: NET: Registered PF_INET protocol family Jan 29 10:55:48.187639 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 10:55:48.187657 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 10:55:48.187676 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 10:55:48.187694 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 10:55:48.187712 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 10:55:48.187730 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 10:55:48.187749 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 10:55:48.187772 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 10:55:48.187807 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 10:55:48.187826 kernel: PCI: CLS 0 bytes, default 64 Jan 29 10:55:48.187844 kernel: kvm [1]: HYP mode not available Jan 29 10:55:48.187863 kernel: Initialise system trusted keyrings Jan 29 10:55:48.187881 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 10:55:48.187899 kernel: Key type asymmetric registered Jan 29 10:55:48.187917 kernel: Asymmetric key parser 'x509' registered Jan 29 10:55:48.187935 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 29 10:55:48.187959 kernel: io scheduler mq-deadline registered Jan 29 
10:55:48.187978 kernel: io scheduler kyber registered Jan 29 10:55:48.187996 kernel: io scheduler bfq registered Jan 29 10:55:48.189257 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 29 10:55:48.189288 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 10:55:48.189309 kernel: ACPI: button: Power Button [PWRB] Jan 29 10:55:48.189328 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 29 10:55:48.189346 kernel: ACPI: button: Sleep Button [SLPB] Jan 29 10:55:48.189371 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 10:55:48.189391 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 29 10:55:48.189603 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 29 10:55:48.189629 kernel: printk: console [ttyS0] disabled Jan 29 10:55:48.189648 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 29 10:55:48.189667 kernel: printk: console [ttyS0] enabled Jan 29 10:55:48.189698 kernel: printk: bootconsole [uart0] disabled Jan 29 10:55:48.189722 kernel: thunder_xcv, ver 1.0 Jan 29 10:55:48.189741 kernel: thunder_bgx, ver 1.0 Jan 29 10:55:48.189760 kernel: nicpf, ver 1.0 Jan 29 10:55:48.189785 kernel: nicvf, ver 1.0 Jan 29 10:55:48.189999 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 10:55:48.191935 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T10:55:47 UTC (1738148147) Jan 29 10:55:48.191982 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 10:55:48.192002 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jan 29 10:55:48.192021 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 10:55:48.192039 kernel: watchdog: Hard watchdog permanently disabled Jan 29 10:55:48.192066 kernel: NET: Registered PF_INET6 protocol family Jan 29 10:55:48.192085 kernel: Segment Routing with IPv6 Jan 29 10:55:48.192103 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 10:55:48.192121 kernel: NET: Registered PF_PACKET protocol family Jan 29 10:55:48.192140 kernel: Key type dns_resolver registered Jan 29 10:55:48.192176 kernel: registered taskstats version 1 Jan 29 10:55:48.192197 kernel: Loading compiled-in X.509 certificates Jan 29 10:55:48.192215 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f3333311a24aa8c58222f4e98a07eaa1f186ad1a' Jan 29 10:55:48.192234 kernel: Key type .fscrypt registered Jan 29 10:55:48.192252 kernel: Key type fscrypt-provisioning registered Jan 29 10:55:48.192276 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 10:55:48.192294 kernel: ima: Allocated hash algorithm: sha1 Jan 29 10:55:48.192312 kernel: ima: No architecture policies found Jan 29 10:55:48.192330 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 10:55:48.192349 kernel: clk: Disabling unused clocks Jan 29 10:55:48.192367 kernel: Freeing unused kernel memory: 39680K Jan 29 10:55:48.192385 kernel: Run /init as init process Jan 29 10:55:48.192403 kernel: with arguments: Jan 29 10:55:48.192421 kernel: /init Jan 29 10:55:48.192443 kernel: with environment: Jan 29 10:55:48.192461 kernel: HOME=/ Jan 29 10:55:48.192479 kernel: TERM=linux Jan 29 10:55:48.192497 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 10:55:48.192520 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 10:55:48.192543 systemd[1]: Detected virtualization amazon. Jan 29 10:55:48.192563 systemd[1]: Detected architecture arm64. Jan 29 10:55:48.192587 systemd[1]: Running in initrd. Jan 29 10:55:48.192606 systemd[1]: No hostname configured, using default hostname. Jan 29 10:55:48.192625 systemd[1]: Hostname set to . Jan 29 10:55:48.192646 systemd[1]: Initializing machine ID from VM UUID. Jan 29 10:55:48.192665 systemd[1]: Queued start job for default target initrd.target. Jan 29 10:55:48.192685 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 10:55:48.192704 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:55:48.192725 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 10:55:48.192750 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 10:55:48.192770 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 10:55:48.192810 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 10:55:48.192835 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 10:55:48.192857 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 10:55:48.192877 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:55:48.192897 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:55:48.192922 systemd[1]: Reached target paths.target - Path Units. Jan 29 10:55:48.192942 systemd[1]: Reached target slices.target - Slice Units. Jan 29 10:55:48.192961 systemd[1]: Reached target swap.target - Swaps. Jan 29 10:55:48.192981 systemd[1]: Reached target timers.target - Timer Units. Jan 29 10:55:48.193001 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 10:55:48.193021 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 10:55:48.193041 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 10:55:48.193060 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 10:55:48.193080 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 29 10:55:48.193104 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 10:55:48.193124 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 10:55:48.193143 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 10:55:48.193215 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 10:55:48.193239 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 10:55:48.193260 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 10:55:48.193284 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 10:55:48.193306 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 10:55:48.193334 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 10:55:48.193354 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:55:48.193375 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 10:55:48.193395 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:55:48.193415 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 10:55:48.193485 systemd-journald[252]: Collecting audit messages is disabled. Jan 29 10:55:48.193535 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 10:55:48.193558 systemd-journald[252]: Journal started Jan 29 10:55:48.193601 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2479316391bab673421a65cb624068) is 8.0M, max 75.3M, 67.3M free. Jan 29 10:55:48.172411 systemd-modules-load[253]: Inserted module 'overlay' Jan 29 10:55:48.197832 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 10:55:48.205681 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 10:55:48.226487 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 10:55:48.226547 kernel: Bridge firewalling registered Jan 29 10:55:48.213413 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:55:48.224610 systemd-modules-load[253]: Inserted module 'br_netfilter' Jan 29 10:55:48.234508 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:55:48.246498 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 10:55:48.254419 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 10:55:48.270936 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 10:55:48.292454 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 10:55:48.297906 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:55:48.302960 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 10:55:48.313531 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 10:55:48.329541 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:55:48.335834 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:55:48.356576 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 29 10:55:48.376181 dracut-cmdline[285]: dracut-dracut-053 Jan 29 10:55:48.384997 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e Jan 29 10:55:48.432999 systemd-resolved[289]: Positive Trust Anchors: Jan 29 10:55:48.433057 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 10:55:48.433118 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 10:55:48.542194 kernel: SCSI subsystem initialized Jan 29 10:55:48.549188 kernel: Loading iSCSI transport class v2.0-870. Jan 29 10:55:48.562196 kernel: iscsi: registered transport (tcp) Jan 29 10:55:48.584196 kernel: iscsi: registered transport (qla4xxx) Jan 29 10:55:48.584266 kernel: QLogic iSCSI HBA Driver Jan 29 10:55:48.659298 kernel: random: crng init done Jan 29 10:55:48.659412 systemd-resolved[289]: Defaulting to hostname 'linux'. Jan 29 10:55:48.662786 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 10:55:48.667494 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 10:55:48.692299 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 10:55:48.704462 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 10:55:48.736851 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 10:55:48.736926 kernel: device-mapper: uevent: version 1.0.3 Jan 29 10:55:48.736952 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 10:55:48.803217 kernel: raid6: neonx8 gen() 6698 MB/s Jan 29 10:55:48.820187 kernel: raid6: neonx4 gen() 6512 MB/s Jan 29 10:55:48.837186 kernel: raid6: neonx2 gen() 5425 MB/s Jan 29 10:55:48.854186 kernel: raid6: neonx1 gen() 3949 MB/s Jan 29 10:55:48.871186 kernel: raid6: int64x8 gen() 3817 MB/s Jan 29 10:55:48.888188 kernel: raid6: int64x4 gen() 3719 MB/s Jan 29 10:55:48.905186 kernel: raid6: int64x2 gen() 3600 MB/s Jan 29 10:55:48.922944 kernel: raid6: int64x1 gen() 2746 MB/s Jan 29 10:55:48.922979 kernel: raid6: using algorithm neonx8 gen() 6698 MB/s Jan 29 10:55:48.940911 kernel: raid6: .... 
xor() 4788 MB/s, rmw enabled Jan 29 10:55:48.940949 kernel: raid6: using neon recovery algorithm Jan 29 10:55:48.948191 kernel: xor: measuring software checksum speed Jan 29 10:55:48.950270 kernel: 8regs : 10232 MB/sec Jan 29 10:55:48.950310 kernel: 32regs : 11905 MB/sec Jan 29 10:55:48.951428 kernel: arm64_neon : 9300 MB/sec Jan 29 10:55:48.951469 kernel: xor: using function: 32regs (11905 MB/sec) Jan 29 10:55:49.035204 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 10:55:49.053999 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 10:55:49.063479 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:55:49.102796 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jan 29 10:55:49.111530 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 10:55:49.124448 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 10:55:49.170500 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Jan 29 10:55:49.226537 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 10:55:49.236518 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 10:55:49.359058 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:55:49.371947 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 10:55:49.415201 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 10:55:49.420502 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 10:55:49.422788 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:55:49.426959 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 10:55:49.441533 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 10:55:49.469653 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 10:55:49.557179 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 10:55:49.557249 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 29 10:55:49.583884 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 29 10:55:49.584185 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 29 10:55:49.584457 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:8c:bf:e5:36:15 Jan 29 10:55:49.557633 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 10:55:49.557799 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:55:49.560403 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:55:49.564649 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 10:55:49.564778 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:55:49.567993 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:55:49.576405 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:55:49.609211 (udev-worker)[526]: Network interface NamePolicy= disabled on kernel command line. 
Jan 29 10:55:49.631692 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 10:55:49.631757 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 29 10:55:49.640206 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 29 10:55:49.642749 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:55:49.653185 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 10:55:49.653255 kernel: GPT:9289727 != 16777215 Jan 29 10:55:49.654469 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:55:49.665030 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 10:55:49.665094 kernel: GPT:9289727 != 16777215 Jan 29 10:55:49.665119 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 10:55:49.666390 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 10:55:49.688356 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:55:49.780265 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (525) Jan 29 10:55:49.784623 kernel: BTRFS: device fsid b5bc7ecc-f31a-46c7-9582-5efca7819025 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (533) Jan 29 10:55:49.793611 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 29 10:55:49.871897 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 29 10:55:49.888452 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 29 10:55:49.894168 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 29 10:55:49.930492 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 10:55:49.943580 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 10:55:49.956151 disk-uuid[665]: Primary Header is updated. Jan 29 10:55:49.956151 disk-uuid[665]: Secondary Entries is updated. Jan 29 10:55:49.956151 disk-uuid[665]: Secondary Header is updated. Jan 29 10:55:49.969220 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 10:55:50.988536 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 29 10:55:50.988604 disk-uuid[666]: The operation has completed successfully. Jan 29 10:55:51.165196 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 10:55:51.165390 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 10:55:51.219395 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 10:55:51.227462 sh[927]: Success Jan 29 10:55:51.251239 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 10:55:51.360007 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 10:55:51.373386 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 10:55:51.379519 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 10:55:51.417801 kernel: BTRFS info (device dm-0): first mount of filesystem b5bc7ecc-f31a-46c7-9582-5efca7819025 Jan 29 10:55:51.417863 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:55:51.420199 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 10:55:51.420234 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 10:55:51.420844 kernel: BTRFS info (device dm-0): using free space tree Jan 29 10:55:51.500208 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 10:55:51.520808 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 10:55:51.524626 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 10:55:51.538386 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 10:55:51.545466 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 10:55:51.574695 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 10:55:51.574763 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:55:51.576035 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 10:55:51.584367 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 10:55:51.600786 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 10:55:51.604196 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 10:55:51.615097 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 10:55:51.626540 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 10:55:51.734047 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 10:55:51.748501 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 10:55:51.796236 systemd-networkd[1120]: lo: Link UP Jan 29 10:55:51.796258 systemd-networkd[1120]: lo: Gained carrier Jan 29 10:55:51.800897 systemd-networkd[1120]: Enumeration completed Jan 29 10:55:51.801068 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 10:55:51.803395 systemd[1]: Reached target network.target - Network. Jan 29 10:55:51.803506 systemd-networkd[1120]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:55:51.803546 systemd-networkd[1120]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 10:55:51.818214 systemd-networkd[1120]: eth0: Link UP Jan 29 10:55:51.818226 systemd-networkd[1120]: eth0: Gained carrier Jan 29 10:55:51.818243 systemd-networkd[1120]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 10:55:51.836243 systemd-networkd[1120]: eth0: DHCPv4 address 172.31.16.43/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 10:55:51.985819 ignition[1030]: Ignition 2.20.0 Jan 29 10:55:51.986374 ignition[1030]: Stage: fetch-offline Jan 29 10:55:51.986814 ignition[1030]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:55:51.986839 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:55:51.987516 ignition[1030]: Ignition finished successfully Jan 29 10:55:51.996319 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 10:55:52.005466 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 10:55:52.040514 ignition[1129]: Ignition 2.20.0 Jan 29 10:55:52.040535 ignition[1129]: Stage: fetch Jan 29 10:55:52.041079 ignition[1129]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:55:52.041103 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:55:52.041813 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:55:52.063054 ignition[1129]: PUT result: OK Jan 29 10:55:52.066274 ignition[1129]: parsed url from cmdline: "" Jan 29 10:55:52.066405 ignition[1129]: no config URL provided Jan 29 10:55:52.066430 ignition[1129]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 10:55:52.066456 ignition[1129]: no config at "/usr/lib/ignition/user.ign" Jan 29 10:55:52.066512 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:55:52.073677 ignition[1129]: PUT result: OK Jan 29 10:55:52.073897 ignition[1129]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 29 10:55:52.077812 ignition[1129]: GET result: OK Jan 29 10:55:52.077898 ignition[1129]: parsing config with SHA512: 96e285e6bfe6f9e1da473691754167d8b5a8846e9f5364c61dd6e25853845921a26fe1bbe4a3644b715161f57bdb704f8939430c14807bba484073b08039d0fa Jan 29 10:55:52.083287 unknown[1129]: fetched base config from "system" Jan 29 10:55:52.083912 ignition[1129]: fetch: fetch complete Jan 29 10:55:52.083303 unknown[1129]: fetched base config from "system" Jan 29 10:55:52.083924 ignition[1129]: fetch: fetch passed Jan 29 10:55:52.083317 unknown[1129]: fetched user config from "aws" Jan 29 10:55:52.083998 ignition[1129]: Ignition finished successfully Jan 29 10:55:52.094032 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 10:55:52.103544 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 10:55:52.135309 ignition[1135]: Ignition 2.20.0 Jan 29 10:55:52.135787 ignition[1135]: Stage: kargs Jan 29 10:55:52.136419 ignition[1135]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:55:52.136444 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:55:52.136619 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:55:52.141038 ignition[1135]: PUT result: OK Jan 29 10:55:52.148703 ignition[1135]: kargs: kargs passed Jan 29 10:55:52.148803 ignition[1135]: Ignition finished successfully Jan 29 10:55:52.159212 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 10:55:52.170400 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 29 10:55:52.196115 ignition[1141]: Ignition 2.20.0 Jan 29 10:55:52.196150 ignition[1141]: Stage: disks Jan 29 10:55:52.197735 ignition[1141]: no configs at "/usr/lib/ignition/base.d" Jan 29 10:55:52.197763 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:55:52.198422 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:55:52.200932 ignition[1141]: PUT result: OK Jan 29 10:55:52.208421 ignition[1141]: disks: disks passed Jan 29 10:55:52.208511 ignition[1141]: Ignition finished successfully Jan 29 10:55:52.212621 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 10:55:52.217375 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 10:55:52.223361 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 10:55:52.225562 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 10:55:52.227378 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 10:55:52.229197 systemd[1]: Reached target basic.target - Basic System. Jan 29 10:55:52.243577 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 10:55:52.284143 systemd-fsck[1149]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 10:55:52.291083 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 10:55:52.301057 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 10:55:52.401199 kernel: EXT4-fs (nvme0n1p9): mounted filesystem bd47c032-97f4-4b3a-b174-3601de374086 r/w with ordered data mode. Quota mode: none. Jan 29 10:55:52.402482 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 10:55:52.404301 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 10:55:52.422341 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 10:55:52.433431 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 10:55:52.436969 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 10:55:52.437045 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 10:55:52.437091 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 10:55:52.452669 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 10:55:52.463545 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 10:55:52.473082 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1168) Jan 29 10:55:52.477604 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 10:55:52.477670 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:55:52.478847 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 10:55:52.495190 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 10:55:52.497801 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 10:55:52.888430 initrd-setup-root[1192]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 10:55:52.906938 initrd-setup-root[1199]: cut: /sysroot/etc/group: No such file or directory Jan 29 10:55:52.915457 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 10:55:52.923872 initrd-setup-root[1213]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 10:55:53.099445 systemd-networkd[1120]: eth0: Gained IPv6LL Jan 29 10:55:53.198937 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 10:55:53.208440 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 10:55:53.215437 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 10:55:53.234714 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 10:55:53.236917 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 10:55:53.277735 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 10:55:53.281338 ignition[1281]: INFO : Ignition 2.20.0 Jan 29 10:55:53.283310 ignition[1281]: INFO : Stage: mount Jan 29 10:55:53.285375 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:55:53.285375 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:55:53.285375 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:55:53.292129 ignition[1281]: INFO : PUT result: OK Jan 29 10:55:53.296052 ignition[1281]: INFO : mount: mount passed Jan 29 10:55:53.296052 ignition[1281]: INFO : Ignition finished successfully Jan 29 10:55:53.300542 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 10:55:53.310417 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 10:55:53.339460 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 10:55:53.365243 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1293) Jan 29 10:55:53.369174 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 10:55:53.369216 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 29 10:55:53.370405 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 29 10:55:53.376181 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 29 10:55:53.380047 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 10:55:53.414037 ignition[1310]: INFO : Ignition 2.20.0 Jan 29 10:55:53.416698 ignition[1310]: INFO : Stage: files Jan 29 10:55:53.416698 ignition[1310]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:55:53.416698 ignition[1310]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:55:53.416698 ignition[1310]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:55:53.424664 ignition[1310]: INFO : PUT result: OK Jan 29 10:55:53.428522 ignition[1310]: DEBUG : files: compiled without relabeling support, skipping Jan 29 10:55:53.431273 ignition[1310]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 10:55:53.431273 ignition[1310]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 10:55:53.461190 ignition[1310]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 10:55:53.464191 ignition[1310]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 10:55:53.464191 ignition[1310]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 10:55:53.463848 unknown[1310]: wrote ssh authorized keys file for user: core Jan 29 10:55:53.476530 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 29 10:55:53.479723 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 10:55:53.479723 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 10:55:53.486523 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 10:55:53.486523 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 10:55:53.486523 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 10:55:53.486523 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 10:55:53.486523 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 29 10:55:54.011423 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 29 10:55:54.369558 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 10:55:54.373381 ignition[1310]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 10:55:54.373381 ignition[1310]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 10:55:54.380868 ignition[1310]: INFO : files: files passed Jan 29 10:55:54.380868 ignition[1310]: INFO : Ignition finished successfully Jan 29 10:55:54.384484 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 29 10:55:54.402591 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 10:55:54.409258 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 10:55:54.418737 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 10:55:54.418956 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 10:55:54.449522 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:55:54.449522 initrd-setup-root-after-ignition[1339]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:55:54.457388 initrd-setup-root-after-ignition[1343]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 10:55:54.463008 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 10:55:54.468402 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 10:55:54.481533 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 10:55:54.527130 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 10:55:54.529445 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 10:55:54.535434 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 10:55:54.537426 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 10:55:54.539310 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 10:55:54.554915 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 10:55:54.582220 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 10:55:54.602526 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 10:55:54.625922 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 10:55:54.630536 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:55:54.634839 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 10:55:54.636886 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 10:55:54.637133 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 10:55:54.644374 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 10:55:54.646355 systemd[1]: Stopped target basic.target - Basic System. Jan 29 10:55:54.648123 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 10:55:54.650232 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 10:55:54.652459 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 10:55:54.654661 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 10:55:54.656647 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 10:55:54.659011 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 10:55:54.661006 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 10:55:54.662967 systemd[1]: Stopped target swap.target - Swaps. Jan 29 10:55:54.664829 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 10:55:54.665066 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 10:55:54.683266 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:55:54.691790 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:55:54.694005 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 10:55:54.698328 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 10:55:54.700660 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 10:55:54.700879 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 10:55:54.703264 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 10:55:54.703508 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 10:55:54.715422 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 10:55:54.715633 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 10:55:54.735653 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 10:55:54.742528 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 10:55:54.747893 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 10:55:54.750122 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:55:54.756630 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 10:55:54.757503 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 10:55:54.772523 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 10:55:54.772711 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 10:55:54.796428 ignition[1363]: INFO : Ignition 2.20.0 Jan 29 10:55:54.800020 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 10:55:54.803132 ignition[1363]: INFO : Stage: umount Jan 29 10:55:54.803132 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 10:55:54.803132 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 29 10:55:54.803132 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 29 10:55:54.816863 ignition[1363]: INFO : PUT result: OK Jan 29 10:55:54.814912 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 10:55:54.820093 ignition[1363]: INFO : umount: umount passed Jan 29 10:55:54.820093 ignition[1363]: INFO : Ignition finished successfully Jan 29 10:55:54.815113 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 10:55:54.825867 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 10:55:54.827722 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 10:55:54.830583 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 10:55:54.830737 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 10:55:54.836786 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 10:55:54.836890 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 10:55:54.838832 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 10:55:54.839480 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 10:55:54.842340 systemd[1]: Stopped target network.target - Network. Jan 29 10:55:54.843928 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 10:55:54.844015 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 10:55:54.846178 systemd[1]: Stopped target paths.target - Path Units. Jan 29 10:55:54.847771 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 10:55:54.861679 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:55:54.864011 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 10:55:54.865655 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 10:55:54.867443 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 10:55:54.867524 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 10:55:54.869336 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 10:55:54.869402 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 10:55:54.871277 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 10:55:54.871360 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 10:55:54.873208 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 10:55:54.873283 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 10:55:54.875242 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 10:55:54.875318 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 10:55:54.877508 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 10:55:54.879490 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 10:55:54.919302 systemd-networkd[1120]: eth0: DHCPv6 lease lost Jan 29 10:55:54.919435 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 10:55:54.919676 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 10:55:54.924071 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 10:55:54.924413 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 10:55:54.943707 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 10:55:54.943819 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 10:55:54.962453 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 10:55:54.962911 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 10:55:54.964116 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 10:55:54.969843 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 10:55:54.969951 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:55:54.973595 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 10:55:54.973707 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 10:55:54.985001 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 10:55:54.985113 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:55:54.989291 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:55:55.015081 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 10:55:55.015346 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 10:55:55.023002 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 10:55:55.024756 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 29 10:55:55.028150 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 10:55:55.029088 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 10:55:55.031115 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 10:55:55.031204 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 10:55:55.033809 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 10:55:55.034178 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 10:55:55.037285 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 10:55:55.037368 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 10:55:55.051549 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 10:55:55.051655 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:55:55.070519 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 10:55:55.075292 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 10:55:55.075409 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 10:55:55.081309 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 10:55:55.081417 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 10:55:55.086348 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 10:55:55.086445 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:55:55.091706 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 10:55:55.091817 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:55:55.109749 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 10:55:55.109933 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 10:55:55.113901 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 10:55:55.128457 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 10:55:55.148115 systemd[1]: Switching root. Jan 29 10:55:55.208958 systemd-journald[252]: Journal stopped Jan 29 10:55:57.646734 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Jan 29 10:55:57.646874 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 10:55:57.646916 kernel: SELinux: policy capability open_perms=1 Jan 29 10:55:57.646949 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 10:55:57.646979 kernel: SELinux: policy capability always_check_network=0 Jan 29 10:55:57.647008 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 10:55:57.647038 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 10:55:57.647076 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 10:55:57.647106 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 10:55:57.647137 kernel: audit: type=1403 audit(1738148155.786:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 10:55:57.647203 systemd[1]: Successfully loaded SELinux policy in 58.953ms. Jan 29 10:55:57.647242 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.195ms. 
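The SELinux policy load is recorded twice: by the kernel audit record with an epoch timestamp, audit(1738148155.786:2), and by the journal's wall-clock prefixes. A quick cross-check, assuming the journal timestamps are UTC, is sketched below; the converted value lands between "Switching root" (10:55:55.20) and the journald restart, as expected.

    from datetime import datetime, timezone

    # Convert the epoch timestamp carried by the kernel audit record above
    # and compare it with the journal's own wall-clock prefixes.
    audit_epoch = 1738148155.786
    print(datetime.fromtimestamp(audit_epoch, tz=timezone.utc).isoformat())
    # -> 2025-01-29T10:55:55.786000+00:00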
Jan 29 10:55:57.647277 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 10:55:57.647309 systemd[1]: Detected virtualization amazon. Jan 29 10:55:57.647338 systemd[1]: Detected architecture arm64. Jan 29 10:55:57.647365 systemd[1]: Detected first boot. Jan 29 10:55:57.647398 systemd[1]: Initializing machine ID from VM UUID. Jan 29 10:55:57.647434 zram_generator::config[1407]: No configuration found. Jan 29 10:55:57.647468 systemd[1]: Populated /etc with preset unit settings. Jan 29 10:55:57.647510 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 10:55:57.647544 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 10:55:57.647575 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 10:55:57.647607 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 10:55:57.647638 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 10:55:57.647672 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 10:55:57.647703 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 10:55:57.647735 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 10:55:57.647774 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 10:55:57.647805 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 10:55:57.647842 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 10:55:57.647872 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 10:55:57.647903 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:55:57.647934 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 10:55:57.647967 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 10:55:57.647998 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 10:55:57.648027 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 10:55:57.648058 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 10:55:57.648089 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:55:57.648118 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 10:55:57.648146 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 10:55:57.650274 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 10:55:57.650323 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 10:55:57.650428 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 10:55:57.651029 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 10:55:57.651598 systemd[1]: Reached target slices.target - Slice Units. Jan 29 10:55:57.651635 systemd[1]: Reached target swap.target - Swaps. 
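systemd reports "Detected virtualization amazon", "Detected first boot", and a machine ID initialized from the VM UUID. A rough userspace approximation of the virtualization check is to read the DMI identity strings under /sys/class/dmi/id, as sketched below; systemd-detect-virt consults more sources (CPUID, device tree, hypervisor nodes), so treat this only as a simplified proxy.

    from pathlib import Path

    # Simplified sketch: read DMI identity strings to guess the hypervisor.
    # This is only a rough proxy for systemd's own detection logic, and the
    # files may be absent or restricted on some platforms.
    def read_dmi(field: str) -> str:
        try:
            return (Path("/sys/class/dmi/id") / field).read_text().strip()
        except OSError:
            return ""

    vendor = read_dmi("sys_vendor")      # e.g. "Amazon EC2" on Nitro guests
    product = read_dmi("product_name")   # e.g. the instance type
    print(f"sys_vendor={vendor!r} product_name={product!r}")
    if "Amazon" in vendor:
        print("looks like an EC2 guest")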
Jan 29 10:55:57.651667 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 10:55:57.651697 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 10:55:57.651728 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 10:55:57.651763 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 10:55:57.651794 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 10:55:57.651823 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 10:55:57.651851 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 10:55:57.651881 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 10:55:57.651912 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 10:55:57.651944 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 10:55:57.651974 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 10:55:57.652003 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 10:55:57.652038 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 10:55:57.652067 systemd[1]: Reached target machines.target - Containers. Jan 29 10:55:57.652098 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 10:55:57.652129 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:55:57.654240 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 10:55:57.654299 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 10:55:57.654331 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 10:55:57.654360 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 10:55:57.654397 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 10:55:57.654431 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 10:55:57.654461 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 10:55:57.654490 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 10:55:57.654523 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 10:55:57.654551 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 10:55:57.654580 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 10:55:57.654609 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 10:55:57.654637 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 10:55:57.655795 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 10:55:57.655861 kernel: loop: module loaded Jan 29 10:55:57.655892 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 10:55:57.655924 kernel: fuse: init (API version 7.39) Jan 29 10:55:57.655952 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
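Alongside the modprobe@*.service units started here, the kernel reports "loop: module loaded" and "fuse: init (API version 7.39)". A small sketch that checks which of those modules are visible from userspace via /proc/modules follows; drivers built into the kernel never appear there, so absence is not proof they are unavailable.

    # Check /proc/modules for a few of the modules the modprobe@*.service
    # units above load. Built-in drivers do not show up in /proc/modules.
    wanted = {"loop", "fuse", "dm_mod", "efi_pstore", "configfs"}

    loaded = set()
    with open("/proc/modules") as f:
        for line in f:
            loaded.add(line.split()[0])

    for name in sorted(wanted):
        state = "loaded" if name in loaded else "not listed (possibly built-in)"
        print(f"{name}: {state}")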
Jan 29 10:55:57.655983 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 10:55:57.656012 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 10:55:57.656042 systemd[1]: Stopped verity-setup.service. Jan 29 10:55:57.656073 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 10:55:57.656109 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 10:55:57.656139 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 10:55:57.661305 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 10:55:57.661354 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 10:55:57.661384 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 10:55:57.661414 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:55:57.661453 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 10:55:57.661484 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 10:55:57.661514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 10:55:57.661543 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 10:55:57.661573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 10:55:57.661603 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 10:55:57.661649 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 10:55:57.661685 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 10:55:57.661722 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 10:55:57.661752 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 10:55:57.661786 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 10:55:57.661815 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 10:55:57.661844 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 10:55:57.661877 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 10:55:57.661953 systemd-journald[1485]: Collecting audit messages is disabled. Jan 29 10:55:57.662010 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 10:55:57.662040 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 10:55:57.662071 systemd-journald[1485]: Journal started Jan 29 10:55:57.662117 systemd-journald[1485]: Runtime Journal (/run/log/journal/ec2479316391bab673421a65cb624068) is 8.0M, max 75.3M, 67.3M free. Jan 29 10:55:57.050072 systemd[1]: Queued start job for default target multi-user.target. Jan 29 10:55:57.117464 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 29 10:55:57.118251 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 10:55:57.673306 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 10:55:57.676909 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 10:55:57.688013 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 10:55:57.693392 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 29 10:55:57.705079 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 10:55:57.708205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:55:57.726181 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 10:55:57.726281 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 10:55:57.736220 kernel: ACPI: bus type drm_connector registered Jan 29 10:55:57.749185 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 10:55:57.749276 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 10:55:57.761361 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 10:55:57.769702 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 10:55:57.778356 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 10:55:57.785225 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 10:55:57.787621 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 10:55:57.787972 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 10:55:57.790356 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 10:55:57.793517 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 10:55:57.796415 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 10:55:57.827293 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 10:55:57.870847 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 10:55:57.888669 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 10:55:57.893174 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 10:55:57.905131 systemd-journald[1485]: Time spent on flushing to /var/log/journal/ec2479316391bab673421a65cb624068 is 39.402ms for 893 entries. Jan 29 10:55:57.905131 systemd-journald[1485]: System Journal (/var/log/journal/ec2479316391bab673421a65cb624068) is 8.0M, max 195.6M, 187.6M free. Jan 29 10:55:57.956032 systemd-journald[1485]: Received client request to flush runtime journal. Jan 29 10:55:57.956109 kernel: loop0: detected capacity change from 0 to 113536 Jan 29 10:55:57.932762 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 10:55:57.964795 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 10:55:57.988728 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Jan 29 10:55:57.988761 systemd-tmpfiles[1512]: ACLs are not supported, ignoring. Jan 29 10:55:57.996783 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:55:58.015749 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 10:55:58.023282 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 10:55:58.028493 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
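The journal lines above show the runtime journal being flushed into the persistent system journal under /var/log/journal, each with its own size cap. Below is a small sketch that totals the on-disk size of that journal directory; the machine-id path is the one quoted in the log, and reading it normally requires root or systemd-journal group membership.

    import os

    # Sum the size of journal files under the persistent journal directory
    # reported above, for comparison with the limits journald logs.
    journal_dir = "/var/log/journal/ec2479316391bab673421a65cb624068"

    total = 0
    for root, _dirs, files in os.walk(journal_dir):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))

    print(f"{journal_dir}: {total / (1024 * 1024):.1f} MiB")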
Jan 29 10:55:58.039658 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 10:55:58.044073 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 10:55:58.062474 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 10:55:58.074418 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 10:55:58.099775 kernel: loop1: detected capacity change from 0 to 189592 Jan 29 10:55:58.120092 udevadm[1555]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 10:55:58.159263 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 10:55:58.172836 kernel: loop2: detected capacity change from 0 to 116808 Jan 29 10:55:58.170067 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 10:55:58.212840 systemd-tmpfiles[1559]: ACLs are not supported, ignoring. Jan 29 10:55:58.213462 systemd-tmpfiles[1559]: ACLs are not supported, ignoring. Jan 29 10:55:58.223846 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 10:55:58.281684 kernel: loop3: detected capacity change from 0 to 53784 Jan 29 10:55:58.401202 kernel: loop4: detected capacity change from 0 to 113536 Jan 29 10:55:58.423464 kernel: loop5: detected capacity change from 0 to 189592 Jan 29 10:55:58.464196 kernel: loop6: detected capacity change from 0 to 116808 Jan 29 10:55:58.478207 kernel: loop7: detected capacity change from 0 to 53784 Jan 29 10:55:58.485556 (sd-merge)[1564]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 29 10:55:58.486585 (sd-merge)[1564]: Merged extensions into '/usr'. Jan 29 10:55:58.497952 systemd[1]: Reloading requested from client PID 1511 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 10:55:58.497980 systemd[1]: Reloading... Jan 29 10:55:58.654190 zram_generator::config[1590]: No configuration found. Jan 29 10:55:59.008970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:55:59.118396 systemd[1]: Reloading finished in 619 ms. Jan 29 10:55:59.157259 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 10:55:59.161281 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 10:55:59.172554 systemd[1]: Starting ensure-sysext.service... Jan 29 10:55:59.177504 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 10:55:59.185838 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:55:59.208467 systemd[1]: Reloading requested from client PID 1642 ('systemctl') (unit ensure-sysext.service)... Jan 29 10:55:59.208503 systemd[1]: Reloading... Jan 29 10:55:59.281728 systemd-tmpfiles[1643]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 10:55:59.282423 systemd-tmpfiles[1643]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 10:55:59.286576 systemd-tmpfiles[1643]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 10:55:59.287138 systemd-tmpfiles[1643]: ACLs are not supported, ignoring. 
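The (sd-merge) lines show systemd-sysext combining the loop-mounted extension images 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' into an overlay on /usr, after which systemd reloads its units. The sketch below lists candidate extension images from the usual sysext search directories; the directory list follows the systemd-sysext documentation and may differ per distribution, and on this host the kubernetes entry is the symlink Ignition created under /etc/extensions.

    from pathlib import Path

    # Enumerate candidate system-extension images the way systemd-sysext
    # discovers them: raw images or directories in its search path.
    # Directory list per the systemd-sysext documentation; adjust as needed.
    SEARCH_DIRS = [
        "/etc/extensions",
        "/run/extensions",
        "/var/lib/extensions",
        "/usr/lib/extensions",
    ]

    for d in SEARCH_DIRS:
        p = Path(d)
        if not p.is_dir():
            continue
        for entry in sorted(p.iterdir()):
            kind = "dir" if entry.is_dir() else "image"
            target = f" -> {entry.resolve()}" if entry.is_symlink() else ""
            print(f"{d}/{entry.name} ({kind}){target}")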
Jan 29 10:55:59.287993 systemd-tmpfiles[1643]: ACLs are not supported, ignoring. Jan 29 10:55:59.305482 systemd-tmpfiles[1643]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 10:55:59.305504 systemd-tmpfiles[1643]: Skipping /boot Jan 29 10:55:59.323613 systemd-udevd[1644]: Using default interface naming scheme 'v255'. Jan 29 10:55:59.344876 systemd-tmpfiles[1643]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 10:55:59.344902 systemd-tmpfiles[1643]: Skipping /boot Jan 29 10:55:59.427198 zram_generator::config[1671]: No configuration found. Jan 29 10:55:59.577048 ldconfig[1504]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 10:55:59.618363 (udev-worker)[1692]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:55:59.820203 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:55:59.911333 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1733) Jan 29 10:55:59.954682 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 10:55:59.954985 systemd[1]: Reloading finished in 745 ms. Jan 29 10:55:59.982016 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 10:55:59.986130 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 10:56:00.003285 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:56:00.051583 systemd[1]: Finished ensure-sysext.service. Jan 29 10:56:00.072795 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 10:56:00.083475 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 10:56:00.085942 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 10:56:00.093517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 10:56:00.102954 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 10:56:00.110289 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 10:56:00.116494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 10:56:00.118642 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 10:56:00.123476 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 10:56:00.133097 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 10:56:00.143534 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 10:56:00.145639 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 10:56:00.152517 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 10:56:00.160740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:56:00.195421 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 10:56:00.211198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 29 10:56:00.211585 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 10:56:00.274015 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 10:56:00.288858 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 10:56:00.289218 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 10:56:00.296916 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 10:56:00.300953 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 10:56:00.301338 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 10:56:00.304335 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 10:56:00.304688 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 10:56:00.307637 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 10:56:00.359078 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 29 10:56:00.372481 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 10:56:00.377366 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 10:56:00.403510 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 10:56:00.416880 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 10:56:00.419517 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 10:56:00.431718 augenrules[1881]: No rules Jan 29 10:56:00.433906 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 10:56:00.435382 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 10:56:00.455269 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 10:56:00.467650 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 10:56:00.470346 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 10:56:00.487074 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 10:56:00.498381 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 10:56:00.518182 lvm[1889]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 10:56:00.560061 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 10:56:00.564602 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:56:00.578469 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 10:56:00.598192 lvm[1900]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 10:56:00.615614 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:56:00.650919 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
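The OEM partition is picked up by label here as dev-disk-by\x2dlabel-OEM.device and checked by systemd-fsck@. The escaped unit name follows systemd's path-escaping rules (the leading "/" is dropped, "-" becomes "\x2d", then "/" becomes "-"). Below is a small sketch of that mapping plus resolution of the by-label symlink; the escape helper covers only this simple case, and the partition named in the comment is a typical Flatcar layout, not something read from this log.

    import os

    # Show how /dev/disk/by-label/OEM maps to the escaped unit name seen in
    # the log, then resolve the symlink to the underlying block device.
    def systemd_escape_path(path: str) -> str:
        # Minimal sketch for this case only; the real systemd-escape
        # handles many more characters.
        trimmed = path.lstrip("/")
        return trimmed.replace("-", "\\x2d").replace("/", "-")

    dev_path = "/dev/disk/by-label/OEM"
    print(systemd_escape_path(dev_path) + ".device")  # dev-disk-by\x2dlabel-OEM.device
    print(os.path.realpath(dev_path))                 # commonly /dev/nvme0n1p6 on Flatcar EC2 images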
Jan 29 10:56:00.664867 systemd-networkd[1835]: lo: Link UP Jan 29 10:56:00.664889 systemd-networkd[1835]: lo: Gained carrier Jan 29 10:56:00.667701 systemd-networkd[1835]: Enumeration completed Jan 29 10:56:00.667876 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 10:56:00.672800 systemd-networkd[1835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:56:00.672822 systemd-networkd[1835]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 10:56:00.674840 systemd-networkd[1835]: eth0: Link UP Jan 29 10:56:00.675251 systemd-networkd[1835]: eth0: Gained carrier Jan 29 10:56:00.675285 systemd-networkd[1835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:56:00.678482 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 10:56:00.685273 systemd-networkd[1835]: eth0: DHCPv4 address 172.31.16.43/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 29 10:56:00.709286 systemd-resolved[1837]: Positive Trust Anchors: Jan 29 10:56:00.709347 systemd-resolved[1837]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 10:56:00.709410 systemd-resolved[1837]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 10:56:00.717672 systemd-resolved[1837]: Defaulting to hostname 'linux'. Jan 29 10:56:00.720729 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 10:56:00.723086 systemd[1]: Reached target network.target - Network. Jan 29 10:56:00.724712 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 10:56:00.726819 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 10:56:00.728936 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 10:56:00.731283 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 10:56:00.733926 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 10:56:00.736147 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 10:56:00.738469 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 10:56:00.740769 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 10:56:00.740837 systemd[1]: Reached target paths.target - Path Units. Jan 29 10:56:00.742613 systemd[1]: Reached target timers.target - Timer Units. Jan 29 10:56:00.746264 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 10:56:00.750877 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 10:56:00.764407 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
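systemd-networkd brings eth0 up with a DHCPv4 lease of 172.31.16.43/20 and gateway 172.31.16.1, while systemd-resolved sets its trust anchors and falls back to the hostname 'linux' until a real one is set. A quick consistency check on the lease with the standard ipaddress module:

    import ipaddress

    # Sanity-check the DHCPv4 lease logged above: the address and the
    # gateway should sit in the same /20 subnet.
    iface = ipaddress.ip_interface("172.31.16.43/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(gateway in iface.network)     # True
    print(iface.network.num_addresses)  # 4096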
Jan 29 10:56:00.767474 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 10:56:00.769745 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 10:56:00.771516 systemd[1]: Reached target basic.target - Basic System. Jan 29 10:56:00.773271 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 10:56:00.773324 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 10:56:00.777355 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 10:56:00.786148 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 10:56:00.791320 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 10:56:00.797316 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 10:56:00.805567 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 10:56:00.807516 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 10:56:00.822299 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 10:56:00.829526 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 10:56:00.840716 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 10:56:00.847510 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 10:56:00.854521 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 10:56:00.870811 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 10:56:00.873596 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 10:56:00.876088 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 10:56:00.881354 jq[1911]: false Jan 29 10:56:00.890461 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 10:56:00.911893 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 10:56:00.919871 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 10:56:00.922745 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 10:56:00.977973 jq[1922]: true Jan 29 10:56:00.996180 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 10:56:00.995849 dbus-daemon[1910]: [system] SELinux support is enabled Jan 29 10:56:01.003276 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 10:56:01.024333 dbus-daemon[1910]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1835 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 10:56:01.005261 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 10:56:01.015971 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 29 10:56:01.028927 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 10:56:01.016023 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 10:56:01.018439 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 10:56:01.018474 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 10:56:01.055348 (ntainerd)[1934]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 10:56:01.064530 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 10:56:01.067746 extend-filesystems[1912]: Found loop4 Jan 29 10:56:01.067746 extend-filesystems[1912]: Found loop5 Jan 29 10:56:01.067746 extend-filesystems[1912]: Found loop6 Jan 29 10:56:01.067746 extend-filesystems[1912]: Found loop7 Jan 29 10:56:01.067746 extend-filesystems[1912]: Found nvme0n1 Jan 29 10:56:01.067746 extend-filesystems[1912]: Found nvme0n1p1 Jan 29 10:56:01.067746 extend-filesystems[1912]: Found nvme0n1p2 Jan 29 10:56:01.067746 extend-filesystems[1912]: Found nvme0n1p3 Jan 29 10:56:01.067746 extend-filesystems[1912]: Found usr Jan 29 10:56:01.067746 extend-filesystems[1912]: Found nvme0n1p4 Jan 29 10:56:01.067746 extend-filesystems[1912]: Found nvme0n1p6 Jan 29 10:56:01.067746 extend-filesystems[1912]: Found nvme0n1p7 Jan 29 10:56:01.067746 extend-filesystems[1912]: Found nvme0n1p9 Jan 29 10:56:01.067746 extend-filesystems[1912]: Checking size of /dev/nvme0n1p9 Jan 29 10:56:01.188350 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 29 10:56:01.058727 ntpd[1914]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:04:55 UTC 2025 (1): Starting Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:04:55 UTC 2025 (1): Starting Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: ---------------------------------------------------- Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: ntp-4 is maintained by Network Time Foundation, Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: corporation. 
Support and training for ntp-4 are Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: available at https://www.nwtime.org/support Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: ---------------------------------------------------- Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: proto: precision = 0.096 usec (-23) Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: basedate set to 2025-01-17 Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: gps base set to 2025-01-19 (week 2350) Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: Listen normally on 3 eth0 172.31.16.43:123 Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: Listen normally on 4 lo [::1]:123 Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: bind(21) AF_INET6 fe80::48c:bfff:fee5:3615%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: unable to create socket on eth0 (5) for fe80::48c:bfff:fee5:3615%2#123 Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: failed to init interface for address fe80::48c:bfff:fee5:3615%2 Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: Listening on routing socket on fd #21 for interface updates Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:56:01.188787 ntpd[1914]: 29 Jan 10:56:01 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:56:01.067134 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 10:56:01.198635 update_engine[1919]: I20250129 10:56:01.090945 1919 main.cc:92] Flatcar Update Engine starting Jan 29 10:56:01.198635 update_engine[1919]: I20250129 10:56:01.099010 1919 update_check_scheduler.cc:74] Next update check in 6m42s Jan 29 10:56:01.223694 extend-filesystems[1912]: Resized partition /dev/nvme0n1p9 Jan 29 10:56:01.228063 jq[1940]: true Jan 29 10:56:01.058775 ntpd[1914]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 10:56:01.069290 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 10:56:01.237658 extend-filesystems[1957]: resize2fs 1.47.1 (20-May-2024) Jan 29 10:56:01.058796 ntpd[1914]: ---------------------------------------------------- Jan 29 10:56:01.100710 systemd[1]: Started update-engine.service - Update Engine. Jan 29 10:56:01.058815 ntpd[1914]: ntp-4 is maintained by Network Time Foundation, Jan 29 10:56:01.151849 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 10:56:01.058835 ntpd[1914]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 10:56:01.186641 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 10:56:01.058853 ntpd[1914]: corporation. Support and training for ntp-4 are Jan 29 10:56:01.231750 systemd[1]: Finished setup-oem.service - Setup OEM. 
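In the ntpd startup banner above, the daemon measures a clock-reading precision of about 0.1 µs (logged as 0.096 usec with exponent -23, i.e. roughly 2^-23 s) and opens its listening sockets; the kernel still flags the clock as unsynchronized until the first exchanges complete. For illustration, a minimal SNTP-style query is sketched below; the pool hostname is an arbitrary public example, and this is not how ntpd itself operates internally.

    import socket
    import struct
    import time

    # Minimal SNTP-style query (RFC 4330 framing): one UDP request, read the
    # server's transmit timestamp, and compare it with the local clock.
    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    def sntp_time(server: str = "pool.ntp.org", timeout: float = 5.0) -> float:
        packet = bytearray(48)
        packet[0] = 0x1B  # LI=0, VN=3, Mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(packet, (server, 123))
            data, _addr = sock.recvfrom(48)
        secs, frac = struct.unpack("!II", data[40:48])  # transmit timestamp
        return secs - NTP_EPOCH_OFFSET + frac / 2**32

    if __name__ == "__main__":
        remote = sntp_time()
        print(f"server time : {remote:.6f}")
        print(f"local time  : {time.time():.6f}")
        print(f"offset      : {remote - time.time():+.6f} s")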
Jan 29 10:56:01.058872 ntpd[1914]: available at https://www.nwtime.org/support Jan 29 10:56:01.235247 systemd-logind[1918]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 10:56:01.058890 ntpd[1914]: ---------------------------------------------------- Jan 29 10:56:01.235282 systemd-logind[1918]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 29 10:56:01.067890 ntpd[1914]: proto: precision = 0.096 usec (-23) Jan 29 10:56:01.239489 systemd-logind[1918]: New seat seat0. Jan 29 10:56:01.072553 ntpd[1914]: basedate set to 2025-01-17 Jan 29 10:56:01.241626 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 10:56:01.072587 ntpd[1914]: gps base set to 2025-01-19 (week 2350) Jan 29 10:56:01.082234 ntpd[1914]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 10:56:01.082324 ntpd[1914]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 10:56:01.091398 ntpd[1914]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 10:56:01.091469 ntpd[1914]: Listen normally on 3 eth0 172.31.16.43:123 Jan 29 10:56:01.091544 ntpd[1914]: Listen normally on 4 lo [::1]:123 Jan 29 10:56:01.091630 ntpd[1914]: bind(21) AF_INET6 fe80::48c:bfff:fee5:3615%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 10:56:01.091677 ntpd[1914]: unable to create socket on eth0 (5) for fe80::48c:bfff:fee5:3615%2#123 Jan 29 10:56:01.091709 ntpd[1914]: failed to init interface for address fe80::48c:bfff:fee5:3615%2 Jan 29 10:56:01.091771 ntpd[1914]: Listening on routing socket on fd #21 for interface updates Jan 29 10:56:01.136727 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:56:01.136787 ntpd[1914]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 10:56:01.302742 coreos-metadata[1909]: Jan 29 10:56:01.300 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 10:56:01.307377 coreos-metadata[1909]: Jan 29 10:56:01.304 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 29 10:56:01.307377 coreos-metadata[1909]: Jan 29 10:56:01.305 INFO Fetch successful Jan 29 10:56:01.307377 coreos-metadata[1909]: Jan 29 10:56:01.305 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 29 10:56:01.309481 coreos-metadata[1909]: Jan 29 10:56:01.307 INFO Fetch successful Jan 29 10:56:01.309481 coreos-metadata[1909]: Jan 29 10:56:01.307 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 29 10:56:01.309481 coreos-metadata[1909]: Jan 29 10:56:01.308 INFO Fetch successful Jan 29 10:56:01.309481 coreos-metadata[1909]: Jan 29 10:56:01.308 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 29 10:56:01.309481 coreos-metadata[1909]: Jan 29 10:56:01.309 INFO Fetch successful Jan 29 10:56:01.309481 coreos-metadata[1909]: Jan 29 10:56:01.309 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 29 10:56:01.316331 coreos-metadata[1909]: Jan 29 10:56:01.310 INFO Fetch failed with 404: resource not found Jan 29 10:56:01.316331 coreos-metadata[1909]: Jan 29 10:56:01.310 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 29 10:56:01.317543 coreos-metadata[1909]: Jan 29 10:56:01.317 INFO Fetch successful Jan 29 10:56:01.317543 coreos-metadata[1909]: Jan 29 10:56:01.317 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 29 10:56:01.319502 coreos-metadata[1909]: Jan 29 10:56:01.319 INFO Fetch 
successful Jan 29 10:56:01.319502 coreos-metadata[1909]: Jan 29 10:56:01.319 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 29 10:56:01.320469 coreos-metadata[1909]: Jan 29 10:56:01.320 INFO Fetch successful Jan 29 10:56:01.320469 coreos-metadata[1909]: Jan 29 10:56:01.320 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 29 10:56:01.323020 coreos-metadata[1909]: Jan 29 10:56:01.322 INFO Fetch successful Jan 29 10:56:01.323020 coreos-metadata[1909]: Jan 29 10:56:01.323 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 29 10:56:01.328318 coreos-metadata[1909]: Jan 29 10:56:01.323 INFO Fetch successful Jan 29 10:56:01.330227 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 29 10:56:01.358949 extend-filesystems[1957]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 29 10:56:01.358949 extend-filesystems[1957]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 10:56:01.358949 extend-filesystems[1957]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 29 10:56:01.371297 extend-filesystems[1912]: Resized filesystem in /dev/nvme0n1p9 Jan 29 10:56:01.366879 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 10:56:01.373275 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 10:56:01.431879 bash[1983]: Updated "/home/core/.ssh/authorized_keys" Jan 29 10:56:01.428149 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 10:56:01.439330 systemd[1]: Starting sshkeys.service... Jan 29 10:56:01.445533 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 10:56:01.449905 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 10:56:01.464734 locksmithd[1953]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 10:56:01.479505 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 10:56:01.500066 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 10:56:01.502870 dbus-daemon[1910]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1944 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 10:56:01.512981 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 10:56:01.515965 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 10:56:01.542822 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 10:56:01.569233 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1692) Jan 29 10:56:01.635337 polkitd[2005]: Started polkitd version 121 Jan 29 10:56:01.667485 polkitd[2005]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 10:56:01.667613 polkitd[2005]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 10:56:01.670612 polkitd[2005]: Finished loading, compiling and executing 2 rules Jan 29 10:56:01.673968 dbus-daemon[1910]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 10:56:01.674263 systemd[1]: Started polkit.service - Authorization Manager. 
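Both Ignition earlier in the boot and coreos-metadata here use the IMDSv2 flow against 169.254.169.254: a PUT to /latest/api/token to obtain a session token, then GETs carrying that token in a header (the 2021-01-03 metadata paths appear in the log). A standard-library sketch of that exchange follows; the token TTL and the selection of paths are illustrative choices.

    import urllib.request

    # IMDSv2-style exchange as seen in the Ignition and coreos-metadata logs:
    # fetch a session token with PUT, then pass it as a header on metadata GETs.
    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds: int = 900) -> str:
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        req = urllib.request.Request(
            f"{IMDS}/{path.lstrip('/')}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        token = imds_token()
        for path in ("2021-01-03/meta-data/instance-id",
                     "2021-01-03/meta-data/local-ipv4",
                     "2021-01-03/meta-data/placement/availability-zone"):
            print(path, "=>", imds_get(path, token))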
Jan 29 10:56:01.679562 polkitd[2005]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 10:56:01.727630 systemd-resolved[1837]: System hostname changed to 'ip-172-31-16-43'. Jan 29 10:56:01.727872 systemd-hostnamed[1944]: Hostname set to (transient) Jan 29 10:56:01.736430 coreos-metadata[2000]: Jan 29 10:56:01.734 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 29 10:56:01.737989 coreos-metadata[2000]: Jan 29 10:56:01.737 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 29 10:56:01.740876 coreos-metadata[2000]: Jan 29 10:56:01.740 INFO Fetch successful Jan 29 10:56:01.741290 coreos-metadata[2000]: Jan 29 10:56:01.741 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 29 10:56:01.742102 coreos-metadata[2000]: Jan 29 10:56:01.742 INFO Fetch successful Jan 29 10:56:01.748537 unknown[2000]: wrote ssh authorized keys file for user: core Jan 29 10:56:01.757062 containerd[1934]: time="2025-01-29T10:56:01.756938914Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 10:56:01.802285 update-ssh-keys[2052]: Updated "/home/core/.ssh/authorized_keys" Jan 29 10:56:01.803417 systemd-networkd[1835]: eth0: Gained IPv6LL Jan 29 10:56:01.807258 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 10:56:01.819848 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 10:56:01.823322 systemd[1]: Finished sshkeys.service. Jan 29 10:56:01.827112 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 10:56:01.843679 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 29 10:56:01.849980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:56:01.857873 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 10:56:01.914611 containerd[1934]: time="2025-01-29T10:56:01.914546087Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:01.919492 containerd[1934]: time="2025-01-29T10:56:01.919427003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:56:01.922611 containerd[1934]: time="2025-01-29T10:56:01.920791763Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 10:56:01.922611 containerd[1934]: time="2025-01-29T10:56:01.920849147Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 10:56:01.922611 containerd[1934]: time="2025-01-29T10:56:01.921144791Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 10:56:01.922611 containerd[1934]: time="2025-01-29T10:56:01.921240611Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:01.922611 containerd[1934]: time="2025-01-29T10:56:01.921375971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:56:01.922611 containerd[1934]: time="2025-01-29T10:56:01.921414119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:01.922611 containerd[1934]: time="2025-01-29T10:56:01.921728387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:56:01.922611 containerd[1934]: time="2025-01-29T10:56:01.921762719Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:01.922611 containerd[1934]: time="2025-01-29T10:56:01.921795359Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:56:01.922611 containerd[1934]: time="2025-01-29T10:56:01.921819515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:01.922611 containerd[1934]: time="2025-01-29T10:56:01.922001963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:01.937269 containerd[1934]: time="2025-01-29T10:56:01.935575343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 10:56:01.937269 containerd[1934]: time="2025-01-29T10:56:01.935836307Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 10:56:01.937269 containerd[1934]: time="2025-01-29T10:56:01.935869343Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 10:56:01.937269 containerd[1934]: time="2025-01-29T10:56:01.936051035Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 10:56:01.937269 containerd[1934]: time="2025-01-29T10:56:01.936144971Z" level=info msg="metadata content store policy set" policy=shared Jan 29 10:56:01.948212 containerd[1934]: time="2025-01-29T10:56:01.947306879Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 10:56:01.948212 containerd[1934]: time="2025-01-29T10:56:01.947421239Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 10:56:01.948212 containerd[1934]: time="2025-01-29T10:56:01.947457275Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 10:56:01.948212 containerd[1934]: time="2025-01-29T10:56:01.947492507Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 10:56:01.948212 containerd[1934]: time="2025-01-29T10:56:01.947527151Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 10:56:01.948212 containerd[1934]: time="2025-01-29T10:56:01.947783795Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 29 10:56:01.953871 containerd[1934]: time="2025-01-29T10:56:01.950185511Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 10:56:01.953871 containerd[1934]: time="2025-01-29T10:56:01.950468891Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 10:56:01.953871 containerd[1934]: time="2025-01-29T10:56:01.951677039Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 10:56:01.954205 containerd[1934]: time="2025-01-29T10:56:01.954123323Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 10:56:01.954345 containerd[1934]: time="2025-01-29T10:56:01.954316127Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 10:56:01.954518 containerd[1934]: time="2025-01-29T10:56:01.954489827Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 10:56:01.955129 containerd[1934]: time="2025-01-29T10:56:01.954638363Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 10:56:01.955129 containerd[1934]: time="2025-01-29T10:56:01.954702587Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 10:56:01.955129 containerd[1934]: time="2025-01-29T10:56:01.954737927Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 10:56:01.955129 containerd[1934]: time="2025-01-29T10:56:01.954796559Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 10:56:01.955129 containerd[1934]: time="2025-01-29T10:56:01.954828503Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 10:56:01.955129 containerd[1934]: time="2025-01-29T10:56:01.954878975Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 10:56:01.955129 containerd[1934]: time="2025-01-29T10:56:01.954925463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.955129 containerd[1934]: time="2025-01-29T10:56:01.954985607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.955129 containerd[1934]: time="2025-01-29T10:56:01.955017419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.955129 containerd[1934]: time="2025-01-29T10:56:01.955075079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.955755 containerd[1934]: time="2025-01-29T10:56:01.955104371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.955755 containerd[1934]: time="2025-01-29T10:56:01.955628927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.955755 containerd[1934]: time="2025-01-29T10:56:01.955687499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 29 10:56:01.955755 containerd[1934]: time="2025-01-29T10:56:01.955722227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.955993 containerd[1934]: time="2025-01-29T10:56:01.955965383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.960522 containerd[1934]: time="2025-01-29T10:56:01.957253703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.960522 containerd[1934]: time="2025-01-29T10:56:01.957335627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.960522 containerd[1934]: time="2025-01-29T10:56:01.957392267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.960522 containerd[1934]: time="2025-01-29T10:56:01.957436079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.960522 containerd[1934]: time="2025-01-29T10:56:01.959217971Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 10:56:01.960522 containerd[1934]: time="2025-01-29T10:56:01.959303483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.960522 containerd[1934]: time="2025-01-29T10:56:01.959362967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.960522 containerd[1934]: time="2025-01-29T10:56:01.959393771Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 10:56:01.962909 containerd[1934]: time="2025-01-29T10:56:01.962492615Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 10:56:01.962909 containerd[1934]: time="2025-01-29T10:56:01.962578847Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 10:56:01.962909 containerd[1934]: time="2025-01-29T10:56:01.962611055Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 10:56:01.962909 containerd[1934]: time="2025-01-29T10:56:01.962666351Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 10:56:01.962909 containerd[1934]: time="2025-01-29T10:56:01.962695439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 10:56:01.962909 containerd[1934]: time="2025-01-29T10:56:01.962751971Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 10:56:01.962909 containerd[1934]: time="2025-01-29T10:56:01.962778371Z" level=info msg="NRI interface is disabled by configuration." Jan 29 10:56:01.962909 containerd[1934]: time="2025-01-29T10:56:01.962827871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 10:56:01.985227 containerd[1934]: time="2025-01-29T10:56:01.980276675Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 10:56:01.985227 containerd[1934]: time="2025-01-29T10:56:01.982426895Z" level=info msg="Connect containerd service" Jan 29 10:56:01.988249 containerd[1934]: time="2025-01-29T10:56:01.986222795Z" level=info msg="using legacy CRI server" Jan 29 10:56:01.988249 containerd[1934]: time="2025-01-29T10:56:01.986395187Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 10:56:01.988249 containerd[1934]: time="2025-01-29T10:56:01.986815835Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 10:56:02.002447 containerd[1934]: time="2025-01-29T10:56:01.999980616Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 10:56:02.002447 
containerd[1934]: time="2025-01-29T10:56:02.000702872Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 10:56:02.002447 containerd[1934]: time="2025-01-29T10:56:02.000829580Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 10:56:02.009182 containerd[1934]: time="2025-01-29T10:56:02.005436620Z" level=info msg="Start subscribing containerd event" Jan 29 10:56:02.009182 containerd[1934]: time="2025-01-29T10:56:02.008220272Z" level=info msg="Start recovering state" Jan 29 10:56:02.009182 containerd[1934]: time="2025-01-29T10:56:02.008355260Z" level=info msg="Start event monitor" Jan 29 10:56:02.009182 containerd[1934]: time="2025-01-29T10:56:02.008378192Z" level=info msg="Start snapshots syncer" Jan 29 10:56:02.009182 containerd[1934]: time="2025-01-29T10:56:02.008399912Z" level=info msg="Start cni network conf syncer for default" Jan 29 10:56:02.009182 containerd[1934]: time="2025-01-29T10:56:02.008418020Z" level=info msg="Start streaming server" Jan 29 10:56:02.020400 containerd[1934]: time="2025-01-29T10:56:02.018579464Z" level=info msg="containerd successfully booted in 0.263982s" Jan 29 10:56:02.051996 amazon-ssm-agent[2076]: Initializing new seelog logger Jan 29 10:56:02.053528 amazon-ssm-agent[2076]: New Seelog Logger Creation Complete Jan 29 10:56:02.055597 amazon-ssm-agent[2076]: 2025/01/29 10:56:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:56:02.055597 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:56:02.056435 amazon-ssm-agent[2076]: 2025/01/29 10:56:02 processing appconfig overrides Jan 29 10:56:02.059178 amazon-ssm-agent[2076]: 2025/01/29 10:56:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:56:02.059178 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:56:02.059178 amazon-ssm-agent[2076]: 2025/01/29 10:56:02 processing appconfig overrides Jan 29 10:56:02.060340 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO Proxy environment variables: Jan 29 10:56:02.060573 amazon-ssm-agent[2076]: 2025/01/29 10:56:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:56:02.063183 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:56:02.063183 amazon-ssm-agent[2076]: 2025/01/29 10:56:02 processing appconfig overrides Jan 29 10:56:02.063185 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 10:56:02.067013 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 10:56:02.073186 amazon-ssm-agent[2076]: 2025/01/29 10:56:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 29 10:56:02.073321 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 29 10:56:02.073587 amazon-ssm-agent[2076]: 2025/01/29 10:56:02 processing appconfig overrides Jan 29 10:56:02.161773 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO https_proxy: Jan 29 10:56:02.262206 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO http_proxy: Jan 29 10:56:02.360630 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO no_proxy: Jan 29 10:56:02.460185 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO Checking if agent identity type OnPrem can be assumed Jan 29 10:56:02.559110 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO Checking if agent identity type EC2 can be assumed Jan 29 10:56:02.658565 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO Agent will take identity from EC2 Jan 29 10:56:02.759547 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 10:56:02.858901 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 10:56:02.960238 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 29 10:56:02.963192 sshd_keygen[1945]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 10:56:03.007287 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 10:56:03.021573 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 10:56:03.033687 systemd[1]: Started sshd@0-172.31.16.43:22-139.178.89.65:37766.service - OpenSSH per-connection server daemon (139.178.89.65:37766). Jan 29 10:56:03.061298 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 10:56:03.061719 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 10:56:03.063577 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 29 10:56:03.078642 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 10:56:03.110386 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 10:56:03.125769 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 10:56:03.140699 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 10:56:03.143119 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 10:56:03.163839 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 29 10:56:03.264375 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO [amazon-ssm-agent] Starting Core Agent Jan 29 10:56:03.333242 sshd[2137]: Accepted publickey for core from 139.178.89.65 port 37766 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:56:03.337452 sshd-session[2137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:56:03.361403 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 10:56:03.367130 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 29 10:56:03.375634 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 10:56:03.389698 systemd-logind[1918]: New session 1 of user core. Jan 29 10:56:03.424590 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 10:56:03.440720 systemd[1]: Starting user@500.service - User Manager for UID 500... 
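The "Accepted publickey ... SHA256:cIZr/..." entries above record the OpenSSH-style fingerprint of the client key: unpadded base64 of the SHA-256 digest of the raw public-key blob. A small sketch of that computation, assuming a line in authorized_keys format; the key material below is a placeholder, not the key from this log.

    import base64, hashlib

    def openssh_fingerprint(authorized_keys_line: str) -> str:
        """SHA256 fingerprint in the form sshd logs, e.g. 'SHA256:...'."""
        # authorized_keys format: "<key-type> <base64-blob> [comment]"
        blob_b64 = authorized_keys_line.split()[1]
        blob = base64.b64decode(blob_b64 + "=" * (-len(blob_b64) % 4))  # normalize padding
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")  # unpadded, like OpenSSH

    # Placeholder key material, not the key from /home/core/.ssh/authorized_keys above.
    example = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPlaceholderPlaceholderPlaceholderPlaceholde core@example"
    print(openssh_fingerprint(example))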
Jan 29 10:56:03.455632 (systemd)[2149]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 10:56:03.464849 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO [Registrar] Starting registrar module Jan 29 10:56:03.484942 amazon-ssm-agent[2076]: 2025-01-29 10:56:02 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 29 10:56:03.484942 amazon-ssm-agent[2076]: 2025-01-29 10:56:03 INFO [EC2Identity] EC2 registration was successful. Jan 29 10:56:03.485219 amazon-ssm-agent[2076]: 2025-01-29 10:56:03 INFO [CredentialRefresher] credentialRefresher has started Jan 29 10:56:03.485219 amazon-ssm-agent[2076]: 2025-01-29 10:56:03 INFO [CredentialRefresher] Starting credentials refresher loop Jan 29 10:56:03.485219 amazon-ssm-agent[2076]: 2025-01-29 10:56:03 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 29 10:56:03.565379 amazon-ssm-agent[2076]: 2025-01-29 10:56:03 INFO [CredentialRefresher] Next credential rotation will be in 31.666657928966668 minutes Jan 29 10:56:03.660601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:56:03.666304 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 10:56:03.668346 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:56:03.695406 systemd[2149]: Queued start job for default target default.target. Jan 29 10:56:03.704513 systemd[2149]: Created slice app.slice - User Application Slice. Jan 29 10:56:03.704769 systemd[2149]: Reached target paths.target - Paths. Jan 29 10:56:03.704806 systemd[2149]: Reached target timers.target - Timers. Jan 29 10:56:03.707598 systemd[2149]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 10:56:03.745837 systemd[2149]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 10:56:03.747283 systemd[2149]: Reached target sockets.target - Sockets. Jan 29 10:56:03.747336 systemd[2149]: Reached target basic.target - Basic System. Jan 29 10:56:03.747425 systemd[2149]: Reached target default.target - Main User Target. Jan 29 10:56:03.747486 systemd[2149]: Startup finished in 273ms. Jan 29 10:56:03.747662 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 10:56:03.763465 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 10:56:03.767296 systemd[1]: Startup finished in 1.080s (kernel) + 7.980s (initrd) + 8.038s (userspace) = 17.099s. Jan 29 10:56:03.934882 systemd[1]: Started sshd@1-172.31.16.43:22-139.178.89.65:37808.service - OpenSSH per-connection server daemon (139.178.89.65:37808). Jan 29 10:56:04.059506 ntpd[1914]: Listen normally on 6 eth0 [fe80::48c:bfff:fee5:3615%2]:123 Jan 29 10:56:04.059958 ntpd[1914]: 29 Jan 10:56:04 ntpd[1914]: Listen normally on 6 eth0 [fe80::48c:bfff:fee5:3615%2]:123 Jan 29 10:56:04.126272 sshd[2174]: Accepted publickey for core from 139.178.89.65 port 37808 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:56:04.129398 sshd-session[2174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:56:04.136841 systemd-logind[1918]: New session 2 of user core. Jan 29 10:56:04.144913 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 29 10:56:04.277378 sshd[2176]: Connection closed by 139.178.89.65 port 37808 Jan 29 10:56:04.278455 sshd-session[2174]: pam_unix(sshd:session): session closed for user core Jan 29 10:56:04.285787 systemd[1]: sshd@1-172.31.16.43:22-139.178.89.65:37808.service: Deactivated successfully. Jan 29 10:56:04.291327 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 10:56:04.292960 systemd-logind[1918]: Session 2 logged out. Waiting for processes to exit. Jan 29 10:56:04.295861 systemd-logind[1918]: Removed session 2. Jan 29 10:56:04.324804 systemd[1]: Started sshd@2-172.31.16.43:22-139.178.89.65:37816.service - OpenSSH per-connection server daemon (139.178.89.65:37816). Jan 29 10:56:04.509383 sshd[2181]: Accepted publickey for core from 139.178.89.65 port 37816 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:56:04.513798 sshd-session[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:56:04.528267 systemd-logind[1918]: New session 3 of user core. Jan 29 10:56:04.534457 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 10:56:04.535106 amazon-ssm-agent[2076]: 2025-01-29 10:56:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 29 10:56:04.625623 kubelet[2160]: E0129 10:56:04.625494 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:56:04.630744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:56:04.631773 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 10:56:04.632637 systemd[1]: kubelet.service: Consumed 1.237s CPU time. Jan 29 10:56:04.634660 amazon-ssm-agent[2076]: 2025-01-29 10:56:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2186) started Jan 29 10:56:04.664202 sshd[2187]: Connection closed by 139.178.89.65 port 37816 Jan 29 10:56:04.666425 sshd-session[2181]: pam_unix(sshd:session): session closed for user core Jan 29 10:56:04.673467 systemd[1]: sshd@2-172.31.16.43:22-139.178.89.65:37816.service: Deactivated successfully. Jan 29 10:56:04.678929 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 10:56:04.680859 systemd-logind[1918]: Session 3 logged out. Waiting for processes to exit. Jan 29 10:56:04.701993 systemd[1]: Started sshd@3-172.31.16.43:22-139.178.89.65:37824.service - OpenSSH per-connection server daemon (139.178.89.65:37824). Jan 29 10:56:04.705103 systemd-logind[1918]: Removed session 3. Jan 29 10:56:04.735423 amazon-ssm-agent[2076]: 2025-01-29 10:56:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 29 10:56:04.896392 sshd[2198]: Accepted publickey for core from 139.178.89.65 port 37824 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:56:04.898865 sshd-session[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:56:04.907243 systemd-logind[1918]: New session 4 of user core. Jan 29 10:56:04.915458 systemd[1]: Started session-4.scope - Session 4 of User core. 
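The kubelet failure above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml") is the expected first-boot state: the file does not exist until the node is bootstrapped, and it is normally generated by the provisioning tooling rather than written by hand. Purely to illustrate what that file looks like, a sketch that writes a minimal KubeletConfiguration; every field value here is an assumption, not what this node ends up with.

    import pathlib

    # Hypothetical minimal KubeletConfiguration; real clusters generate this file
    # (e.g. via kubeadm), so treat all values as placeholders.
    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10
    """

    def write_kubelet_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
        p = pathlib.Path(path)
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_text(KUBELET_CONFIG)

    if __name__ == "__main__":
        write_kubelet_config("./config.yaml")  # write locally for inspection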
Jan 29 10:56:05.037430 sshd[2203]: Connection closed by 139.178.89.65 port 37824 Jan 29 10:56:05.038244 sshd-session[2198]: pam_unix(sshd:session): session closed for user core Jan 29 10:56:05.042855 systemd-logind[1918]: Session 4 logged out. Waiting for processes to exit. Jan 29 10:56:05.043767 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 10:56:05.046053 systemd[1]: sshd@3-172.31.16.43:22-139.178.89.65:37824.service: Deactivated successfully. Jan 29 10:56:05.078675 systemd[1]: Started sshd@4-172.31.16.43:22-139.178.89.65:37840.service - OpenSSH per-connection server daemon (139.178.89.65:37840). Jan 29 10:56:05.259303 sshd[2208]: Accepted publickey for core from 139.178.89.65 port 37840 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:56:05.261457 sshd-session[2208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:56:05.269491 systemd-logind[1918]: New session 5 of user core. Jan 29 10:56:05.275407 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 10:56:05.422585 sudo[2211]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 10:56:05.423802 sudo[2211]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:56:05.439717 sudo[2211]: pam_unix(sudo:session): session closed for user root Jan 29 10:56:05.462712 sshd[2210]: Connection closed by 139.178.89.65 port 37840 Jan 29 10:56:05.463769 sshd-session[2208]: pam_unix(sshd:session): session closed for user core Jan 29 10:56:05.470365 systemd[1]: sshd@4-172.31.16.43:22-139.178.89.65:37840.service: Deactivated successfully. Jan 29 10:56:05.473955 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 10:56:05.475575 systemd-logind[1918]: Session 5 logged out. Waiting for processes to exit. Jan 29 10:56:05.477667 systemd-logind[1918]: Removed session 5. Jan 29 10:56:05.501687 systemd[1]: Started sshd@5-172.31.16.43:22-139.178.89.65:37842.service - OpenSSH per-connection server daemon (139.178.89.65:37842). Jan 29 10:56:05.689342 sshd[2217]: Accepted publickey for core from 139.178.89.65 port 37842 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:56:05.691862 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:56:05.700508 systemd-logind[1918]: New session 6 of user core. Jan 29 10:56:05.709415 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 10:56:05.821994 sudo[2221]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 10:56:05.823212 sudo[2221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:56:05.829555 sudo[2221]: pam_unix(sudo:session): session closed for user root Jan 29 10:56:05.839482 sudo[2220]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 10:56:05.840108 sudo[2220]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:56:05.861792 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 10:56:05.921075 augenrules[2243]: No rules Jan 29 10:56:05.923467 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 10:56:05.925249 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 29 10:56:05.927444 sudo[2220]: pam_unix(sudo:session): session closed for user root Jan 29 10:56:05.950668 sshd[2219]: Connection closed by 139.178.89.65 port 37842 Jan 29 10:56:05.952231 sshd-session[2217]: pam_unix(sshd:session): session closed for user core Jan 29 10:56:05.957531 systemd[1]: sshd@5-172.31.16.43:22-139.178.89.65:37842.service: Deactivated successfully. Jan 29 10:56:05.960828 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 10:56:05.964341 systemd-logind[1918]: Session 6 logged out. Waiting for processes to exit. Jan 29 10:56:05.966117 systemd-logind[1918]: Removed session 6. Jan 29 10:56:05.987692 systemd[1]: Started sshd@6-172.31.16.43:22-139.178.89.65:37856.service - OpenSSH per-connection server daemon (139.178.89.65:37856). Jan 29 10:56:06.174430 sshd[2251]: Accepted publickey for core from 139.178.89.65 port 37856 ssh2: RSA SHA256:cIZr/MEwQ13qQ/md8fQDjCFsLmoY1mjzTaFel2uuBoU Jan 29 10:56:06.176844 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:56:06.184929 systemd-logind[1918]: New session 7 of user core. Jan 29 10:56:06.191424 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 10:56:06.293558 sudo[2254]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 10:56:06.294956 sudo[2254]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 10:56:07.272055 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:56:07.272410 systemd[1]: kubelet.service: Consumed 1.237s CPU time. Jan 29 10:56:07.284632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:56:07.342589 systemd[1]: Reloading requested from client PID 2287 ('systemctl') (unit session-7.scope)... Jan 29 10:56:07.342767 systemd[1]: Reloading... Jan 29 10:56:07.592209 zram_generator::config[2330]: No configuration found. Jan 29 10:56:07.808776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:56:07.971768 systemd[1]: Reloading finished in 628 ms. Jan 29 10:56:08.055473 systemd-logind[1918]: Removed session 4. Jan 29 10:56:08.060421 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 10:56:08.060785 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 10:56:08.061367 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:56:08.071655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:56:08.444419 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:56:08.456705 (kubelet)[2390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 10:56:08.532181 kubelet[2390]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:56:08.532181 kubelet[2390]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 29 10:56:08.532181 kubelet[2390]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:56:08.532729 kubelet[2390]: I0129 10:56:08.532318 2390 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 10:56:09.428229 kubelet[2390]: I0129 10:56:09.427444 2390 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 10:56:09.428229 kubelet[2390]: I0129 10:56:09.427490 2390 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 10:56:09.428229 kubelet[2390]: I0129 10:56:09.427895 2390 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 10:56:09.468071 kubelet[2390]: I0129 10:56:09.467758 2390 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 10:56:09.484701 kubelet[2390]: E0129 10:56:09.484651 2390 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 10:56:09.485004 kubelet[2390]: I0129 10:56:09.484980 2390 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 10:56:09.491433 kubelet[2390]: I0129 10:56:09.491397 2390 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 10:56:09.492185 kubelet[2390]: I0129 10:56:09.491859 2390 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 10:56:09.492280 kubelet[2390]: I0129 10:56:09.492149 2390 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 10:56:09.492488 kubelet[2390]: I0129 10:56:09.492214 2390 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"172.31.16.43","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 10:56:09.492665 kubelet[2390]: I0129 10:56:09.492506 2390 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 10:56:09.492665 kubelet[2390]: I0129 10:56:09.492528 2390 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 10:56:09.492778 kubelet[2390]: I0129 10:56:09.492703 2390 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:56:09.497381 kubelet[2390]: I0129 10:56:09.496705 2390 kubelet.go:408] "Attempting to sync node with API server" Jan 29 10:56:09.497381 kubelet[2390]: I0129 10:56:09.496755 2390 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 10:56:09.497381 kubelet[2390]: I0129 10:56:09.496820 2390 kubelet.go:314] "Adding apiserver pod source" Jan 29 10:56:09.497381 kubelet[2390]: I0129 10:56:09.496844 2390 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 10:56:09.499970 kubelet[2390]: E0129 10:56:09.499217 2390 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:09.499970 kubelet[2390]: E0129 10:56:09.499296 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:09.500642 kubelet[2390]: I0129 10:56:09.500602 2390 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 10:56:09.503830 kubelet[2390]: I0129 10:56:09.503793 2390 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 10:56:09.505304 kubelet[2390]: W0129 10:56:09.505266 2390 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
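The HardEvictionThresholds in the nodeConfig dump above are the kubelet defaults: memory.available<100Mi, nodefs.available<10%, nodefs.inodesFree<5%, imagefs.available<15%, imagefs.inodesFree<5%. A small sketch of how such thresholds are checked against observed signals; the signal names and values come from the log line, the evaluation itself is a simplification of what the eviction manager does.

    # Hard eviction thresholds as dumped in the nodeConfig above:
    # percentages apply to capacity, the quantity applies to memory.available.
    THRESHOLDS = {
        "memory.available":   ("quantity", 100 * 1024 * 1024),  # 100Mi in bytes
        "nodefs.available":   ("percentage", 0.10),
        "nodefs.inodesFree":  ("percentage", 0.05),
        "imagefs.available":  ("percentage", 0.15),
        "imagefs.inodesFree": ("percentage", 0.05),
    }

    def breached(signal: str, available: float, capacity: float) -> bool:
        """True if the observed signal has fallen below its hard eviction threshold."""
        kind, value = THRESHOLDS[signal]
        limit = value if kind == "quantity" else capacity * value
        return available < limit

    # 80Mi of free memory on a 4Gi node breaches memory.available<100Mi.
    print(breached("memory.available", 80 * 1024**2, 4 * 1024**3))    # True
    # 30% free nodefs does not breach nodefs.available<10%.
    print(breached("nodefs.available", 30 * 1024**3, 100 * 1024**3))  # False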
Jan 29 10:56:09.506539 kubelet[2390]: I0129 10:56:09.506489 2390 server.go:1269] "Started kubelet" Jan 29 10:56:09.509469 kubelet[2390]: I0129 10:56:09.509216 2390 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 10:56:09.521412 kubelet[2390]: I0129 10:56:09.521325 2390 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 10:56:09.523613 kubelet[2390]: I0129 10:56:09.523003 2390 server.go:460] "Adding debug handlers to kubelet server" Jan 29 10:56:09.523979 kubelet[2390]: I0129 10:56:09.523934 2390 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 10:56:09.524433 kubelet[2390]: E0129 10:56:09.524362 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:09.525012 kubelet[2390]: I0129 10:56:09.524756 2390 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 10:56:09.525012 kubelet[2390]: I0129 10:56:09.524866 2390 reconciler.go:26] "Reconciler: start to sync state" Jan 29 10:56:09.527519 kubelet[2390]: I0129 10:56:09.526640 2390 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 10:56:09.527519 kubelet[2390]: I0129 10:56:09.527008 2390 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 10:56:09.527519 kubelet[2390]: I0129 10:56:09.527509 2390 factory.go:221] Registration of the systemd container factory successfully Jan 29 10:56:09.527768 kubelet[2390]: I0129 10:56:09.527659 2390 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 10:56:09.528590 kubelet[2390]: I0129 10:56:09.528273 2390 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 10:56:09.533210 kubelet[2390]: E0129 10:56:09.531103 2390 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 10:56:09.533210 kubelet[2390]: I0129 10:56:09.531389 2390 factory.go:221] Registration of the containerd container factory successfully Jan 29 10:56:09.562015 kubelet[2390]: I0129 10:56:09.561969 2390 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 10:56:09.562015 kubelet[2390]: I0129 10:56:09.562001 2390 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 10:56:09.562264 kubelet[2390]: I0129 10:56:09.562030 2390 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:56:09.567467 kubelet[2390]: I0129 10:56:09.567427 2390 policy_none.go:49] "None policy: Start" Jan 29 10:56:09.573249 kubelet[2390]: I0129 10:56:09.573211 2390 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 10:56:09.573249 kubelet[2390]: I0129 10:56:09.573276 2390 state_mem.go:35] "Initializing new in-memory state store" Jan 29 10:56:09.583488 kubelet[2390]: W0129 10:56:09.583287 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 29 10:56:09.584808 kubelet[2390]: W0129 10:56:09.584274 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.16.43" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 10:56:09.586466 kubelet[2390]: E0129 10:56:09.585978 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.16.43\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 10:56:09.588776 kubelet[2390]: E0129 10:56:09.588099 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 29 10:56:09.588776 kubelet[2390]: W0129 10:56:09.588262 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 10:56:09.588776 kubelet[2390]: E0129 10:56:09.588299 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 10:56:09.588776 kubelet[2390]: E0129 10:56:09.588622 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.16.43\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 29 10:56:09.590483 kubelet[2390]: E0129 10:56:09.586304 2390 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{172.31.16.43.181f248d126d26d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.16.43,UID:172.31.16.43,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.16.43,},FirstTimestamp:2025-01-29 10:56:09.506432721 +0000 UTC m=+1.041814546,LastTimestamp:2025-01-29 10:56:09.506432721 +0000 UTC m=+1.041814546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.16.43,}" Jan 29 10:56:09.593028 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 10:56:09.602052 kubelet[2390]: E0129 10:56:09.598909 2390 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.16.43.181f248d13e545da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.16.43,UID:172.31.16.43,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.16.43,},FirstTimestamp:2025-01-29 10:56:09.531082202 +0000 UTC m=+1.066464039,LastTimestamp:2025-01-29 10:56:09.531082202 +0000 UTC m=+1.066464039,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.16.43,}" Jan 29 10:56:09.618049 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 10:56:09.625457 kubelet[2390]: E0129 10:56:09.625281 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:09.627846 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 10:56:09.636260 kubelet[2390]: I0129 10:56:09.636187 2390 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 10:56:09.637079 kubelet[2390]: I0129 10:56:09.636497 2390 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 10:56:09.637079 kubelet[2390]: I0129 10:56:09.636531 2390 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 10:56:09.637079 kubelet[2390]: I0129 10:56:09.636889 2390 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 10:56:09.641505 kubelet[2390]: E0129 10:56:09.641455 2390 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.16.43\" not found" Jan 29 10:56:09.648310 kubelet[2390]: E0129 10:56:09.647823 2390 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.16.43.181f248d1595a397 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.16.43,UID:172.31.16.43,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.16.43 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.16.43,},FirstTimestamp:2025-01-29 10:56:09.559417751 +0000 UTC m=+1.094799564,LastTimestamp:2025-01-29 10:56:09.559417751 +0000 UTC m=+1.094799564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.16.43,}" Jan 29 10:56:09.678605 kubelet[2390]: E0129 10:56:09.678008 2390 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.16.43.181f248d1595c0df default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.16.43,UID:172.31.16.43,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.16.43 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.16.43,},FirstTimestamp:2025-01-29 10:56:09.559425247 +0000 UTC m=+1.094807072,LastTimestamp:2025-01-29 10:56:09.559425247 +0000 UTC m=+1.094807072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.16.43,}" Jan 29 10:56:09.697726 kubelet[2390]: I0129 10:56:09.697663 2390 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 10:56:09.700998 kubelet[2390]: I0129 10:56:09.700408 2390 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 10:56:09.700998 kubelet[2390]: I0129 10:56:09.700466 2390 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 10:56:09.700998 kubelet[2390]: I0129 10:56:09.700498 2390 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 10:56:09.700998 kubelet[2390]: E0129 10:56:09.700571 2390 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 29 10:56:09.713478 kubelet[2390]: E0129 10:56:09.713300 2390 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.16.43.181f248d1595d21d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.16.43,UID:172.31.16.43,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 172.31.16.43 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:172.31.16.43,},FirstTimestamp:2025-01-29 10:56:09.559429661 +0000 UTC m=+1.094811474,LastTimestamp:2025-01-29 10:56:09.559429661 +0000 UTC m=+1.094811474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.16.43,}" Jan 29 10:56:09.714783 kubelet[2390]: W0129 10:56:09.714717 2390 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jan 29 10:56:09.715090 kubelet[2390]: E0129 10:56:09.714859 2390 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 29 10:56:09.737619 kubelet[2390]: I0129 10:56:09.737520 2390 kubelet_node_status.go:72] "Attempting to register node" node="172.31.16.43" Jan 29 10:56:09.771213 kubelet[2390]: I0129 10:56:09.771132 2390 kubelet_node_status.go:75] "Successfully registered node" node="172.31.16.43" Jan 29 10:56:09.771213 kubelet[2390]: E0129 10:56:09.771212 2390 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.16.43\": node \"172.31.16.43\" not found" Jan 29 10:56:09.874345 kubelet[2390]: E0129 10:56:09.874301 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:09.974801 kubelet[2390]: E0129 10:56:09.974648 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:10.029408 sudo[2254]: pam_unix(sudo:session): session closed for user root Jan 29 10:56:10.052457 sshd[2253]: Connection closed by 139.178.89.65 port 37856 Jan 29 10:56:10.052277 sshd-session[2251]: pam_unix(sshd:session): session closed for user core Jan 29 10:56:10.058607 systemd[1]: sshd@6-172.31.16.43:22-139.178.89.65:37856.service: Deactivated successfully. Jan 29 10:56:10.062328 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 10:56:10.066636 systemd-logind[1918]: Session 7 logged out. Waiting for processes to exit. Jan 29 10:56:10.068443 systemd-logind[1918]: Removed session 7. 
Jan 29 10:56:10.075694 kubelet[2390]: E0129 10:56:10.075626 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:10.176287 kubelet[2390]: E0129 10:56:10.176221 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:10.277379 kubelet[2390]: E0129 10:56:10.276747 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:10.377418 kubelet[2390]: E0129 10:56:10.377376 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:10.430992 kubelet[2390]: I0129 10:56:10.430937 2390 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 10:56:10.478338 kubelet[2390]: E0129 10:56:10.478292 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:10.499570 kubelet[2390]: E0129 10:56:10.499526 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:10.579434 kubelet[2390]: E0129 10:56:10.579389 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:10.679918 kubelet[2390]: E0129 10:56:10.679867 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:10.780268 kubelet[2390]: E0129 10:56:10.780208 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:10.881079 kubelet[2390]: E0129 10:56:10.880703 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:10.981463 kubelet[2390]: E0129 10:56:10.981424 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:11.082286 kubelet[2390]: E0129 10:56:11.082241 2390 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.16.43\" not found" Jan 29 10:56:11.184003 kubelet[2390]: I0129 10:56:11.183865 2390 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 10:56:11.185046 containerd[1934]: time="2025-01-29T10:56:11.184978981Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
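The "wait for other system components to drop the config" message resolves itself once the calico-node pod created below writes a conflist into /etc/cni/net.d. For orientation only, a sketch of the generic CNI conflist shape the CRI plugin is waiting for; the plugin entries are placeholders, not the file Calico actually installs, and only the 192.168.1.0/24 pod CIDR is taken from the log.

    import json

    # Hypothetical conflist, shown only to illustrate the file format the CRI
    # plugin expects in /etc/cni/net.d; Calico writes its own version on this node.
    conflist = {
        "cniVersion": "0.3.1",
        "name": "example-net",
        "plugins": [
            {"type": "example-cni-plugin",
             "ipam": {"type": "host-local", "subnet": "192.168.1.0/24"}},
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    print(json.dumps(conflist, indent=2))
    # On a real node this would be written to /etc/cni/net.d/10-<name>.conflist.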
Jan 29 10:56:11.185861 kubelet[2390]: I0129 10:56:11.185826 2390 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 10:56:11.500181 kubelet[2390]: I0129 10:56:11.500008 2390 apiserver.go:52] "Watching apiserver" Jan 29 10:56:11.500181 kubelet[2390]: E0129 10:56:11.500011 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:11.511993 kubelet[2390]: E0129 10:56:11.511586 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2ckh" podUID="17870272-1d43-424d-9b22-5db660d97e38" Jan 29 10:56:11.521798 systemd[1]: Created slice kubepods-besteffort-podf81dd188_31e4_4e1d_b044_3db057c3fd01.slice - libcontainer container kubepods-besteffort-podf81dd188_31e4_4e1d_b044_3db057c3fd01.slice. Jan 29 10:56:11.525588 kubelet[2390]: I0129 10:56:11.525553 2390 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 10:56:11.537557 kubelet[2390]: I0129 10:56:11.537516 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f81dd188-31e4-4e1d-b044-3db057c3fd01-lib-modules\") pod \"calico-node-k8rdl\" (UID: \"f81dd188-31e4-4e1d-b044-3db057c3fd01\") " pod="calico-system/calico-node-k8rdl" Jan 29 10:56:11.538761 systemd[1]: Created slice kubepods-besteffort-pod202491bb_08c7_46cb_a547_f06ce2684acb.slice - libcontainer container kubepods-besteffort-pod202491bb_08c7_46cb_a547_f06ce2684acb.slice. Jan 29 10:56:11.539083 kubelet[2390]: I0129 10:56:11.539042 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f81dd188-31e4-4e1d-b044-3db057c3fd01-policysync\") pod \"calico-node-k8rdl\" (UID: \"f81dd188-31e4-4e1d-b044-3db057c3fd01\") " pod="calico-system/calico-node-k8rdl" Jan 29 10:56:11.539264 kubelet[2390]: I0129 10:56:11.539237 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f81dd188-31e4-4e1d-b044-3db057c3fd01-cni-bin-dir\") pod \"calico-node-k8rdl\" (UID: \"f81dd188-31e4-4e1d-b044-3db057c3fd01\") " pod="calico-system/calico-node-k8rdl" Jan 29 10:56:11.539412 kubelet[2390]: I0129 10:56:11.539388 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f81dd188-31e4-4e1d-b044-3db057c3fd01-cni-log-dir\") pod \"calico-node-k8rdl\" (UID: \"f81dd188-31e4-4e1d-b044-3db057c3fd01\") " pod="calico-system/calico-node-k8rdl" Jan 29 10:56:11.539548 kubelet[2390]: I0129 10:56:11.539524 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/17870272-1d43-424d-9b22-5db660d97e38-socket-dir\") pod \"csi-node-driver-w2ckh\" (UID: \"17870272-1d43-424d-9b22-5db660d97e38\") " pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:11.539680 kubelet[2390]: I0129 10:56:11.539655 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/202491bb-08c7-46cb-a547-f06ce2684acb-xtables-lock\") pod \"kube-proxy-p7ph9\" (UID: \"202491bb-08c7-46cb-a547-f06ce2684acb\") " pod="kube-system/kube-proxy-p7ph9" Jan 29 10:56:11.539795 kubelet[2390]: I0129 10:56:11.539772 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/202491bb-08c7-46cb-a547-f06ce2684acb-lib-modules\") pod \"kube-proxy-p7ph9\" (UID: \"202491bb-08c7-46cb-a547-f06ce2684acb\") " pod="kube-system/kube-proxy-p7ph9" Jan 29 10:56:11.539924 kubelet[2390]: I0129 10:56:11.539900 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17870272-1d43-424d-9b22-5db660d97e38-kubelet-dir\") pod \"csi-node-driver-w2ckh\" (UID: \"17870272-1d43-424d-9b22-5db660d97e38\") " pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:11.540097 kubelet[2390]: I0129 10:56:11.540011 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnswn\" (UniqueName: \"kubernetes.io/projected/17870272-1d43-424d-9b22-5db660d97e38-kube-api-access-nnswn\") pod \"csi-node-driver-w2ckh\" (UID: \"17870272-1d43-424d-9b22-5db660d97e38\") " pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:11.540281 kubelet[2390]: I0129 10:56:11.540209 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f81dd188-31e4-4e1d-b044-3db057c3fd01-xtables-lock\") pod \"calico-node-k8rdl\" (UID: \"f81dd188-31e4-4e1d-b044-3db057c3fd01\") " pod="calico-system/calico-node-k8rdl" Jan 29 10:56:11.540565 kubelet[2390]: I0129 10:56:11.540399 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f81dd188-31e4-4e1d-b044-3db057c3fd01-tigera-ca-bundle\") pod \"calico-node-k8rdl\" (UID: \"f81dd188-31e4-4e1d-b044-3db057c3fd01\") " pod="calico-system/calico-node-k8rdl" Jan 29 10:56:11.540565 kubelet[2390]: I0129 10:56:11.540449 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f81dd188-31e4-4e1d-b044-3db057c3fd01-node-certs\") pod \"calico-node-k8rdl\" (UID: \"f81dd188-31e4-4e1d-b044-3db057c3fd01\") " pod="calico-system/calico-node-k8rdl" Jan 29 10:56:11.540565 kubelet[2390]: I0129 10:56:11.540516 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f81dd188-31e4-4e1d-b044-3db057c3fd01-var-run-calico\") pod \"calico-node-k8rdl\" (UID: \"f81dd188-31e4-4e1d-b044-3db057c3fd01\") " pod="calico-system/calico-node-k8rdl" Jan 29 10:56:11.540916 kubelet[2390]: I0129 10:56:11.540756 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f81dd188-31e4-4e1d-b044-3db057c3fd01-var-lib-calico\") pod \"calico-node-k8rdl\" (UID: \"f81dd188-31e4-4e1d-b044-3db057c3fd01\") " pod="calico-system/calico-node-k8rdl" Jan 29 10:56:11.540916 kubelet[2390]: I0129 10:56:11.540802 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f81dd188-31e4-4e1d-b044-3db057c3fd01-cni-net-dir\") 
pod \"calico-node-k8rdl\" (UID: \"f81dd188-31e4-4e1d-b044-3db057c3fd01\") " pod="calico-system/calico-node-k8rdl" Jan 29 10:56:11.540916 kubelet[2390]: I0129 10:56:11.540883 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f81dd188-31e4-4e1d-b044-3db057c3fd01-flexvol-driver-host\") pod \"calico-node-k8rdl\" (UID: \"f81dd188-31e4-4e1d-b044-3db057c3fd01\") " pod="calico-system/calico-node-k8rdl" Jan 29 10:56:11.541283 kubelet[2390]: I0129 10:56:11.541116 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47v9p\" (UniqueName: \"kubernetes.io/projected/f81dd188-31e4-4e1d-b044-3db057c3fd01-kube-api-access-47v9p\") pod \"calico-node-k8rdl\" (UID: \"f81dd188-31e4-4e1d-b044-3db057c3fd01\") " pod="calico-system/calico-node-k8rdl" Jan 29 10:56:11.541283 kubelet[2390]: I0129 10:56:11.541210 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/17870272-1d43-424d-9b22-5db660d97e38-varrun\") pod \"csi-node-driver-w2ckh\" (UID: \"17870272-1d43-424d-9b22-5db660d97e38\") " pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:11.541481 kubelet[2390]: I0129 10:56:11.541247 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17870272-1d43-424d-9b22-5db660d97e38-registration-dir\") pod \"csi-node-driver-w2ckh\" (UID: \"17870272-1d43-424d-9b22-5db660d97e38\") " pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:11.541481 kubelet[2390]: I0129 10:56:11.541432 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/202491bb-08c7-46cb-a547-f06ce2684acb-kube-proxy\") pod \"kube-proxy-p7ph9\" (UID: \"202491bb-08c7-46cb-a547-f06ce2684acb\") " pod="kube-system/kube-proxy-p7ph9" Jan 29 10:56:11.541794 kubelet[2390]: I0129 10:56:11.541646 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk7tm\" (UniqueName: \"kubernetes.io/projected/202491bb-08c7-46cb-a547-f06ce2684acb-kube-api-access-jk7tm\") pod \"kube-proxy-p7ph9\" (UID: \"202491bb-08c7-46cb-a547-f06ce2684acb\") " pod="kube-system/kube-proxy-p7ph9" Jan 29 10:56:11.650549 kubelet[2390]: E0129 10:56:11.650306 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:11.650549 kubelet[2390]: W0129 10:56:11.650354 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:11.650549 kubelet[2390]: E0129 10:56:11.650389 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:56:11.651886 kubelet[2390]: E0129 10:56:11.650759 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:11.651886 kubelet[2390]: W0129 10:56:11.650776 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:11.651886 kubelet[2390]: E0129 10:56:11.650810 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:11.651886 kubelet[2390]: E0129 10:56:11.651857 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:11.651886 kubelet[2390]: W0129 10:56:11.651882 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:11.652116 kubelet[2390]: E0129 10:56:11.651912 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:11.652382 kubelet[2390]: E0129 10:56:11.652353 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:11.652570 kubelet[2390]: W0129 10:56:11.652381 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:11.652570 kubelet[2390]: E0129 10:56:11.652476 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:11.652922 kubelet[2390]: E0129 10:56:11.652882 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:11.652922 kubelet[2390]: W0129 10:56:11.652910 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:11.653040 kubelet[2390]: E0129 10:56:11.652934 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:11.658657 kubelet[2390]: E0129 10:56:11.658500 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:11.658657 kubelet[2390]: W0129 10:56:11.658533 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:11.658657 kubelet[2390]: E0129 10:56:11.658586 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:56:11.675860 kubelet[2390]: E0129 10:56:11.675635 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:11.675860 kubelet[2390]: W0129 10:56:11.675685 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:11.675860 kubelet[2390]: E0129 10:56:11.675720 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:11.682905 kubelet[2390]: E0129 10:56:11.679477 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:11.682905 kubelet[2390]: W0129 10:56:11.679514 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:11.682905 kubelet[2390]: E0129 10:56:11.679546 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:11.691974 kubelet[2390]: E0129 10:56:11.691917 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:11.691974 kubelet[2390]: W0129 10:56:11.691956 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:11.692129 kubelet[2390]: E0129 10:56:11.691989 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:56:11.833741 containerd[1934]: time="2025-01-29T10:56:11.833686215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k8rdl,Uid:f81dd188-31e4-4e1d-b044-3db057c3fd01,Namespace:calico-system,Attempt:0,}" Jan 29 10:56:11.851380 containerd[1934]: time="2025-01-29T10:56:11.851203488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p7ph9,Uid:202491bb-08c7-46cb-a547-f06ce2684acb,Namespace:kube-system,Attempt:0,}" Jan 29 10:56:12.408500 containerd[1934]: time="2025-01-29T10:56:12.408095236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:56:12.411840 containerd[1934]: time="2025-01-29T10:56:12.411771829Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:56:12.413435 containerd[1934]: time="2025-01-29T10:56:12.413388776Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 29 10:56:12.415181 containerd[1934]: time="2025-01-29T10:56:12.414616626Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:56:12.416586 containerd[1934]: time="2025-01-29T10:56:12.416547252Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 10:56:12.428442 containerd[1934]: time="2025-01-29T10:56:12.428382775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:56:12.430262 containerd[1934]: time="2025-01-29T10:56:12.430204628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.868523ms" Jan 29 10:56:12.432821 containerd[1934]: time="2025-01-29T10:56:12.432767134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 598.924685ms" Jan 29 10:56:12.500418 kubelet[2390]: E0129 10:56:12.500339 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:12.660829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831302622.mount: Deactivated successfully. Jan 29 10:56:12.690196 containerd[1934]: time="2025-01-29T10:56:12.689487071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:56:12.690196 containerd[1934]: time="2025-01-29T10:56:12.689760810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:56:12.690196 containerd[1934]: time="2025-01-29T10:56:12.689793146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:56:12.690196 containerd[1934]: time="2025-01-29T10:56:12.690107532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:56:12.702760 kubelet[2390]: E0129 10:56:12.702709 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2ckh" podUID="17870272-1d43-424d-9b22-5db660d97e38" Jan 29 10:56:12.704834 containerd[1934]: time="2025-01-29T10:56:12.704578233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:56:12.704834 containerd[1934]: time="2025-01-29T10:56:12.704697754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:56:12.704834 containerd[1934]: time="2025-01-29T10:56:12.704736458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:56:12.705273 containerd[1934]: time="2025-01-29T10:56:12.704903799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:56:12.932503 systemd[1]: Started cri-containerd-1b833f21f832b35c76fcc8caefa0a0e34c774fe9d29a775943f0a9c78d18c39d.scope - libcontainer container 1b833f21f832b35c76fcc8caefa0a0e34c774fe9d29a775943f0a9c78d18c39d. Jan 29 10:56:12.940648 systemd[1]: Started cri-containerd-84af20ef0b2e18b9404932768f8e81b4afa756cf7c2d002e2955026253b3ec8d.scope - libcontainer container 84af20ef0b2e18b9404932768f8e81b4afa756cf7c2d002e2955026253b3ec8d. Jan 29 10:56:13.003762 containerd[1934]: time="2025-01-29T10:56:13.003709014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p7ph9,Uid:202491bb-08c7-46cb-a547-f06ce2684acb,Namespace:kube-system,Attempt:0,} returns sandbox id \"84af20ef0b2e18b9404932768f8e81b4afa756cf7c2d002e2955026253b3ec8d\"" Jan 29 10:56:13.004688 containerd[1934]: time="2025-01-29T10:56:13.004215449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k8rdl,Uid:f81dd188-31e4-4e1d-b044-3db057c3fd01,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b833f21f832b35c76fcc8caefa0a0e34c774fe9d29a775943f0a9c78d18c39d\"" Jan 29 10:56:13.009949 containerd[1934]: time="2025-01-29T10:56:13.009463471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 10:56:13.501407 kubelet[2390]: E0129 10:56:13.501341 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:14.261327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197448373.mount: Deactivated successfully. 
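The "Pulled image" and "PullImage" entries above refer to the same image in three ways: by image id (the config digest), by repo tag, and by repo digest. A short Go helper, using plain string handling rather than containerd's real reference parser, shows how the tag and digest forms of a reference relate:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef separates a reference such as "registry.k8s.io/pause:3.8" or
// "registry.k8s.io/pause@sha256:9001..." into repository and tag/digest.
// A digest ("@...") takes precedence over a tag (":" after the last "/").
func splitRef(ref string) (repo, tagOrDigest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		return ref[:i], ref[i+1:]
	}
	slash := strings.LastIndex(ref, "/")
	if i := strings.LastIndex(ref, ":"); i > slash {
		return ref[:i], ref[i+1:]
	}
	return ref, "latest" // no tag or digest given
}

func main() {
	for _, ref := range []string{
		"registry.k8s.io/pause:3.8",
		"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d",
	} {
		repo, id := splitRef(ref)
		fmt.Printf("%s -> %s\n", repo, id)
	}
}
```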
Jan 29 10:56:14.502083 kubelet[2390]: E0129 10:56:14.502029 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:14.701496 kubelet[2390]: E0129 10:56:14.701405 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2ckh" podUID="17870272-1d43-424d-9b22-5db660d97e38" Jan 29 10:56:14.822212 containerd[1934]: time="2025-01-29T10:56:14.821843989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:14.830548 containerd[1934]: time="2025-01-29T10:56:14.830460203Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772117" Jan 29 10:56:14.833818 containerd[1934]: time="2025-01-29T10:56:14.833717006Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:14.842048 containerd[1934]: time="2025-01-29T10:56:14.839892656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:14.842314 containerd[1934]: time="2025-01-29T10:56:14.842263019Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.832730377s" Jan 29 10:56:14.842432 containerd[1934]: time="2025-01-29T10:56:14.842402641Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 29 10:56:14.848206 containerd[1934]: time="2025-01-29T10:56:14.847771179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 10:56:14.851350 containerd[1934]: time="2025-01-29T10:56:14.850948150Z" level=info msg="CreateContainer within sandbox \"84af20ef0b2e18b9404932768f8e81b4afa756cf7c2d002e2955026253b3ec8d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 10:56:14.910602 containerd[1934]: time="2025-01-29T10:56:14.910514348Z" level=info msg="CreateContainer within sandbox \"84af20ef0b2e18b9404932768f8e81b4afa756cf7c2d002e2955026253b3ec8d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"71c105899df5e7c3e0bd0c690daaaa8bfe36e5cfff5660c21a733b4109aec6f9\"" Jan 29 10:56:14.912216 containerd[1934]: time="2025-01-29T10:56:14.911769064Z" level=info msg="StartContainer for \"71c105899df5e7c3e0bd0c690daaaa8bfe36e5cfff5660c21a733b4109aec6f9\"" Jan 29 10:56:14.968459 systemd[1]: Started cri-containerd-71c105899df5e7c3e0bd0c690daaaa8bfe36e5cfff5660c21a733b4109aec6f9.scope - libcontainer container 71c105899df5e7c3e0bd0c690daaaa8bfe36e5cfff5660c21a733b4109aec6f9. 
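The entries around here trace the usual CRI call order for kube-proxy-p7ph9: RunPodSandbox returns a sandbox id, the kube-proxy:v1.31.5 image is pulled, CreateContainer is issued against that sandbox and returns a container id, and StartContainer runs it (systemd then tracks the matching cri-containerd scope). A compressed Go sketch of that sequence against a deliberately tiny, hypothetical runtime interface; the real CRI API is gRPC-based and much richer:

```go
package main

import "fmt"

// Runtime is a simplified stand-in for the CRI runtime service, keeping
// only the calls that appear in the log above.
type Runtime interface {
	RunPodSandbox(name, namespace string) (sandboxID string, err error)
	PullImage(ref string) (imageID string, err error)
	CreateContainer(sandboxID, name, imageID string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startPod mirrors the logged order: sandbox first, then image pull,
// then create and start of the container inside that sandbox.
func startPod(rt Runtime, name, namespace, image string) error {
	sb, err := rt.RunPodSandbox(name, namespace)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	img, err := rt.PullImage(image)
	if err != nil {
		return fmt.Errorf("PullImage: %w", err)
	}
	ctr, err := rt.CreateContainer(sb, name, img)
	if err != nil {
		return fmt.Errorf("CreateContainer: %w", err)
	}
	return rt.StartContainer(ctr)
}

// fakeRuntime is a trivial in-memory implementation used only to make the
// sketch runnable.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(name, ns string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}
func (f *fakeRuntime) PullImage(ref string) (string, error) { return "img:" + ref, nil }
func (f *fakeRuntime) CreateContainer(sb, name, img string) (string, error) {
	return "ctr-" + name, nil
}
func (f *fakeRuntime) StartContainer(id string) error { fmt.Println("started", id); return nil }

func main() {
	err := startPod(&fakeRuntime{}, "kube-proxy-p7ph9", "kube-system", "registry.k8s.io/kube-proxy:v1.31.5")
	fmt.Println("err:", err)
}
```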
Jan 29 10:56:15.026186 containerd[1934]: time="2025-01-29T10:56:15.026090068Z" level=info msg="StartContainer for \"71c105899df5e7c3e0bd0c690daaaa8bfe36e5cfff5660c21a733b4109aec6f9\" returns successfully" Jan 29 10:56:15.502649 kubelet[2390]: E0129 10:56:15.502557 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:15.758347 kubelet[2390]: E0129 10:56:15.758204 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.758347 kubelet[2390]: W0129 10:56:15.758243 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.758347 kubelet[2390]: E0129 10:56:15.758274 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.758973 kubelet[2390]: E0129 10:56:15.758668 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.758973 kubelet[2390]: W0129 10:56:15.758718 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.758973 kubelet[2390]: E0129 10:56:15.758741 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.759237 kubelet[2390]: E0129 10:56:15.759207 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.759331 kubelet[2390]: W0129 10:56:15.759236 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.759331 kubelet[2390]: E0129 10:56:15.759284 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.759724 kubelet[2390]: E0129 10:56:15.759684 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.759724 kubelet[2390]: W0129 10:56:15.759712 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.759847 kubelet[2390]: E0129 10:56:15.759734 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:56:15.760133 kubelet[2390]: E0129 10:56:15.760095 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.760224 kubelet[2390]: W0129 10:56:15.760149 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.760224 kubelet[2390]: E0129 10:56:15.760212 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.760637 kubelet[2390]: E0129 10:56:15.760592 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.760706 kubelet[2390]: W0129 10:56:15.760636 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.760706 kubelet[2390]: E0129 10:56:15.760657 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.761056 kubelet[2390]: E0129 10:56:15.761029 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.761115 kubelet[2390]: W0129 10:56:15.761068 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.761115 kubelet[2390]: E0129 10:56:15.761090 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.761492 kubelet[2390]: E0129 10:56:15.761466 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.761551 kubelet[2390]: W0129 10:56:15.761491 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.761551 kubelet[2390]: E0129 10:56:15.761511 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.761895 kubelet[2390]: E0129 10:56:15.761857 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.761958 kubelet[2390]: W0129 10:56:15.761905 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.761958 kubelet[2390]: E0129 10:56:15.761928 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:56:15.762401 kubelet[2390]: E0129 10:56:15.762361 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.762466 kubelet[2390]: W0129 10:56:15.762399 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.762466 kubelet[2390]: E0129 10:56:15.762427 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.762775 kubelet[2390]: E0129 10:56:15.762750 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.762833 kubelet[2390]: W0129 10:56:15.762773 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.762833 kubelet[2390]: E0129 10:56:15.762795 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.763143 kubelet[2390]: E0129 10:56:15.763118 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.763250 kubelet[2390]: W0129 10:56:15.763142 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.763250 kubelet[2390]: E0129 10:56:15.763225 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.763586 kubelet[2390]: E0129 10:56:15.763559 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.763645 kubelet[2390]: W0129 10:56:15.763584 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.763645 kubelet[2390]: E0129 10:56:15.763605 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.763904 kubelet[2390]: E0129 10:56:15.763879 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.763962 kubelet[2390]: W0129 10:56:15.763903 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.763962 kubelet[2390]: E0129 10:56:15.763924 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:56:15.764279 kubelet[2390]: E0129 10:56:15.764254 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.764339 kubelet[2390]: W0129 10:56:15.764277 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.764339 kubelet[2390]: E0129 10:56:15.764298 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.764595 kubelet[2390]: E0129 10:56:15.764570 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.764653 kubelet[2390]: W0129 10:56:15.764594 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.764653 kubelet[2390]: E0129 10:56:15.764614 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.764921 kubelet[2390]: E0129 10:56:15.764897 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.764978 kubelet[2390]: W0129 10:56:15.764919 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.764978 kubelet[2390]: E0129 10:56:15.764938 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.765272 kubelet[2390]: E0129 10:56:15.765246 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.765332 kubelet[2390]: W0129 10:56:15.765273 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.765332 kubelet[2390]: E0129 10:56:15.765294 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.765596 kubelet[2390]: E0129 10:56:15.765571 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.765653 kubelet[2390]: W0129 10:56:15.765594 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.765653 kubelet[2390]: E0129 10:56:15.765614 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:56:15.765896 kubelet[2390]: E0129 10:56:15.765873 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.765961 kubelet[2390]: W0129 10:56:15.765895 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.765961 kubelet[2390]: E0129 10:56:15.765915 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.833244 kubelet[2390]: E0129 10:56:15.833192 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.833244 kubelet[2390]: W0129 10:56:15.833229 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.833450 kubelet[2390]: E0129 10:56:15.833260 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.833640 kubelet[2390]: E0129 10:56:15.833607 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.833640 kubelet[2390]: W0129 10:56:15.833637 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.833764 kubelet[2390]: E0129 10:56:15.833677 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.834079 kubelet[2390]: E0129 10:56:15.834040 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.834079 kubelet[2390]: W0129 10:56:15.834068 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.834290 kubelet[2390]: E0129 10:56:15.834112 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.834490 kubelet[2390]: E0129 10:56:15.834449 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.834490 kubelet[2390]: W0129 10:56:15.834475 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.834728 kubelet[2390]: E0129 10:56:15.834513 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:56:15.834826 kubelet[2390]: E0129 10:56:15.834792 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.834826 kubelet[2390]: W0129 10:56:15.834819 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.834932 kubelet[2390]: E0129 10:56:15.834848 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.835268 kubelet[2390]: E0129 10:56:15.835241 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.835348 kubelet[2390]: W0129 10:56:15.835267 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.835789 kubelet[2390]: E0129 10:56:15.835451 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.835789 kubelet[2390]: E0129 10:56:15.835542 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.835789 kubelet[2390]: W0129 10:56:15.835557 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.835789 kubelet[2390]: E0129 10:56:15.835576 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.836011 kubelet[2390]: E0129 10:56:15.835847 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.836011 kubelet[2390]: W0129 10:56:15.835862 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.836011 kubelet[2390]: E0129 10:56:15.835888 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.836241 kubelet[2390]: E0129 10:56:15.836214 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.836300 kubelet[2390]: W0129 10:56:15.836239 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.836300 kubelet[2390]: E0129 10:56:15.836277 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:56:15.836747 kubelet[2390]: E0129 10:56:15.836650 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.836747 kubelet[2390]: W0129 10:56:15.836676 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.836747 kubelet[2390]: E0129 10:56:15.836718 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.837947 kubelet[2390]: E0129 10:56:15.837886 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.837947 kubelet[2390]: W0129 10:56:15.837931 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.838114 kubelet[2390]: E0129 10:56:15.837964 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:15.840226 kubelet[2390]: E0129 10:56:15.838698 2390 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:56:15.840226 kubelet[2390]: W0129 10:56:15.838742 2390 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:56:15.840226 kubelet[2390]: E0129 10:56:15.838769 2390 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:56:16.076797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169897224.mount: Deactivated successfully. 
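The repeated driver-call.go and plugins.go entries above come from the kubelet's FlexVolume prober: the plugin directory nodeagent~uds exists, but the uds executable it expects there does not, so running the driver with the single argument init produces no output at all, and unmarshalling that empty output is exactly what yields "unexpected end of JSON input". A minimal Go sketch of the probe, with a simplified status struct standing in for the kubelet's real types:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a simplified stand-in for the JSON object a FlexVolume
// driver is expected to print in response to "init".
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// callInit runs "<driver> init" and decodes its stdout. A missing binary
// makes the exec fail and leaves the output empty, and json.Unmarshal on
// empty input fails with "unexpected end of JSON input", matching the log.
func callInit(driver string) (*DriverStatus, error) {
	out, execErr := exec.Command(driver, "init").Output()

	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	// Path taken from the directory named in the log entries.
	_, err := callInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println("probe result:", err)
}
```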
Jan 29 10:56:16.201842 containerd[1934]: time="2025-01-29T10:56:16.201776758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:16.203263 containerd[1934]: time="2025-01-29T10:56:16.203191618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 29 10:56:16.204351 containerd[1934]: time="2025-01-29T10:56:16.204262430Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:16.207885 containerd[1934]: time="2025-01-29T10:56:16.207783965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:16.209583 containerd[1934]: time="2025-01-29T10:56:16.209401188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.360205301s" Jan 29 10:56:16.209583 containerd[1934]: time="2025-01-29T10:56:16.209452834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 29 10:56:16.213192 containerd[1934]: time="2025-01-29T10:56:16.212857523Z" level=info msg="CreateContainer within sandbox \"1b833f21f832b35c76fcc8caefa0a0e34c774fe9d29a775943f0a9c78d18c39d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 10:56:16.232809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4202911980.mount: Deactivated successfully. Jan 29 10:56:16.240214 containerd[1934]: time="2025-01-29T10:56:16.240089096Z" level=info msg="CreateContainer within sandbox \"1b833f21f832b35c76fcc8caefa0a0e34c774fe9d29a775943f0a9c78d18c39d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"da2fb6b90d7d5c7f45f79b62cfe2b7f66a9a5a81083d977896aa057d060ca34c\"" Jan 29 10:56:16.241338 containerd[1934]: time="2025-01-29T10:56:16.241287884Z" level=info msg="StartContainer for \"da2fb6b90d7d5c7f45f79b62cfe2b7f66a9a5a81083d977896aa057d060ca34c\"" Jan 29 10:56:16.292470 systemd[1]: Started cri-containerd-da2fb6b90d7d5c7f45f79b62cfe2b7f66a9a5a81083d977896aa057d060ca34c.scope - libcontainer container da2fb6b90d7d5c7f45f79b62cfe2b7f66a9a5a81083d977896aa057d060ca34c. Jan 29 10:56:16.348147 containerd[1934]: time="2025-01-29T10:56:16.347998275Z" level=info msg="StartContainer for \"da2fb6b90d7d5c7f45f79b62cfe2b7f66a9a5a81083d977896aa057d060ca34c\" returns successfully" Jan 29 10:56:16.368542 systemd[1]: cri-containerd-da2fb6b90d7d5c7f45f79b62cfe2b7f66a9a5a81083d977896aa057d060ca34c.scope: Deactivated successfully. 
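The flexvol-driver container that just ran and exited is built from the pod2daemon-flexvol image pulled above; given the flexvol-driver-host host-path volume registered for calico-node-k8rdl earlier, its purpose is to install the uds driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, the same path the FlexVolume probes keep failing on. A hedged sketch of that install step: only the destination directory comes from the log, the source path and copy logic are assumptions for illustration.

```go
package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	src := "/usr/local/bin/flexvol" // hypothetical binary location inside the image
	dstDir := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"

	if err := os.MkdirAll(dstDir, 0o755); err != nil {
		log.Fatal(err)
	}
	in, err := os.Open(src)
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	// Write to a temporary name and rename, so the kubelet never execs a
	// half-written driver binary.
	tmp := filepath.Join(dstDir, ".uds.tmp")
	out, err := os.OpenFile(tmp, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(out, in); err != nil {
		log.Fatal(err)
	}
	if err := out.Close(); err != nil {
		log.Fatal(err)
	}
	if err := os.Rename(tmp, filepath.Join(dstDir, "uds")); err != nil {
		log.Fatal(err)
	}
}
```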
Jan 29 10:56:16.487046 containerd[1934]: time="2025-01-29T10:56:16.486932178Z" level=info msg="shim disconnected" id=da2fb6b90d7d5c7f45f79b62cfe2b7f66a9a5a81083d977896aa057d060ca34c namespace=k8s.io Jan 29 10:56:16.487046 containerd[1934]: time="2025-01-29T10:56:16.487016783Z" level=warning msg="cleaning up after shim disconnected" id=da2fb6b90d7d5c7f45f79b62cfe2b7f66a9a5a81083d977896aa057d060ca34c namespace=k8s.io Jan 29 10:56:16.487046 containerd[1934]: time="2025-01-29T10:56:16.487038300Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:56:16.503181 kubelet[2390]: E0129 10:56:16.503112 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:16.701525 kubelet[2390]: E0129 10:56:16.701365 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2ckh" podUID="17870272-1d43-424d-9b22-5db660d97e38" Jan 29 10:56:16.733779 containerd[1934]: time="2025-01-29T10:56:16.733452848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 10:56:16.768084 kubelet[2390]: I0129 10:56:16.767970 2390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p7ph9" podStartSLOduration=5.930217587 podStartE2EDuration="7.767949919s" podCreationTimestamp="2025-01-29 10:56:09 +0000 UTC" firstStartedPulling="2025-01-29 10:56:13.008544083 +0000 UTC m=+4.543925908" lastFinishedPulling="2025-01-29 10:56:14.846276427 +0000 UTC m=+6.381658240" observedRunningTime="2025-01-29 10:56:15.74752977 +0000 UTC m=+7.282911607" watchObservedRunningTime="2025-01-29 10:56:16.767949919 +0000 UTC m=+8.303331756" Jan 29 10:56:17.024245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da2fb6b90d7d5c7f45f79b62cfe2b7f66a9a5a81083d977896aa057d060ca34c-rootfs.mount: Deactivated successfully. 
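The pod_startup_latency_tracker entry above reports two figures for kube-proxy-p7ph9: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). Re-deriving both from the timestamps printed in the entry reproduces the logged values, up to nanosecond-level monotonic-clock rounding. A quick Go check using those timestamps (with the trailing "m=+..." monotonic readings dropped):

```go
package main

import (
	"fmt"
	"time"
)

// Layout for Go's default time.Time formatting as it appears in the log;
// fractional seconds in the input are accepted even without a marker.
const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-29 10:56:09 +0000 UTC")
	firstPull := mustParse("2025-01-29 10:56:13.008544083 +0000 UTC")
	lastPull := mustParse("2025-01-29 10:56:14.846276427 +0000 UTC")
	running := mustParse("2025-01-29 10:56:16.767949919 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus image-pull time

	fmt.Println("E2E:", e2e) // 7.767949919s, as logged
	fmt.Println("SLO:", slo) // ~5.9302s; the logged 5.930217587s differs only by clock rounding
}
```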
Jan 29 10:56:17.503815 kubelet[2390]: E0129 10:56:17.503748 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:18.504587 kubelet[2390]: E0129 10:56:18.504523 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:18.700944 kubelet[2390]: E0129 10:56:18.700881 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2ckh" podUID="17870272-1d43-424d-9b22-5db660d97e38" Jan 29 10:56:19.504969 kubelet[2390]: E0129 10:56:19.504748 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:19.975870 containerd[1934]: time="2025-01-29T10:56:19.975806182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:19.977381 containerd[1934]: time="2025-01-29T10:56:19.977294974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 29 10:56:19.978813 containerd[1934]: time="2025-01-29T10:56:19.978757150Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:19.982847 containerd[1934]: time="2025-01-29T10:56:19.982550373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:19.985547 containerd[1934]: time="2025-01-29T10:56:19.985410186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.251903581s" Jan 29 10:56:19.985547 containerd[1934]: time="2025-01-29T10:56:19.985461244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 29 10:56:19.988789 containerd[1934]: time="2025-01-29T10:56:19.988555984Z" level=info msg="CreateContainer within sandbox \"1b833f21f832b35c76fcc8caefa0a0e34c774fe9d29a775943f0a9c78d18c39d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 10:56:20.006868 containerd[1934]: time="2025-01-29T10:56:20.006785329Z" level=info msg="CreateContainer within sandbox \"1b833f21f832b35c76fcc8caefa0a0e34c774fe9d29a775943f0a9c78d18c39d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dcd6a33725fa45932e788e45727c1ca48259ae1b1b49416cdcd3c04a0b301e4f\"" Jan 29 10:56:20.008084 containerd[1934]: time="2025-01-29T10:56:20.008012711Z" level=info msg="StartContainer for \"dcd6a33725fa45932e788e45727c1ca48259ae1b1b49416cdcd3c04a0b301e4f\"" Jan 29 10:56:20.060470 systemd[1]: Started cri-containerd-dcd6a33725fa45932e788e45727c1ca48259ae1b1b49416cdcd3c04a0b301e4f.scope - libcontainer container dcd6a33725fa45932e788e45727c1ca48259ae1b1b49416cdcd3c04a0b301e4f. 
Jan 29 10:56:20.116826 containerd[1934]: time="2025-01-29T10:56:20.116421800Z" level=info msg="StartContainer for \"dcd6a33725fa45932e788e45727c1ca48259ae1b1b49416cdcd3c04a0b301e4f\" returns successfully" Jan 29 10:56:20.505343 kubelet[2390]: E0129 10:56:20.505138 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:20.701761 kubelet[2390]: E0129 10:56:20.701691 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w2ckh" podUID="17870272-1d43-424d-9b22-5db660d97e38" Jan 29 10:56:21.053051 containerd[1934]: time="2025-01-29T10:56:21.052939661Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 10:56:21.059429 systemd[1]: cri-containerd-dcd6a33725fa45932e788e45727c1ca48259ae1b1b49416cdcd3c04a0b301e4f.scope: Deactivated successfully. Jan 29 10:56:21.075576 kubelet[2390]: I0129 10:56:21.074625 2390 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 10:56:21.097790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcd6a33725fa45932e788e45727c1ca48259ae1b1b49416cdcd3c04a0b301e4f-rootfs.mount: Deactivated successfully. Jan 29 10:56:21.505522 kubelet[2390]: E0129 10:56:21.505417 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:21.948470 containerd[1934]: time="2025-01-29T10:56:21.948351758Z" level=info msg="shim disconnected" id=dcd6a33725fa45932e788e45727c1ca48259ae1b1b49416cdcd3c04a0b301e4f namespace=k8s.io Jan 29 10:56:21.948470 containerd[1934]: time="2025-01-29T10:56:21.948426397Z" level=warning msg="cleaning up after shim disconnected" id=dcd6a33725fa45932e788e45727c1ca48259ae1b1b49416cdcd3c04a0b301e4f namespace=k8s.io Jan 29 10:56:21.948470 containerd[1934]: time="2025-01-29T10:56:21.948450205Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:56:22.506616 kubelet[2390]: E0129 10:56:22.506558 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:22.710448 systemd[1]: Created slice kubepods-besteffort-pod17870272_1d43_424d_9b22_5db660d97e38.slice - libcontainer container kubepods-besteffort-pod17870272_1d43_424d_9b22_5db660d97e38.slice. 
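The containerd error above, "no network config found in /etc/cni/net.d: cni plugin not initialized", comes from the runtime reloading its CNI configuration after a write to that directory (the calico-kubeconfig file) before any usable network config file exists there. A minimal Go version of that discovery step, assuming the conventional file extensions rather than containerd's actual loader; an empty directory reproduces the logged condition.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
)

// findCNIConfig returns the lexically first CNI config file in dir, which
// is the usual selection rule for /etc/cni/net.d.
func findCNIConfig(dir string) (string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return "", err
	}
	var candidates []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			candidates = append(candidates, e.Name())
		}
	}
	if len(candidates) == 0 {
		return "", fmt.Errorf("no network config found in %s", dir)
	}
	sort.Strings(candidates)
	return filepath.Join(dir, candidates[0]), nil
}

func main() {
	cfg, err := findCNIConfig("/etc/cni/net.d")
	fmt.Println(cfg, err)
}
```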
Jan 29 10:56:22.715352 containerd[1934]: time="2025-01-29T10:56:22.715132810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:0,}" Jan 29 10:56:22.760355 containerd[1934]: time="2025-01-29T10:56:22.759425717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 10:56:22.837302 containerd[1934]: time="2025-01-29T10:56:22.837176834Z" level=error msg="Failed to destroy network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:22.840021 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43-shm.mount: Deactivated successfully. Jan 29 10:56:22.842763 containerd[1934]: time="2025-01-29T10:56:22.841627255Z" level=error msg="encountered an error cleaning up failed sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:22.842763 containerd[1934]: time="2025-01-29T10:56:22.841763459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:22.842998 kubelet[2390]: E0129 10:56:22.842528 2390 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:22.842998 kubelet[2390]: E0129 10:56:22.842737 2390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:22.842998 kubelet[2390]: E0129 10:56:22.842800 2390 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:22.843258 kubelet[2390]: E0129 10:56:22.842904 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2ckh" podUID="17870272-1d43-424d-9b22-5db660d97e38" Jan 29 10:56:22.845871 kubelet[2390]: W0129 10:56:22.845717 2390 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:172.31.16.43" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '172.31.16.43' and this object Jan 29 10:56:22.846311 kubelet[2390]: E0129 10:56:22.846221 2390 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:172.31.16.43\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node '172.31.16.43' and this object" logger="UnhandledError" Jan 29 10:56:22.851944 systemd[1]: Created slice kubepods-besteffort-podf16310ba_dbbb_4373_bf50_efe2d6339fd7.slice - libcontainer container kubepods-besteffort-podf16310ba_dbbb_4373_bf50_efe2d6339fd7.slice. Jan 29 10:56:22.981351 kubelet[2390]: I0129 10:56:22.981287 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcnwd\" (UniqueName: \"kubernetes.io/projected/f16310ba-dbbb-4373-bf50-efe2d6339fd7-kube-api-access-kcnwd\") pod \"nginx-deployment-8587fbcb89-2r4js\" (UID: \"f16310ba-dbbb-4373-bf50-efe2d6339fd7\") " pod="default/nginx-deployment-8587fbcb89-2r4js" Jan 29 10:56:23.507445 kubelet[2390]: E0129 10:56:23.507382 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:23.772090 kubelet[2390]: I0129 10:56:23.770524 2390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43" Jan 29 10:56:23.772255 containerd[1934]: time="2025-01-29T10:56:23.771908981Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\"" Jan 29 10:56:23.773748 containerd[1934]: time="2025-01-29T10:56:23.773336674Z" level=info msg="Ensure that sandbox a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43 in task-service has been cleanup successfully" Jan 29 10:56:23.773748 containerd[1934]: time="2025-01-29T10:56:23.776123037Z" level=info msg="TearDown network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" successfully" Jan 29 10:56:23.773748 containerd[1934]: time="2025-01-29T10:56:23.776184830Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" returns successfully" Jan 29 10:56:23.777845 containerd[1934]: time="2025-01-29T10:56:23.777610389Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:1,}" Jan 29 10:56:23.776179 systemd[1]: run-netns-cni\x2df338597f\x2d0863\x2d73e9\x2d75f8\x2d03b2b2b9f924.mount: Deactivated successfully. Jan 29 10:56:23.960822 containerd[1934]: time="2025-01-29T10:56:23.956577133Z" level=error msg="Failed to destroy network for sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:23.960822 containerd[1934]: time="2025-01-29T10:56:23.957181631Z" level=error msg="encountered an error cleaning up failed sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:23.960822 containerd[1934]: time="2025-01-29T10:56:23.957284131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:23.961082 kubelet[2390]: E0129 10:56:23.959329 2390 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:23.961082 kubelet[2390]: E0129 10:56:23.959408 2390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:23.961082 kubelet[2390]: E0129 10:56:23.959441 2390 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:23.961663 kubelet[2390]: E0129 10:56:23.959538 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2ckh" podUID="17870272-1d43-424d-9b22-5db660d97e38" Jan 29 10:56:23.963907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56-shm.mount: Deactivated successfully. Jan 29 10:56:24.059358 containerd[1934]: time="2025-01-29T10:56:24.059282487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:0,}" Jan 29 10:56:24.233812 containerd[1934]: time="2025-01-29T10:56:24.233750654Z" level=error msg="Failed to destroy network for sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:24.234596 containerd[1934]: time="2025-01-29T10:56:24.234547967Z" level=error msg="encountered an error cleaning up failed sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:24.234809 containerd[1934]: time="2025-01-29T10:56:24.234770468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:24.235784 kubelet[2390]: E0129 10:56:24.235311 2390 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:24.235784 kubelet[2390]: E0129 10:56:24.235392 2390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-2r4js" Jan 29 10:56:24.235784 kubelet[2390]: E0129 10:56:24.235425 2390 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="default/nginx-deployment-8587fbcb89-2r4js" Jan 29 10:56:24.236027 kubelet[2390]: E0129 10:56:24.235491 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-2r4js_default(f16310ba-dbbb-4373-bf50-efe2d6339fd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-2r4js_default(f16310ba-dbbb-4373-bf50-efe2d6339fd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-2r4js" podUID="f16310ba-dbbb-4373-bf50-efe2d6339fd7" Jan 29 10:56:24.508324 kubelet[2390]: E0129 10:56:24.508146 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:24.781227 kubelet[2390]: I0129 10:56:24.780988 2390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56" Jan 29 10:56:24.786040 containerd[1934]: time="2025-01-29T10:56:24.785955747Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\"" Jan 29 10:56:24.787735 containerd[1934]: time="2025-01-29T10:56:24.787396022Z" level=info msg="Ensure that sandbox 4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56 in task-service has been cleanup successfully" Jan 29 10:56:24.787853 kubelet[2390]: I0129 10:56:24.787805 2390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4" Jan 29 10:56:24.791184 containerd[1934]: time="2025-01-29T10:56:24.789465047Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\"" Jan 29 10:56:24.791184 containerd[1934]: time="2025-01-29T10:56:24.789756657Z" level=info msg="Ensure that sandbox efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4 in task-service has been cleanup successfully" Jan 29 10:56:24.792921 systemd[1]: run-netns-cni\x2de4edc68f\x2d3c2f\x2d3df7\x2dae87\x2d5f64ba76ed4f.mount: Deactivated successfully. 
Jan 29 10:56:24.796970 containerd[1934]: time="2025-01-29T10:56:24.794320038Z" level=info msg="TearDown network for sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" successfully" Jan 29 10:56:24.796970 containerd[1934]: time="2025-01-29T10:56:24.794396656Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" returns successfully" Jan 29 10:56:24.798043 containerd[1934]: time="2025-01-29T10:56:24.797966334Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\"" Jan 29 10:56:24.798498 containerd[1934]: time="2025-01-29T10:56:24.798433405Z" level=info msg="TearDown network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" successfully" Jan 29 10:56:24.798681 containerd[1934]: time="2025-01-29T10:56:24.798573519Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" returns successfully" Jan 29 10:56:24.799107 containerd[1934]: time="2025-01-29T10:56:24.798943582Z" level=info msg="TearDown network for sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" successfully" Jan 29 10:56:24.799107 containerd[1934]: time="2025-01-29T10:56:24.798975989Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" returns successfully" Jan 29 10:56:24.799001 systemd[1]: run-netns-cni\x2d0abcd907\x2d59c8\x2d460d\x2dc5d0\x2d5f1840c51f34.mount: Deactivated successfully. Jan 29 10:56:24.801001 containerd[1934]: time="2025-01-29T10:56:24.800923263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:2,}" Jan 29 10:56:24.802663 containerd[1934]: time="2025-01-29T10:56:24.802486525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:1,}" Jan 29 10:56:25.024786 containerd[1934]: time="2025-01-29T10:56:25.024710926Z" level=error msg="Failed to destroy network for sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:25.026719 containerd[1934]: time="2025-01-29T10:56:25.026657792Z" level=error msg="encountered an error cleaning up failed sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:25.028199 containerd[1934]: time="2025-01-29T10:56:25.028107459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:25.029700 kubelet[2390]: E0129 10:56:25.029203 2390 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:25.029700 kubelet[2390]: E0129 10:56:25.029290 2390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:25.029700 kubelet[2390]: E0129 10:56:25.029324 2390 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:25.029974 kubelet[2390]: E0129 10:56:25.029405 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2ckh" podUID="17870272-1d43-424d-9b22-5db660d97e38" Jan 29 10:56:25.047529 containerd[1934]: time="2025-01-29T10:56:25.047378962Z" level=error msg="Failed to destroy network for sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:25.048703 containerd[1934]: time="2025-01-29T10:56:25.048373829Z" level=error msg="encountered an error cleaning up failed sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:25.048703 containerd[1934]: time="2025-01-29T10:56:25.048471244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:25.049554 kubelet[2390]: E0129 10:56:25.048984 2390 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:25.049928 kubelet[2390]: E0129 10:56:25.049258 2390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-2r4js" Jan 29 10:56:25.049928 kubelet[2390]: E0129 10:56:25.049738 2390 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-2r4js" Jan 29 10:56:25.049928 kubelet[2390]: E0129 10:56:25.049844 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-2r4js_default(f16310ba-dbbb-4373-bf50-efe2d6339fd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-2r4js_default(f16310ba-dbbb-4373-bf50-efe2d6339fd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-2r4js" podUID="f16310ba-dbbb-4373-bf50-efe2d6339fd7" Jan 29 10:56:25.509070 kubelet[2390]: E0129 10:56:25.508935 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:25.778902 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef-shm.mount: Deactivated successfully. Jan 29 10:56:25.795589 kubelet[2390]: I0129 10:56:25.795471 2390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef" Jan 29 10:56:25.797660 containerd[1934]: time="2025-01-29T10:56:25.797395893Z" level=info msg="StopPodSandbox for \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\"" Jan 29 10:56:25.800328 containerd[1934]: time="2025-01-29T10:56:25.797683665Z" level=info msg="Ensure that sandbox 22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef in task-service has been cleanup successfully" Jan 29 10:56:25.802080 systemd[1]: run-netns-cni\x2d9bfd49a7\x2d3831\x2d7379\x2d155e\x2d53c35d6c9e67.mount: Deactivated successfully. 
Jan 29 10:56:25.802942 containerd[1934]: time="2025-01-29T10:56:25.801004707Z" level=info msg="TearDown network for sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" successfully" Jan 29 10:56:25.802942 containerd[1934]: time="2025-01-29T10:56:25.802249469Z" level=info msg="StopPodSandbox for \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" returns successfully" Jan 29 10:56:25.804504 containerd[1934]: time="2025-01-29T10:56:25.803435579Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\"" Jan 29 10:56:25.804504 containerd[1934]: time="2025-01-29T10:56:25.803599597Z" level=info msg="TearDown network for sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" successfully" Jan 29 10:56:25.804504 containerd[1934]: time="2025-01-29T10:56:25.803621606Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" returns successfully" Jan 29 10:56:25.809109 containerd[1934]: time="2025-01-29T10:56:25.809050930Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\"" Jan 29 10:56:25.809527 kubelet[2390]: I0129 10:56:25.809495 2390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc" Jan 29 10:56:25.811018 containerd[1934]: time="2025-01-29T10:56:25.810448135Z" level=info msg="TearDown network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" successfully" Jan 29 10:56:25.811018 containerd[1934]: time="2025-01-29T10:56:25.810495404Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" returns successfully" Jan 29 10:56:25.811018 containerd[1934]: time="2025-01-29T10:56:25.810761790Z" level=info msg="StopPodSandbox for \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\"" Jan 29 10:56:25.811018 containerd[1934]: time="2025-01-29T10:56:25.811009826Z" level=info msg="Ensure that sandbox 3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc in task-service has been cleanup successfully" Jan 29 10:56:25.813458 containerd[1934]: time="2025-01-29T10:56:25.811781064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:3,}" Jan 29 10:56:25.814749 containerd[1934]: time="2025-01-29T10:56:25.814682749Z" level=info msg="TearDown network for sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" successfully" Jan 29 10:56:25.814749 containerd[1934]: time="2025-01-29T10:56:25.814735271Z" level=info msg="StopPodSandbox for \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" returns successfully" Jan 29 10:56:25.815735 systemd[1]: run-netns-cni\x2d83986b7d\x2d8c64\x2d45da\x2d3885\x2d5b1e4e23f459.mount: Deactivated successfully. 
Jan 29 10:56:25.818724 containerd[1934]: time="2025-01-29T10:56:25.818384457Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\"" Jan 29 10:56:25.818724 containerd[1934]: time="2025-01-29T10:56:25.818560229Z" level=info msg="TearDown network for sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" successfully" Jan 29 10:56:25.818724 containerd[1934]: time="2025-01-29T10:56:25.818583594Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" returns successfully" Jan 29 10:56:25.821664 containerd[1934]: time="2025-01-29T10:56:25.821504288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:2,}" Jan 29 10:56:26.067675 containerd[1934]: time="2025-01-29T10:56:26.067604006Z" level=error msg="Failed to destroy network for sandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:26.068924 containerd[1934]: time="2025-01-29T10:56:26.068794158Z" level=error msg="encountered an error cleaning up failed sandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:26.069395 containerd[1934]: time="2025-01-29T10:56:26.068922254Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:26.070046 kubelet[2390]: E0129 10:56:26.069315 2390 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:26.070046 kubelet[2390]: E0129 10:56:26.069739 2390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:26.070046 kubelet[2390]: E0129 10:56:26.069797 2390 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:26.070382 kubelet[2390]: E0129 10:56:26.069988 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2ckh" podUID="17870272-1d43-424d-9b22-5db660d97e38" Jan 29 10:56:26.079896 containerd[1934]: time="2025-01-29T10:56:26.079625304Z" level=error msg="Failed to destroy network for sandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:26.080625 containerd[1934]: time="2025-01-29T10:56:26.080504956Z" level=error msg="encountered an error cleaning up failed sandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:26.080809 containerd[1934]: time="2025-01-29T10:56:26.080755463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:26.081670 kubelet[2390]: E0129 10:56:26.081323 2390 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:26.081670 kubelet[2390]: E0129 10:56:26.081423 2390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-2r4js" Jan 29 10:56:26.081670 kubelet[2390]: E0129 10:56:26.081486 2390 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-2r4js" Jan 29 10:56:26.081996 kubelet[2390]: E0129 10:56:26.081582 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-2r4js_default(f16310ba-dbbb-4373-bf50-efe2d6339fd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-2r4js_default(f16310ba-dbbb-4373-bf50-efe2d6339fd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-2r4js" podUID="f16310ba-dbbb-4373-bf50-efe2d6339fd7" Jan 29 10:56:26.510604 kubelet[2390]: E0129 10:56:26.509406 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:26.778652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791-shm.mount: Deactivated successfully. Jan 29 10:56:26.820059 kubelet[2390]: I0129 10:56:26.819998 2390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791" Jan 29 10:56:26.822082 containerd[1934]: time="2025-01-29T10:56:26.822019472Z" level=info msg="StopPodSandbox for \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\"" Jan 29 10:56:26.824743 containerd[1934]: time="2025-01-29T10:56:26.824302649Z" level=info msg="Ensure that sandbox e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791 in task-service has been cleanup successfully" Jan 29 10:56:26.824743 containerd[1934]: time="2025-01-29T10:56:26.824653294Z" level=info msg="TearDown network for sandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\" successfully" Jan 29 10:56:26.824743 containerd[1934]: time="2025-01-29T10:56:26.824684502Z" level=info msg="StopPodSandbox for \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\" returns successfully" Jan 29 10:56:26.830746 containerd[1934]: time="2025-01-29T10:56:26.828395014Z" level=info msg="StopPodSandbox for \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\"" Jan 29 10:56:26.829099 systemd[1]: run-netns-cni\x2d65a87634\x2dbb1e\x2d7aca\x2d3fda\x2d028c9c06bb09.mount: Deactivated successfully. 
Jan 29 10:56:26.832402 containerd[1934]: time="2025-01-29T10:56:26.831514174Z" level=info msg="TearDown network for sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" successfully" Jan 29 10:56:26.832402 containerd[1934]: time="2025-01-29T10:56:26.831566924Z" level=info msg="StopPodSandbox for \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" returns successfully" Jan 29 10:56:26.835144 containerd[1934]: time="2025-01-29T10:56:26.834835576Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\"" Jan 29 10:56:26.835144 containerd[1934]: time="2025-01-29T10:56:26.834996824Z" level=info msg="TearDown network for sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" successfully" Jan 29 10:56:26.835144 containerd[1934]: time="2025-01-29T10:56:26.835018197Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" returns successfully" Jan 29 10:56:26.837421 containerd[1934]: time="2025-01-29T10:56:26.836877051Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\"" Jan 29 10:56:26.837421 containerd[1934]: time="2025-01-29T10:56:26.837040194Z" level=info msg="TearDown network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" successfully" Jan 29 10:56:26.837421 containerd[1934]: time="2025-01-29T10:56:26.837067324Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" returns successfully" Jan 29 10:56:26.838336 containerd[1934]: time="2025-01-29T10:56:26.838293387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:4,}" Jan 29 10:56:26.841129 kubelet[2390]: I0129 10:56:26.841080 2390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24" Jan 29 10:56:26.845190 containerd[1934]: time="2025-01-29T10:56:26.843717541Z" level=info msg="StopPodSandbox for \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\"" Jan 29 10:56:26.845190 containerd[1934]: time="2025-01-29T10:56:26.844096144Z" level=info msg="Ensure that sandbox 20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24 in task-service has been cleanup successfully" Jan 29 10:56:26.851250 containerd[1934]: time="2025-01-29T10:56:26.848849510Z" level=info msg="TearDown network for sandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\" successfully" Jan 29 10:56:26.851645 containerd[1934]: time="2025-01-29T10:56:26.851574331Z" level=info msg="StopPodSandbox for \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\" returns successfully" Jan 29 10:56:26.853543 systemd[1]: run-netns-cni\x2dedc50cb7\x2dc6f7\x2da3aa\x2d1f9c\x2d087afe465b3e.mount: Deactivated successfully. 
Jan 29 10:56:26.856884 containerd[1934]: time="2025-01-29T10:56:26.856544956Z" level=info msg="StopPodSandbox for \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\"" Jan 29 10:56:26.856884 containerd[1934]: time="2025-01-29T10:56:26.856708351Z" level=info msg="TearDown network for sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" successfully" Jan 29 10:56:26.856884 containerd[1934]: time="2025-01-29T10:56:26.856742078Z" level=info msg="StopPodSandbox for \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" returns successfully" Jan 29 10:56:26.860427 containerd[1934]: time="2025-01-29T10:56:26.859968680Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\"" Jan 29 10:56:26.863849 containerd[1934]: time="2025-01-29T10:56:26.860591228Z" level=info msg="TearDown network for sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" successfully" Jan 29 10:56:26.863849 containerd[1934]: time="2025-01-29T10:56:26.860625723Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" returns successfully" Jan 29 10:56:26.863849 containerd[1934]: time="2025-01-29T10:56:26.862049783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:3,}" Jan 29 10:56:27.060309 containerd[1934]: time="2025-01-29T10:56:27.060245728Z" level=error msg="Failed to destroy network for sandbox \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:27.061022 containerd[1934]: time="2025-01-29T10:56:27.060981344Z" level=error msg="encountered an error cleaning up failed sandbox \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:27.062318 containerd[1934]: time="2025-01-29T10:56:27.062263346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:27.063322 kubelet[2390]: E0129 10:56:27.062714 2390 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:27.063322 kubelet[2390]: E0129 10:56:27.062797 2390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:27.063322 kubelet[2390]: E0129 10:56:27.062831 2390 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:27.063598 kubelet[2390]: E0129 10:56:27.062892 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2ckh" podUID="17870272-1d43-424d-9b22-5db660d97e38" Jan 29 10:56:27.083050 containerd[1934]: time="2025-01-29T10:56:27.082980043Z" level=error msg="Failed to destroy network for sandbox \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:27.083851 containerd[1934]: time="2025-01-29T10:56:27.083802651Z" level=error msg="encountered an error cleaning up failed sandbox \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:27.084221 containerd[1934]: time="2025-01-29T10:56:27.084179155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:27.085074 kubelet[2390]: E0129 10:56:27.084820 2390 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:27.085074 kubelet[2390]: E0129 10:56:27.084903 2390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-2r4js" Jan 29 10:56:27.085074 kubelet[2390]: E0129 10:56:27.084937 2390 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-2r4js" Jan 29 10:56:27.085557 kubelet[2390]: E0129 10:56:27.085011 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-2r4js_default(f16310ba-dbbb-4373-bf50-efe2d6339fd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-2r4js_default(f16310ba-dbbb-4373-bf50-efe2d6339fd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-2r4js" podUID="f16310ba-dbbb-4373-bf50-efe2d6339fd7" Jan 29 10:56:27.510034 kubelet[2390]: E0129 10:56:27.509886 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:27.778685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c-shm.mount: Deactivated successfully. Jan 29 10:56:27.858490 kubelet[2390]: I0129 10:56:27.857732 2390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c" Jan 29 10:56:27.863447 containerd[1934]: time="2025-01-29T10:56:27.862898543Z" level=info msg="StopPodSandbox for \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\"" Jan 29 10:56:27.863447 containerd[1934]: time="2025-01-29T10:56:27.863227622Z" level=info msg="Ensure that sandbox 0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c in task-service has been cleanup successfully" Jan 29 10:56:27.867264 containerd[1934]: time="2025-01-29T10:56:27.864212390Z" level=info msg="TearDown network for sandbox \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\" successfully" Jan 29 10:56:27.867264 containerd[1934]: time="2025-01-29T10:56:27.864253073Z" level=info msg="StopPodSandbox for \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\" returns successfully" Jan 29 10:56:27.868778 containerd[1934]: time="2025-01-29T10:56:27.868732125Z" level=info msg="StopPodSandbox for \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\"" Jan 29 10:56:27.869529 systemd[1]: run-netns-cni\x2d5243d2ef\x2dfc19\x2d4850\x2d549c\x2d2fc07e8720ad.mount: Deactivated successfully. 
Jan 29 10:56:27.871848 containerd[1934]: time="2025-01-29T10:56:27.871806906Z" level=info msg="TearDown network for sandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\" successfully" Jan 29 10:56:27.872137 containerd[1934]: time="2025-01-29T10:56:27.871969137Z" level=info msg="StopPodSandbox for \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\" returns successfully" Jan 29 10:56:27.873384 containerd[1934]: time="2025-01-29T10:56:27.873337857Z" level=info msg="StopPodSandbox for \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\"" Jan 29 10:56:27.874012 containerd[1934]: time="2025-01-29T10:56:27.873733227Z" level=info msg="TearDown network for sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" successfully" Jan 29 10:56:27.874012 containerd[1934]: time="2025-01-29T10:56:27.873772123Z" level=info msg="StopPodSandbox for \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" returns successfully" Jan 29 10:56:27.874538 kubelet[2390]: I0129 10:56:27.874506 2390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15" Jan 29 10:56:27.876598 containerd[1934]: time="2025-01-29T10:56:27.874914120Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\"" Jan 29 10:56:27.876598 containerd[1934]: time="2025-01-29T10:56:27.876286390Z" level=info msg="TearDown network for sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" successfully" Jan 29 10:56:27.876598 containerd[1934]: time="2025-01-29T10:56:27.876322839Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" returns successfully" Jan 29 10:56:27.877228 containerd[1934]: time="2025-01-29T10:56:27.877182785Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\"" Jan 29 10:56:27.877455 containerd[1934]: time="2025-01-29T10:56:27.877371439Z" level=info msg="TearDown network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" successfully" Jan 29 10:56:27.877455 containerd[1934]: time="2025-01-29T10:56:27.877416104Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" returns successfully" Jan 29 10:56:27.877589 containerd[1934]: time="2025-01-29T10:56:27.877551001Z" level=info msg="StopPodSandbox for \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\"" Jan 29 10:56:27.878496 containerd[1934]: time="2025-01-29T10:56:27.878345028Z" level=info msg="Ensure that sandbox 961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15 in task-service has been cleanup successfully" Jan 29 10:56:27.882224 containerd[1934]: time="2025-01-29T10:56:27.880825135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:5,}" Jan 29 10:56:27.884095 systemd[1]: run-netns-cni\x2dd49632ab\x2d5202\x2d523e\x2d8399\x2d0289b82aa6c1.mount: Deactivated successfully. 
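The teardown chains above grow by one entry per retry: the log's Attempt:k for a pod first runs StopPodSandbox/TearDown for the k sandbox IDs accumulated by attempts 0 through k-1 (here 0d7b…, e62c…, 2204…, 4f49…, a388… for csi-node-driver-w2ckh), so through Attempt:n a total of 1+2+…+n = n(n+1)/2 teardown rounds have been issued for that pod. A toy illustration of that arithmetic, assuming (as in this log) that no attempt succeeds:

```python
"""Toy model of the cleanup amplification in the retry loop above: Attempt:k
first re-tears-down the k sandboxes left behind by attempts 0..k-1. Purely
illustrative arithmetic; it does not talk to containerd."""

def teardowns_through(attempt: int) -> int:
    # rounds issued by Attempt:1 .. Attempt:attempt (Attempt:0 tears down nothing)
    return sum(range(attempt + 1))            # = attempt * (attempt + 1) // 2

for k in range(6):
    print(f"through Attempt:{k}: {teardowns_through(k)} StopPodSandbox/TearDown rounds")
```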
Jan 29 10:56:27.885316 containerd[1934]: time="2025-01-29T10:56:27.884252588Z" level=info msg="TearDown network for sandbox \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\" successfully" Jan 29 10:56:27.885316 containerd[1934]: time="2025-01-29T10:56:27.884293752Z" level=info msg="StopPodSandbox for \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\" returns successfully" Jan 29 10:56:27.885605 containerd[1934]: time="2025-01-29T10:56:27.885557116Z" level=info msg="StopPodSandbox for \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\"" Jan 29 10:56:27.885750 containerd[1934]: time="2025-01-29T10:56:27.885718195Z" level=info msg="TearDown network for sandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\" successfully" Jan 29 10:56:27.885842 containerd[1934]: time="2025-01-29T10:56:27.885748744Z" level=info msg="StopPodSandbox for \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\" returns successfully" Jan 29 10:56:27.890801 containerd[1934]: time="2025-01-29T10:56:27.890447514Z" level=info msg="StopPodSandbox for \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\"" Jan 29 10:56:27.890801 containerd[1934]: time="2025-01-29T10:56:27.890699304Z" level=info msg="TearDown network for sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" successfully" Jan 29 10:56:27.890801 containerd[1934]: time="2025-01-29T10:56:27.890722548Z" level=info msg="StopPodSandbox for \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" returns successfully" Jan 29 10:56:27.892521 containerd[1934]: time="2025-01-29T10:56:27.891811519Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\"" Jan 29 10:56:27.892521 containerd[1934]: time="2025-01-29T10:56:27.891958829Z" level=info msg="TearDown network for sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" successfully" Jan 29 10:56:27.892521 containerd[1934]: time="2025-01-29T10:56:27.892001852Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" returns successfully" Jan 29 10:56:27.893389 containerd[1934]: time="2025-01-29T10:56:27.893324083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:4,}" Jan 29 10:56:28.194080 containerd[1934]: time="2025-01-29T10:56:28.193998215Z" level=error msg="Failed to destroy network for sandbox \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:28.194778 containerd[1934]: time="2025-01-29T10:56:28.194582527Z" level=error msg="encountered an error cleaning up failed sandbox \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:28.194778 containerd[1934]: time="2025-01-29T10:56:28.194689046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup 
network for sandbox \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:28.195105 kubelet[2390]: E0129 10:56:28.194955 2390 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:28.195365 kubelet[2390]: E0129 10:56:28.195039 2390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:28.195365 kubelet[2390]: E0129 10:56:28.195244 2390 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:28.196417 kubelet[2390]: E0129 10:56:28.195342 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2ckh" podUID="17870272-1d43-424d-9b22-5db660d97e38" Jan 29 10:56:28.203719 containerd[1934]: time="2025-01-29T10:56:28.203425223Z" level=error msg="Failed to destroy network for sandbox \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:28.205707 containerd[1934]: time="2025-01-29T10:56:28.205455867Z" level=error msg="encountered an error cleaning up failed sandbox \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:28.205707 containerd[1934]: time="2025-01-29T10:56:28.205572629Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:28.206681 kubelet[2390]: E0129 10:56:28.206084 2390 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:28.206681 kubelet[2390]: E0129 10:56:28.206209 2390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-2r4js" Jan 29 10:56:28.206681 kubelet[2390]: E0129 10:56:28.206292 2390 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-2r4js" Jan 29 10:56:28.206955 kubelet[2390]: E0129 10:56:28.206396 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-2r4js_default(f16310ba-dbbb-4373-bf50-efe2d6339fd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-2r4js_default(f16310ba-dbbb-4373-bf50-efe2d6339fd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-2r4js" podUID="f16310ba-dbbb-4373-bf50-efe2d6339fd7" Jan 29 10:56:28.510420 kubelet[2390]: E0129 10:56:28.510275 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:28.779973 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa-shm.mount: Deactivated successfully. 
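Editor's note: every sandbox failure above carries the same Calico CNI diagnostic: the plugin stats /var/lib/calico/nodename before doing any add or delete work, and the file is missing because the calico/node container (which writes it once it starts and has /var/lib/calico/ mounted) is not up yet. A minimal sketch of that precondition check, not Calico's actual source, with the path and wording taken from the log entries above:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// nodenameFile is the file the CNI plugin stats in the entries above;
// calico/node writes it on the host when it starts.
const nodenameFile = "/var/lib/calico/nodename"

func calicoNodeName() (string, error) {
	b, err := os.ReadFile(nodenameFile)
	if errors.Is(err, fs.ErrNotExist) {
		// Mirrors the diagnostic seen in the containerd/kubelet entries above.
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := calicoNodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", name)
}
```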
Jan 29 10:56:28.883195 kubelet[2390]: I0129 10:56:28.882086 2390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa" Jan 29 10:56:28.883749 containerd[1934]: time="2025-01-29T10:56:28.883400288Z" level=info msg="StopPodSandbox for \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\"" Jan 29 10:56:28.883749 containerd[1934]: time="2025-01-29T10:56:28.883682842Z" level=info msg="Ensure that sandbox dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa in task-service has been cleanup successfully" Jan 29 10:56:28.884287 containerd[1934]: time="2025-01-29T10:56:28.883988282Z" level=info msg="TearDown network for sandbox \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\" successfully" Jan 29 10:56:28.884287 containerd[1934]: time="2025-01-29T10:56:28.884015988Z" level=info msg="StopPodSandbox for \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\" returns successfully" Jan 29 10:56:28.887195 containerd[1934]: time="2025-01-29T10:56:28.884799196Z" level=info msg="StopPodSandbox for \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\"" Jan 29 10:56:28.887195 containerd[1934]: time="2025-01-29T10:56:28.884958932Z" level=info msg="TearDown network for sandbox \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\" successfully" Jan 29 10:56:28.887195 containerd[1934]: time="2025-01-29T10:56:28.884982320Z" level=info msg="StopPodSandbox for \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\" returns successfully" Jan 29 10:56:28.887195 containerd[1934]: time="2025-01-29T10:56:28.885712335Z" level=info msg="StopPodSandbox for \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\"" Jan 29 10:56:28.887195 containerd[1934]: time="2025-01-29T10:56:28.885858866Z" level=info msg="TearDown network for sandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\" successfully" Jan 29 10:56:28.887195 containerd[1934]: time="2025-01-29T10:56:28.885880839Z" level=info msg="StopPodSandbox for \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\" returns successfully" Jan 29 10:56:28.887195 containerd[1934]: time="2025-01-29T10:56:28.886638236Z" level=info msg="StopPodSandbox for \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\"" Jan 29 10:56:28.887195 containerd[1934]: time="2025-01-29T10:56:28.886781480Z" level=info msg="TearDown network for sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" successfully" Jan 29 10:56:28.887195 containerd[1934]: time="2025-01-29T10:56:28.886804929Z" level=info msg="StopPodSandbox for \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" returns successfully" Jan 29 10:56:28.891200 containerd[1934]: time="2025-01-29T10:56:28.887810362Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\"" Jan 29 10:56:28.891200 containerd[1934]: time="2025-01-29T10:56:28.887989864Z" level=info msg="TearDown network for sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" successfully" Jan 29 10:56:28.891200 containerd[1934]: time="2025-01-29T10:56:28.888012820Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" returns successfully" Jan 29 10:56:28.889791 systemd[1]: run-netns-cni\x2db62f3922\x2d6ddf\x2d905f\x2d0b41\x2db7af53849d68.mount: Deactivated 
successfully. Jan 29 10:56:28.893790 containerd[1934]: time="2025-01-29T10:56:28.893737161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:5,}" Jan 29 10:56:28.923091 containerd[1934]: time="2025-01-29T10:56:28.923023894Z" level=info msg="StopPodSandbox for \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\"" Jan 29 10:56:28.923608 kubelet[2390]: I0129 10:56:28.921786 2390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0" Jan 29 10:56:28.925573 containerd[1934]: time="2025-01-29T10:56:28.925497105Z" level=info msg="Ensure that sandbox 0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0 in task-service has been cleanup successfully" Jan 29 10:56:28.925967 containerd[1934]: time="2025-01-29T10:56:28.925837375Z" level=info msg="TearDown network for sandbox \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\" successfully" Jan 29 10:56:28.925967 containerd[1934]: time="2025-01-29T10:56:28.925873117Z" level=info msg="StopPodSandbox for \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\" returns successfully" Jan 29 10:56:28.929647 containerd[1934]: time="2025-01-29T10:56:28.929407569Z" level=info msg="StopPodSandbox for \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\"" Jan 29 10:56:28.930767 containerd[1934]: time="2025-01-29T10:56:28.930614921Z" level=info msg="TearDown network for sandbox \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\" successfully" Jan 29 10:56:28.930990 containerd[1934]: time="2025-01-29T10:56:28.930654621Z" level=info msg="StopPodSandbox for \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\" returns successfully" Jan 29 10:56:28.931996 systemd[1]: run-netns-cni\x2d8c037f06\x2db636\x2d9941\x2d53a9\x2d6702c5284e67.mount: Deactivated successfully. 
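Editor's note: each retry above first re-runs the StopPodSandbox/TearDown chain for every previously failed sandbox ID, then calls RunPodSandbox again with an incremented Attempt counter (4, 5, 6, ... for the same pod). The same state can be inspected directly over the CRI socket; a rough sketch using the CRI runtime API, where the containerd socket path is an assumption for this host and only fields visible in the log are printed:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path for containerd's CRI plugin on this node.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// List sandboxes that are no longer ready, i.e. the failed attempts the
	// kubelet keeps stopping and tearing down in the entries above.
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{
		Filter: &runtimeapi.PodSandboxFilter{
			State: &runtimeapi.PodSandboxStateValue{
				State: runtimeapi.PodSandboxState_SANDBOX_NOTREADY,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		fmt.Printf("%s attempt=%d pod=%s/%s\n",
			sb.Id, sb.Metadata.Attempt, sb.Metadata.Namespace, sb.Metadata.Name)
	}
}
```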
Jan 29 10:56:28.935398 containerd[1934]: time="2025-01-29T10:56:28.935348281Z" level=info msg="StopPodSandbox for \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\"" Jan 29 10:56:28.935894 containerd[1934]: time="2025-01-29T10:56:28.935736898Z" level=info msg="TearDown network for sandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\" successfully" Jan 29 10:56:28.935894 containerd[1934]: time="2025-01-29T10:56:28.935771609Z" level=info msg="StopPodSandbox for \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\" returns successfully" Jan 29 10:56:28.937779 containerd[1934]: time="2025-01-29T10:56:28.937613180Z" level=info msg="StopPodSandbox for \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\"" Jan 29 10:56:28.938857 containerd[1934]: time="2025-01-29T10:56:28.938799914Z" level=info msg="TearDown network for sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" successfully" Jan 29 10:56:28.939189 containerd[1934]: time="2025-01-29T10:56:28.939096358Z" level=info msg="StopPodSandbox for \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" returns successfully" Jan 29 10:56:28.941612 containerd[1934]: time="2025-01-29T10:56:28.941563536Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\"" Jan 29 10:56:28.943485 containerd[1934]: time="2025-01-29T10:56:28.943252483Z" level=info msg="TearDown network for sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" successfully" Jan 29 10:56:28.943485 containerd[1934]: time="2025-01-29T10:56:28.943293670Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" returns successfully" Jan 29 10:56:28.945200 containerd[1934]: time="2025-01-29T10:56:28.945038593Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\"" Jan 29 10:56:28.945988 containerd[1934]: time="2025-01-29T10:56:28.945821382Z" level=info msg="TearDown network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" successfully" Jan 29 10:56:28.945988 containerd[1934]: time="2025-01-29T10:56:28.945861070Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" returns successfully" Jan 29 10:56:28.947489 containerd[1934]: time="2025-01-29T10:56:28.947424128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:6,}" Jan 29 10:56:29.099195 containerd[1934]: time="2025-01-29T10:56:29.099075437Z" level=error msg="Failed to destroy network for sandbox \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:29.100174 containerd[1934]: time="2025-01-29T10:56:29.099857146Z" level=error msg="encountered an error cleaning up failed sandbox \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:29.100174 containerd[1934]: time="2025-01-29T10:56:29.099969805Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:29.100532 kubelet[2390]: E0129 10:56:29.100323 2390 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:29.100532 kubelet[2390]: E0129 10:56:29.100401 2390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-2r4js" Jan 29 10:56:29.100532 kubelet[2390]: E0129 10:56:29.100433 2390 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-2r4js" Jan 29 10:56:29.100806 kubelet[2390]: E0129 10:56:29.100501 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-2r4js_default(f16310ba-dbbb-4373-bf50-efe2d6339fd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-2r4js_default(f16310ba-dbbb-4373-bf50-efe2d6339fd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-2r4js" podUID="f16310ba-dbbb-4373-bf50-efe2d6339fd7" Jan 29 10:56:29.116688 containerd[1934]: time="2025-01-29T10:56:29.116620272Z" level=error msg="Failed to destroy network for sandbox \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:29.117916 containerd[1934]: time="2025-01-29T10:56:29.117415246Z" level=error msg="encountered an error cleaning up failed sandbox \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 
10:56:29.117916 containerd[1934]: time="2025-01-29T10:56:29.117507024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:29.118398 kubelet[2390]: E0129 10:56:29.118138 2390 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:56:29.118398 kubelet[2390]: E0129 10:56:29.118313 2390 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:29.118709 kubelet[2390]: E0129 10:56:29.118416 2390 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w2ckh" Jan 29 10:56:29.118709 kubelet[2390]: E0129 10:56:29.118518 2390 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w2ckh_calico-system(17870272-1d43-424d-9b22-5db660d97e38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w2ckh" podUID="17870272-1d43-424d-9b22-5db660d97e38" Jan 29 10:56:29.311948 containerd[1934]: time="2025-01-29T10:56:29.311884608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:29.313411 containerd[1934]: time="2025-01-29T10:56:29.313340643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 10:56:29.314108 containerd[1934]: time="2025-01-29T10:56:29.314035564Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:29.317532 containerd[1934]: time="2025-01-29T10:56:29.317420474Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:29.318947 containerd[1934]: time="2025-01-29T10:56:29.318742717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.559260329s" Jan 29 10:56:29.318947 containerd[1934]: time="2025-01-29T10:56:29.318793679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 10:56:29.331545 containerd[1934]: time="2025-01-29T10:56:29.331372819Z" level=info msg="CreateContainer within sandbox \"1b833f21f832b35c76fcc8caefa0a0e34c774fe9d29a775943f0a9c78d18c39d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 10:56:29.351409 containerd[1934]: time="2025-01-29T10:56:29.351251147Z" level=info msg="CreateContainer within sandbox \"1b833f21f832b35c76fcc8caefa0a0e34c774fe9d29a775943f0a9c78d18c39d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"59b54b7aacaef29b7a788eb3739f29c43c58b4481f327cdf94512170c59b4be9\"" Jan 29 10:56:29.353014 containerd[1934]: time="2025-01-29T10:56:29.352951680Z" level=info msg="StartContainer for \"59b54b7aacaef29b7a788eb3739f29c43c58b4481f327cdf94512170c59b4be9\"" Jan 29 10:56:29.398504 systemd[1]: Started cri-containerd-59b54b7aacaef29b7a788eb3739f29c43c58b4481f327cdf94512170c59b4be9.scope - libcontainer container 59b54b7aacaef29b7a788eb3739f29c43c58b4481f327cdf94512170c59b4be9. Jan 29 10:56:29.456721 containerd[1934]: time="2025-01-29T10:56:29.456494467Z" level=info msg="StartContainer for \"59b54b7aacaef29b7a788eb3739f29c43c58b4481f327cdf94512170c59b4be9\" returns successfully" Jan 29 10:56:29.497809 kubelet[2390]: E0129 10:56:29.497747 2390 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:29.510670 kubelet[2390]: E0129 10:56:29.510598 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:29.691250 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 10:56:29.691383 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 10:56:29.788180 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b-shm.mount: Deactivated successfully. Jan 29 10:56:29.790079 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9-shm.mount: Deactivated successfully. Jan 29 10:56:29.790270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount810569211.mount: Deactivated successfully. 
Jan 29 10:56:29.933205 kubelet[2390]: I0129 10:56:29.933136 2390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b" Jan 29 10:56:29.936652 containerd[1934]: time="2025-01-29T10:56:29.935391020Z" level=info msg="StopPodSandbox for \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\"" Jan 29 10:56:29.939134 containerd[1934]: time="2025-01-29T10:56:29.938284081Z" level=info msg="Ensure that sandbox 794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b in task-service has been cleanup successfully" Jan 29 10:56:29.945677 systemd[1]: run-netns-cni\x2d33105188\x2df543\x2d6796\x2d76b0\x2d4452afa004d2.mount: Deactivated successfully. Jan 29 10:56:29.948220 containerd[1934]: time="2025-01-29T10:56:29.946525087Z" level=info msg="TearDown network for sandbox \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\" successfully" Jan 29 10:56:29.948220 containerd[1934]: time="2025-01-29T10:56:29.947267299Z" level=info msg="StopPodSandbox for \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\" returns successfully" Jan 29 10:56:29.951060 containerd[1934]: time="2025-01-29T10:56:29.950844906Z" level=info msg="StopPodSandbox for \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\"" Jan 29 10:56:29.951200 containerd[1934]: time="2025-01-29T10:56:29.951066843Z" level=info msg="TearDown network for sandbox \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\" successfully" Jan 29 10:56:29.951200 containerd[1934]: time="2025-01-29T10:56:29.951092210Z" level=info msg="StopPodSandbox for \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\" returns successfully" Jan 29 10:56:29.951806 containerd[1934]: time="2025-01-29T10:56:29.951672923Z" level=info msg="StopPodSandbox for \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\"" Jan 29 10:56:29.951864 containerd[1934]: time="2025-01-29T10:56:29.951838285Z" level=info msg="TearDown network for sandbox \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\" successfully" Jan 29 10:56:29.951956 containerd[1934]: time="2025-01-29T10:56:29.951860881Z" level=info msg="StopPodSandbox for \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\" returns successfully" Jan 29 10:56:29.953535 containerd[1934]: time="2025-01-29T10:56:29.953095784Z" level=info msg="StopPodSandbox for \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\"" Jan 29 10:56:29.953535 containerd[1934]: time="2025-01-29T10:56:29.953358212Z" level=info msg="TearDown network for sandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\" successfully" Jan 29 10:56:29.953535 containerd[1934]: time="2025-01-29T10:56:29.953382308Z" level=info msg="StopPodSandbox for \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\" returns successfully" Jan 29 10:56:29.954356 containerd[1934]: time="2025-01-29T10:56:29.954040575Z" level=info msg="StopPodSandbox for \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\"" Jan 29 10:56:29.954356 containerd[1934]: time="2025-01-29T10:56:29.954233378Z" level=info msg="TearDown network for sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" successfully" Jan 29 10:56:29.954356 containerd[1934]: time="2025-01-29T10:56:29.954257019Z" level=info msg="StopPodSandbox for \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" returns 
successfully" Jan 29 10:56:29.954356 containerd[1934]: time="2025-01-29T10:56:29.954872371Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\"" Jan 29 10:56:29.954356 containerd[1934]: time="2025-01-29T10:56:29.955028881Z" level=info msg="TearDown network for sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" successfully" Jan 29 10:56:29.954356 containerd[1934]: time="2025-01-29T10:56:29.955051909Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" returns successfully" Jan 29 10:56:29.955547 kubelet[2390]: I0129 10:56:29.955005 2390 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9" Jan 29 10:56:29.956242 containerd[1934]: time="2025-01-29T10:56:29.956126164Z" level=info msg="StopPodSandbox for \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\"" Jan 29 10:56:29.956569 containerd[1934]: time="2025-01-29T10:56:29.956414164Z" level=info msg="Ensure that sandbox d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9 in task-service has been cleanup successfully" Jan 29 10:56:29.956569 containerd[1934]: time="2025-01-29T10:56:29.958658661Z" level=info msg="TearDown network for sandbox \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\" successfully" Jan 29 10:56:29.956569 containerd[1934]: time="2025-01-29T10:56:29.958702103Z" level=info msg="StopPodSandbox for \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\" returns successfully" Jan 29 10:56:29.956569 containerd[1934]: time="2025-01-29T10:56:29.958797455Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\"" Jan 29 10:56:29.956569 containerd[1934]: time="2025-01-29T10:56:29.958976646Z" level=info msg="TearDown network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" successfully" Jan 29 10:56:29.956569 containerd[1934]: time="2025-01-29T10:56:29.959004448Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" returns successfully" Jan 29 10:56:29.961380 containerd[1934]: time="2025-01-29T10:56:29.960698228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:7,}" Jan 29 10:56:29.961380 containerd[1934]: time="2025-01-29T10:56:29.961070906Z" level=info msg="StopPodSandbox for \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\"" Jan 29 10:56:29.961380 containerd[1934]: time="2025-01-29T10:56:29.961234360Z" level=info msg="TearDown network for sandbox \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\" successfully" Jan 29 10:56:29.961380 containerd[1934]: time="2025-01-29T10:56:29.961257701Z" level=info msg="StopPodSandbox for \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\" returns successfully" Jan 29 10:56:29.962036 systemd[1]: run-netns-cni\x2d8d8cc5c0\x2d662f\x2d8575\x2d6e5f\x2dba6d0b65e5db.mount: Deactivated successfully. 
Jan 29 10:56:29.963333 containerd[1934]: time="2025-01-29T10:56:29.962651703Z" level=info msg="StopPodSandbox for \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\"" Jan 29 10:56:29.963333 containerd[1934]: time="2025-01-29T10:56:29.962832465Z" level=info msg="TearDown network for sandbox \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\" successfully" Jan 29 10:56:29.963333 containerd[1934]: time="2025-01-29T10:56:29.962856681Z" level=info msg="StopPodSandbox for \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\" returns successfully" Jan 29 10:56:29.963984 containerd[1934]: time="2025-01-29T10:56:29.963941586Z" level=info msg="StopPodSandbox for \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\"" Jan 29 10:56:29.964786 containerd[1934]: time="2025-01-29T10:56:29.964748854Z" level=info msg="TearDown network for sandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\" successfully" Jan 29 10:56:29.964936 containerd[1934]: time="2025-01-29T10:56:29.964909466Z" level=info msg="StopPodSandbox for \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\" returns successfully" Jan 29 10:56:29.965977 containerd[1934]: time="2025-01-29T10:56:29.965855469Z" level=info msg="StopPodSandbox for \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\"" Jan 29 10:56:29.966119 containerd[1934]: time="2025-01-29T10:56:29.966013022Z" level=info msg="TearDown network for sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" successfully" Jan 29 10:56:29.966119 containerd[1934]: time="2025-01-29T10:56:29.966035810Z" level=info msg="StopPodSandbox for \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" returns successfully" Jan 29 10:56:29.967365 containerd[1934]: time="2025-01-29T10:56:29.967057435Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\"" Jan 29 10:56:29.967610 containerd[1934]: time="2025-01-29T10:56:29.967564314Z" level=info msg="TearDown network for sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" successfully" Jan 29 10:56:29.967736 containerd[1934]: time="2025-01-29T10:56:29.967603258Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" returns successfully" Jan 29 10:56:29.968612 containerd[1934]: time="2025-01-29T10:56:29.968543360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:6,}" Jan 29 10:56:30.237760 (udev-worker)[3323]: Network interface NamePolicy= disabled on kernel command line. 
Jan 29 10:56:30.240148 systemd-networkd[1835]: calie4cad9b444d: Link UP Jan 29 10:56:30.241538 systemd-networkd[1835]: calie4cad9b444d: Gained carrier Jan 29 10:56:30.257842 kubelet[2390]: I0129 10:56:30.257649 2390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k8rdl" podStartSLOduration=4.945579917 podStartE2EDuration="21.257624452s" podCreationTimestamp="2025-01-29 10:56:09 +0000 UTC" firstStartedPulling="2025-01-29 10:56:13.00858996 +0000 UTC m=+4.543971785" lastFinishedPulling="2025-01-29 10:56:29.320634507 +0000 UTC m=+20.856016320" observedRunningTime="2025-01-29 10:56:30.002576155 +0000 UTC m=+21.537957980" watchObservedRunningTime="2025-01-29 10:56:30.257624452 +0000 UTC m=+21.793006289" Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.052 [INFO][3337] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.082 [INFO][3337] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.16.43-k8s-csi--node--driver--w2ckh-eth0 csi-node-driver- calico-system 17870272-1d43-424d-9b22-5db660d97e38 914 0 2025-01-29 10:56:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172.31.16.43 csi-node-driver-w2ckh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie4cad9b444d [] []}} ContainerID="0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" Namespace="calico-system" Pod="csi-node-driver-w2ckh" WorkloadEndpoint="172.31.16.43-k8s-csi--node--driver--w2ckh-" Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.082 [INFO][3337] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" Namespace="calico-system" Pod="csi-node-driver-w2ckh" WorkloadEndpoint="172.31.16.43-k8s-csi--node--driver--w2ckh-eth0" Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.160 [INFO][3361] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" HandleID="k8s-pod-network.0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" Workload="172.31.16.43-k8s-csi--node--driver--w2ckh-eth0" Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.181 [INFO][3361] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" HandleID="k8s-pod-network.0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" Workload="172.31.16.43-k8s-csi--node--driver--w2ckh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b82e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.16.43", "pod":"csi-node-driver-w2ckh", "timestamp":"2025-01-29 10:56:30.160840932 +0000 UTC"}, Hostname:"172.31.16.43", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.182 [INFO][3361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.182 [INFO][3361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.182 [INFO][3361] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.16.43' Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.186 [INFO][3361] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" host="172.31.16.43" Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.193 [INFO][3361] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.16.43" Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.200 [INFO][3361] ipam/ipam.go 489: Trying affinity for 192.168.90.128/26 host="172.31.16.43" Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.203 [INFO][3361] ipam/ipam.go 155: Attempting to load block cidr=192.168.90.128/26 host="172.31.16.43" Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.207 [INFO][3361] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.90.128/26 host="172.31.16.43" Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.207 [INFO][3361] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.128/26 handle="k8s-pod-network.0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" host="172.31.16.43" Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.209 [INFO][3361] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74 Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.215 [INFO][3361] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.90.128/26 handle="k8s-pod-network.0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" host="172.31.16.43" Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.223 [INFO][3361] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.90.129/26] block=192.168.90.128/26 handle="k8s-pod-network.0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" host="172.31.16.43" Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.224 [INFO][3361] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.90.129/26] handle="k8s-pod-network.0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" host="172.31.16.43" Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.224 [INFO][3361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
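Editor's note: the ipam/ipam.go sequence above takes the host-wide lock, confirms this node's affinity for the 192.168.90.128/26 block, and claims 192.168.90.129 for csi-node-driver-w2ckh before releasing the lock. A /26 holds 64 addresses, so the block spans 192.168.90.128–192.168.90.191; a quick std-lib check of that range and of the membership claim (nothing Calico-specific):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and address copied from the ipam entries above.
	block := netip.MustParsePrefix("192.168.90.128/26")
	assigned := netip.MustParseAddr("192.168.90.129")

	fmt.Println("block contains assigned address:", block.Contains(assigned)) // true

	// Walk the block to show its full range; in the entries above .129 is the
	// first address the allocator hands out from it.
	first := block.Addr()
	last := first
	for a := first; block.Contains(a); a = a.Next() {
		last = a
	}
	fmt.Println("block range:", first, "-", last) // 192.168.90.128 - 192.168.90.191
}
```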
Jan 29 10:56:30.260742 containerd[1934]: 2025-01-29 10:56:30.224 [INFO][3361] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.129/26] IPv6=[] ContainerID="0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" HandleID="k8s-pod-network.0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" Workload="172.31.16.43-k8s-csi--node--driver--w2ckh-eth0" Jan 29 10:56:30.261829 containerd[1934]: 2025-01-29 10:56:30.228 [INFO][3337] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" Namespace="calico-system" Pod="csi-node-driver-w2ckh" WorkloadEndpoint="172.31.16.43-k8s-csi--node--driver--w2ckh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.43-k8s-csi--node--driver--w2ckh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"17870272-1d43-424d-9b22-5db660d97e38", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.43", ContainerID:"", Pod:"csi-node-driver-w2ckh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4cad9b444d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:56:30.261829 containerd[1934]: 2025-01-29 10:56:30.228 [INFO][3337] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.90.129/32] ContainerID="0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" Namespace="calico-system" Pod="csi-node-driver-w2ckh" WorkloadEndpoint="172.31.16.43-k8s-csi--node--driver--w2ckh-eth0" Jan 29 10:56:30.261829 containerd[1934]: 2025-01-29 10:56:30.228 [INFO][3337] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4cad9b444d ContainerID="0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" Namespace="calico-system" Pod="csi-node-driver-w2ckh" WorkloadEndpoint="172.31.16.43-k8s-csi--node--driver--w2ckh-eth0" Jan 29 10:56:30.261829 containerd[1934]: 2025-01-29 10:56:30.242 [INFO][3337] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" Namespace="calico-system" Pod="csi-node-driver-w2ckh" WorkloadEndpoint="172.31.16.43-k8s-csi--node--driver--w2ckh-eth0" Jan 29 10:56:30.261829 containerd[1934]: 2025-01-29 10:56:30.243 [INFO][3337] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" Namespace="calico-system" Pod="csi-node-driver-w2ckh" 
WorkloadEndpoint="172.31.16.43-k8s-csi--node--driver--w2ckh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.43-k8s-csi--node--driver--w2ckh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"17870272-1d43-424d-9b22-5db660d97e38", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.43", ContainerID:"0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74", Pod:"csi-node-driver-w2ckh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie4cad9b444d", MAC:"82:2a:9c:d5:01:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:56:30.261829 containerd[1934]: 2025-01-29 10:56:30.258 [INFO][3337] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74" Namespace="calico-system" Pod="csi-node-driver-w2ckh" WorkloadEndpoint="172.31.16.43-k8s-csi--node--driver--w2ckh-eth0" Jan 29 10:56:30.299567 containerd[1934]: time="2025-01-29T10:56:30.298698985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:56:30.299567 containerd[1934]: time="2025-01-29T10:56:30.298846031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:56:30.299567 containerd[1934]: time="2025-01-29T10:56:30.298879722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:56:30.299567 containerd[1934]: time="2025-01-29T10:56:30.299091548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:56:30.335473 systemd[1]: Started cri-containerd-0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74.scope - libcontainer container 0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74. 
Jan 29 10:56:30.343427 systemd-networkd[1835]: cali6976645dc4e: Link UP Jan 29 10:56:30.343850 systemd-networkd[1835]: cali6976645dc4e: Gained carrier Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.067 [INFO][3342] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.092 [INFO][3342] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0 nginx-deployment-8587fbcb89- default f16310ba-dbbb-4373-bf50-efe2d6339fd7 1011 0 2025-01-29 10:56:22 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.16.43 nginx-deployment-8587fbcb89-2r4js eth0 default [] [] [kns.default ksa.default.default] cali6976645dc4e [] []}} ContainerID="0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" Namespace="default" Pod="nginx-deployment-8587fbcb89-2r4js" WorkloadEndpoint="172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-" Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.092 [INFO][3342] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" Namespace="default" Pod="nginx-deployment-8587fbcb89-2r4js" WorkloadEndpoint="172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0" Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.187 [INFO][3372] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" HandleID="k8s-pod-network.0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" Workload="172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0" Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.203 [INFO][3372] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" HandleID="k8s-pod-network.0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" Workload="172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318e70), Attrs:map[string]string{"namespace":"default", "node":"172.31.16.43", "pod":"nginx-deployment-8587fbcb89-2r4js", "timestamp":"2025-01-29 10:56:30.187464253 +0000 UTC"}, Hostname:"172.31.16.43", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.203 [INFO][3372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.224 [INFO][3372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.224 [INFO][3372] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.16.43' Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.287 [INFO][3372] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" host="172.31.16.43" Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.294 [INFO][3372] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.16.43" Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.303 [INFO][3372] ipam/ipam.go 489: Trying affinity for 192.168.90.128/26 host="172.31.16.43" Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.307 [INFO][3372] ipam/ipam.go 155: Attempting to load block cidr=192.168.90.128/26 host="172.31.16.43" Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.311 [INFO][3372] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.90.128/26 host="172.31.16.43" Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.311 [INFO][3372] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.128/26 handle="k8s-pod-network.0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" host="172.31.16.43" Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.313 [INFO][3372] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3 Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.325 [INFO][3372] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.90.128/26 handle="k8s-pod-network.0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" host="172.31.16.43" Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.333 [INFO][3372] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.90.130/26] block=192.168.90.128/26 handle="k8s-pod-network.0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" host="172.31.16.43" Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.333 [INFO][3372] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.90.130/26] handle="k8s-pod-network.0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" host="172.31.16.43" Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.333 [INFO][3372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 10:56:30.366969 containerd[1934]: 2025-01-29 10:56:30.333 [INFO][3372] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.130/26] IPv6=[] ContainerID="0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" HandleID="k8s-pod-network.0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" Workload="172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0" Jan 29 10:56:30.368097 containerd[1934]: 2025-01-29 10:56:30.337 [INFO][3342] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" Namespace="default" Pod="nginx-deployment-8587fbcb89-2r4js" WorkloadEndpoint="172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"f16310ba-dbbb-4373-bf50-efe2d6339fd7", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.43", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-2r4js", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali6976645dc4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:56:30.368097 containerd[1934]: 2025-01-29 10:56:30.338 [INFO][3342] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.90.130/32] ContainerID="0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" Namespace="default" Pod="nginx-deployment-8587fbcb89-2r4js" WorkloadEndpoint="172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0" Jan 29 10:56:30.368097 containerd[1934]: 2025-01-29 10:56:30.338 [INFO][3342] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6976645dc4e ContainerID="0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" Namespace="default" Pod="nginx-deployment-8587fbcb89-2r4js" WorkloadEndpoint="172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0" Jan 29 10:56:30.368097 containerd[1934]: 2025-01-29 10:56:30.344 [INFO][3342] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" Namespace="default" Pod="nginx-deployment-8587fbcb89-2r4js" WorkloadEndpoint="172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0" Jan 29 10:56:30.368097 containerd[1934]: 2025-01-29 10:56:30.345 [INFO][3342] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" Namespace="default" Pod="nginx-deployment-8587fbcb89-2r4js" WorkloadEndpoint="172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"f16310ba-dbbb-4373-bf50-efe2d6339fd7", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 56, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.43", ContainerID:"0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3", Pod:"nginx-deployment-8587fbcb89-2r4js", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali6976645dc4e", MAC:"96:40:f8:19:50:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:56:30.368097 containerd[1934]: 2025-01-29 10:56:30.358 [INFO][3342] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3" Namespace="default" Pod="nginx-deployment-8587fbcb89-2r4js" WorkloadEndpoint="172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0" Jan 29 10:56:30.405970 containerd[1934]: time="2025-01-29T10:56:30.405910796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w2ckh,Uid:17870272-1d43-424d-9b22-5db660d97e38,Namespace:calico-system,Attempt:7,} returns sandbox id \"0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74\"" Jan 29 10:56:30.408993 containerd[1934]: time="2025-01-29T10:56:30.408294064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:56:30.408993 containerd[1934]: time="2025-01-29T10:56:30.408395665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:56:30.408993 containerd[1934]: time="2025-01-29T10:56:30.408490490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:56:30.409702 containerd[1934]: time="2025-01-29T10:56:30.409540277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:56:30.410536 containerd[1934]: time="2025-01-29T10:56:30.410482069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 10:56:30.447492 systemd[1]: Started cri-containerd-0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3.scope - libcontainer container 0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3. 
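Editor's note: both v3.WorkloadEndpoint objects written above follow the naming pattern visible in their fields: node, the literal "k8s", the pod name with its dashes doubled, and the interface, joined by single dashes. That is how nginx-deployment-8587fbcb89-2r4js on node 172.31.16.43 becomes "172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0". A sketch that rebuilds the two names seen in the log from their parts, inferred from the log itself rather than taken from Calico's code:

```go
package main

import (
	"fmt"
	"strings"
)

// workloadEndpointName rebuilds the WorkloadEndpoint object name as it appears
// in the entries above: dashes inside the pod name are doubled, single dashes
// separate node, "k8s", pod and interface.
func workloadEndpointName(node, pod, iface string) string {
	return fmt.Sprintf("%s-k8s-%s-%s", node, strings.ReplaceAll(pod, "-", "--"), iface)
}

func main() {
	fmt.Println(workloadEndpointName("172.31.16.43", "csi-node-driver-w2ckh", "eth0"))
	fmt.Println(workloadEndpointName("172.31.16.43", "nginx-deployment-8587fbcb89-2r4js", "eth0"))
	// 172.31.16.43-k8s-csi--node--driver--w2ckh-eth0
	// 172.31.16.43-k8s-nginx--deployment--8587fbcb89--2r4js-eth0
}
```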
Jan 29 10:56:30.506241 containerd[1934]: time="2025-01-29T10:56:30.503922346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2r4js,Uid:f16310ba-dbbb-4373-bf50-efe2d6339fd7,Namespace:default,Attempt:6,} returns sandbox id \"0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3\"" Jan 29 10:56:30.511580 kubelet[2390]: E0129 10:56:30.511483 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:31.512363 kubelet[2390]: E0129 10:56:31.512295 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:31.628206 kernel: bpftool[3610]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 10:56:31.776618 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 29 10:56:31.864467 containerd[1934]: time="2025-01-29T10:56:31.864391539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:31.868529 containerd[1934]: time="2025-01-29T10:56:31.868424018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 10:56:31.872043 containerd[1934]: time="2025-01-29T10:56:31.871938656Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:31.882200 containerd[1934]: time="2025-01-29T10:56:31.880790576Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:31.887663 containerd[1934]: time="2025-01-29T10:56:31.886336721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.475628277s" Jan 29 10:56:31.887906 containerd[1934]: time="2025-01-29T10:56:31.887856949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 10:56:31.891385 containerd[1934]: time="2025-01-29T10:56:31.890828198Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 10:56:31.894028 containerd[1934]: time="2025-01-29T10:56:31.893880971Z" level=info msg="CreateContainer within sandbox \"0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 10:56:31.924180 containerd[1934]: time="2025-01-29T10:56:31.922035795Z" level=info msg="CreateContainer within sandbox \"0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c39ed148327860517709c2eb3ca6e5c425bf3e55b2828a02c23b1c641b4bb787\"" Jan 29 10:56:31.924180 containerd[1934]: time="2025-01-29T10:56:31.923210391Z" level=info msg="StartContainer for \"c39ed148327860517709c2eb3ca6e5c425bf3e55b2828a02c23b1c641b4bb787\"" Jan 29 10:56:32.011598 systemd[1]: Started 
cri-containerd-c39ed148327860517709c2eb3ca6e5c425bf3e55b2828a02c23b1c641b4bb787.scope - libcontainer container c39ed148327860517709c2eb3ca6e5c425bf3e55b2828a02c23b1c641b4bb787. Jan 29 10:56:32.011986 systemd-networkd[1835]: cali6976645dc4e: Gained IPv6LL Jan 29 10:56:32.034541 (udev-worker)[3324]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:56:32.038379 systemd-networkd[1835]: vxlan.calico: Link UP Jan 29 10:56:32.038395 systemd-networkd[1835]: vxlan.calico: Gained carrier Jan 29 10:56:32.122627 containerd[1934]: time="2025-01-29T10:56:32.122510420Z" level=info msg="StartContainer for \"c39ed148327860517709c2eb3ca6e5c425bf3e55b2828a02c23b1c641b4bb787\" returns successfully" Jan 29 10:56:32.141012 systemd-networkd[1835]: calie4cad9b444d: Gained IPv6LL Jan 29 10:56:32.167191 kubelet[2390]: I0129 10:56:32.166693 2390 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 10:56:32.512541 kubelet[2390]: E0129 10:56:32.512471 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:32.914445 systemd[1]: run-containerd-runc-k8s.io-59b54b7aacaef29b7a788eb3739f29c43c58b4481f327cdf94512170c59b4be9-runc.TSu8Tt.mount: Deactivated successfully. Jan 29 10:56:33.166824 systemd-networkd[1835]: vxlan.calico: Gained IPv6LL Jan 29 10:56:33.513884 kubelet[2390]: E0129 10:56:33.513347 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:34.514317 kubelet[2390]: E0129 10:56:34.514253 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:35.268261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498812980.mount: Deactivated successfully. 
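The sequence just above — PullImage for ghcr.io/flatcar/calico/csi:v3.29.1, CreateContainer within the 0cf205… sandbox, then StartContainer for c39ed1… — is driven by kubelet over CRI, but the same pull/create/start flow can be reproduced against containerd[1934] directly with the containerd Go client. A rough sketch, assuming the default /run/containerd/containerd.sock socket and the k8s.io namespace; the container and snapshot IDs below are made up for the example.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd socket the log's containerd[1934] serves.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: fetch and unpack the CSI image referenced in the log.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: build a container from the image with a default OCI spec.
	container, err := client.NewContainer(ctx, "calico-csi-example",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-csi-example-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: create the task (the runc shim seen loading plugins above) and start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```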
Jan 29 10:56:35.515388 kubelet[2390]: E0129 10:56:35.515321 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:36.059930 ntpd[1914]: Listen normally on 7 vxlan.calico 192.168.90.128:123 Jan 29 10:56:36.060069 ntpd[1914]: Listen normally on 8 calie4cad9b444d [fe80::ecee:eeff:feee:eeee%3]:123 Jan 29 10:56:36.061672 ntpd[1914]: 29 Jan 10:56:36 ntpd[1914]: Listen normally on 7 vxlan.calico 192.168.90.128:123 Jan 29 10:56:36.061672 ntpd[1914]: 29 Jan 10:56:36 ntpd[1914]: Listen normally on 8 calie4cad9b444d [fe80::ecee:eeff:feee:eeee%3]:123 Jan 29 10:56:36.061672 ntpd[1914]: 29 Jan 10:56:36 ntpd[1914]: Listen normally on 9 cali6976645dc4e [fe80::ecee:eeff:feee:eeee%4]:123 Jan 29 10:56:36.061672 ntpd[1914]: 29 Jan 10:56:36 ntpd[1914]: Listen normally on 10 vxlan.calico [fe80::64e6:afff:fe43:67b8%5]:123 Jan 29 10:56:36.060170 ntpd[1914]: Listen normally on 9 cali6976645dc4e [fe80::ecee:eeff:feee:eeee%4]:123 Jan 29 10:56:36.060250 ntpd[1914]: Listen normally on 10 vxlan.calico [fe80::64e6:afff:fe43:67b8%5]:123 Jan 29 10:56:36.516597 kubelet[2390]: E0129 10:56:36.516386 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:36.589787 containerd[1934]: time="2025-01-29T10:56:36.589479404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:36.591595 containerd[1934]: time="2025-01-29T10:56:36.591442030Z" level=info msg="ImageCreate event name:\"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:36.591595 containerd[1934]: time="2025-01-29T10:56:36.591521982Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67680490" Jan 29 10:56:36.598078 containerd[1934]: time="2025-01-29T10:56:36.597992781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:36.600621 containerd[1934]: time="2025-01-29T10:56:36.599890627Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 4.709000396s" Jan 29 10:56:36.600621 containerd[1934]: time="2025-01-29T10:56:36.599948786Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 29 10:56:36.602787 containerd[1934]: time="2025-01-29T10:56:36.602728576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 10:56:36.604212 containerd[1934]: time="2025-01-29T10:56:36.604067646Z" level=info msg="CreateContainer within sandbox \"0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 10:56:36.627324 containerd[1934]: time="2025-01-29T10:56:36.625775155Z" level=info msg="CreateContainer within sandbox \"0c1955ab9e2a6a6b2fbcaf16e0685d88cdff4811adf22a1a0d24ddcb2f1d93c3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"e7af3c7f8069f440779a6d5ab930a63ba84f8799f59792c8de06ea1a68e16d2c\"" Jan 29 10:56:36.627324 containerd[1934]: time="2025-01-29T10:56:36.626872702Z" level=info msg="StartContainer for \"e7af3c7f8069f440779a6d5ab930a63ba84f8799f59792c8de06ea1a68e16d2c\"" Jan 29 10:56:36.689460 systemd[1]: Started cri-containerd-e7af3c7f8069f440779a6d5ab930a63ba84f8799f59792c8de06ea1a68e16d2c.scope - libcontainer container e7af3c7f8069f440779a6d5ab930a63ba84f8799f59792c8de06ea1a68e16d2c. Jan 29 10:56:36.736495 containerd[1934]: time="2025-01-29T10:56:36.736186294Z" level=info msg="StartContainer for \"e7af3c7f8069f440779a6d5ab930a63ba84f8799f59792c8de06ea1a68e16d2c\" returns successfully" Jan 29 10:56:37.516600 kubelet[2390]: E0129 10:56:37.516533 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:37.837982 containerd[1934]: time="2025-01-29T10:56:37.837926555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:37.840962 containerd[1934]: time="2025-01-29T10:56:37.840890825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 10:56:37.842474 containerd[1934]: time="2025-01-29T10:56:37.842431142Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:37.846314 containerd[1934]: time="2025-01-29T10:56:37.846267279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:37.847756 containerd[1934]: time="2025-01-29T10:56:37.847711285Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.244919428s" Jan 29 10:56:37.847941 containerd[1934]: time="2025-01-29T10:56:37.847908634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 10:56:37.851763 containerd[1934]: time="2025-01-29T10:56:37.851709209Z" level=info msg="CreateContainer within sandbox \"0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 10:56:37.875332 containerd[1934]: time="2025-01-29T10:56:37.875070450Z" level=info msg="CreateContainer within sandbox \"0cf205844458107673aa29df1b2d189daf30e80878880245ac2e4e2ae36a6f74\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4c49ee3fc22251883a6dd06a099da6af4e571d3e3459c76676dd635dac305fdf\"" Jan 29 10:56:37.877700 containerd[1934]: time="2025-01-29T10:56:37.877623025Z" level=info msg="StartContainer for \"4c49ee3fc22251883a6dd06a099da6af4e571d3e3459c76676dd635dac305fdf\"" Jan 29 10:56:37.932627 systemd[1]: 
run-containerd-runc-k8s.io-4c49ee3fc22251883a6dd06a099da6af4e571d3e3459c76676dd635dac305fdf-runc.MNQMec.mount: Deactivated successfully. Jan 29 10:56:37.944450 systemd[1]: Started cri-containerd-4c49ee3fc22251883a6dd06a099da6af4e571d3e3459c76676dd635dac305fdf.scope - libcontainer container 4c49ee3fc22251883a6dd06a099da6af4e571d3e3459c76676dd635dac305fdf. Jan 29 10:56:38.004074 containerd[1934]: time="2025-01-29T10:56:38.003952349Z" level=info msg="StartContainer for \"4c49ee3fc22251883a6dd06a099da6af4e571d3e3459c76676dd635dac305fdf\" returns successfully" Jan 29 10:56:38.092918 kubelet[2390]: I0129 10:56:38.092507 2390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-2r4js" podStartSLOduration=9.997863217999999 podStartE2EDuration="16.092484777s" podCreationTimestamp="2025-01-29 10:56:22 +0000 UTC" firstStartedPulling="2025-01-29 10:56:30.507048858 +0000 UTC m=+22.042430683" lastFinishedPulling="2025-01-29 10:56:36.601670429 +0000 UTC m=+28.137052242" observedRunningTime="2025-01-29 10:56:37.079704866 +0000 UTC m=+28.615086691" watchObservedRunningTime="2025-01-29 10:56:38.092484777 +0000 UTC m=+29.627866602" Jan 29 10:56:38.517327 kubelet[2390]: E0129 10:56:38.517191 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:38.656507 kubelet[2390]: I0129 10:56:38.656451 2390 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 10:56:38.656507 kubelet[2390]: I0129 10:56:38.656502 2390 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 10:56:39.517394 kubelet[2390]: E0129 10:56:39.517337 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:40.518461 kubelet[2390]: E0129 10:56:40.518385 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:41.519239 kubelet[2390]: E0129 10:56:41.519186 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:42.520051 kubelet[2390]: E0129 10:56:42.519981 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:43.521268 kubelet[2390]: E0129 10:56:43.521210 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:44.522238 kubelet[2390]: E0129 10:56:44.522150 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:44.772405 kubelet[2390]: I0129 10:56:44.772214 2390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w2ckh" podStartSLOduration=28.332437976 podStartE2EDuration="35.772149913s" podCreationTimestamp="2025-01-29 10:56:09 +0000 UTC" firstStartedPulling="2025-01-29 10:56:30.409860001 +0000 UTC m=+21.945241826" lastFinishedPulling="2025-01-29 10:56:37.849571938 +0000 UTC m=+29.384953763" observedRunningTime="2025-01-29 10:56:38.09279926 +0000 UTC m=+29.628181109" watchObservedRunningTime="2025-01-29 10:56:44.772149913 +0000 UTC m=+36.307531738" Jan 29 10:56:44.783411 systemd[1]: 
Created slice kubepods-besteffort-pod3ba3cb88_b1cd_414c_ad09_60050d538c91.slice - libcontainer container kubepods-besteffort-pod3ba3cb88_b1cd_414c_ad09_60050d538c91.slice. Jan 29 10:56:44.930131 kubelet[2390]: I0129 10:56:44.930085 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/3ba3cb88-b1cd-414c-ad09-60050d538c91-data\") pod \"nfs-server-provisioner-0\" (UID: \"3ba3cb88-b1cd-414c-ad09-60050d538c91\") " pod="default/nfs-server-provisioner-0" Jan 29 10:56:44.930425 kubelet[2390]: I0129 10:56:44.930367 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtjbp\" (UniqueName: \"kubernetes.io/projected/3ba3cb88-b1cd-414c-ad09-60050d538c91-kube-api-access-wtjbp\") pod \"nfs-server-provisioner-0\" (UID: \"3ba3cb88-b1cd-414c-ad09-60050d538c91\") " pod="default/nfs-server-provisioner-0" Jan 29 10:56:45.089937 containerd[1934]: time="2025-01-29T10:56:45.089410367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3ba3cb88-b1cd-414c-ad09-60050d538c91,Namespace:default,Attempt:0,}" Jan 29 10:56:45.336972 systemd-networkd[1835]: cali60e51b789ff: Link UP Jan 29 10:56:45.337567 systemd-networkd[1835]: cali60e51b789ff: Gained carrier Jan 29 10:56:45.342366 (udev-worker)[3932]: Network interface NamePolicy= disabled on kernel command line. Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.181 [INFO][3913] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.16.43-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 3ba3cb88-b1cd-414c-ad09-60050d538c91 1174 0 2025-01-29 10:56:44 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.16.43 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.43-k8s-nfs--server--provisioner--0-" Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.181 [INFO][3913] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.43-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.237 [INFO][3923] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" HandleID="k8s-pod-network.41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" Workload="172.31.16.43-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:56:45.362345 
containerd[1934]: 2025-01-29 10:56:45.258 [INFO][3923] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" HandleID="k8s-pod-network.41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" Workload="172.31.16.43-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030c120), Attrs:map[string]string{"namespace":"default", "node":"172.31.16.43", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 10:56:45.237793668 +0000 UTC"}, Hostname:"172.31.16.43", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.259 [INFO][3923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.259 [INFO][3923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.259 [INFO][3923] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.16.43' Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.268 [INFO][3923] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" host="172.31.16.43" Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.278 [INFO][3923] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.16.43" Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.289 [INFO][3923] ipam/ipam.go 489: Trying affinity for 192.168.90.128/26 host="172.31.16.43" Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.291 [INFO][3923] ipam/ipam.go 155: Attempting to load block cidr=192.168.90.128/26 host="172.31.16.43" Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.296 [INFO][3923] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.90.128/26 host="172.31.16.43" Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.296 [INFO][3923] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.128/26 handle="k8s-pod-network.41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" host="172.31.16.43" Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.299 [INFO][3923] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4 Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.308 [INFO][3923] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.90.128/26 handle="k8s-pod-network.41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" host="172.31.16.43" Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.326 [INFO][3923] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.90.131/26] block=192.168.90.128/26 handle="k8s-pod-network.41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" host="172.31.16.43" Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.326 [INFO][3923] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.90.131/26] handle="k8s-pod-network.41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" host="172.31.16.43" Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.326 [INFO][3923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
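In the IPAM exchange above, the plugin takes the host-wide IPAM lock, confirms this node's affinity to block 192.168.90.128/26, and claims the next free address, 192.168.90.131. A toy allocator over a /26 using net/netip shows the arithmetic; it is not Calico's implementation, and the "already used" set below is inferred from this log (.128 is the node's vxlan.calico tunnel address, .130 is the nginx pod's IP from the earlier WorkloadEndpoint, and .129 is assumed to belong to the earlier csi-node-driver-w2ckh pod).

```go
package main

import (
	"fmt"
	"net/netip"
)

// Toy version of "Auto-assign 1 ipv4 addrs" from a block: walk the block
// and return the first address not already handed out.
func assignFromBlock(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Masked().Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.90.128/26")

	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.90.128"): true, // vxlan.calico tunnel address
		netip.MustParseAddr("192.168.90.129"): true, // assumed: csi-node-driver-w2ckh
		netip.MustParseAddr("192.168.90.130"): true, // nginx-deployment-8587fbcb89-2r4js
	}

	if ip, ok := assignFromBlock(block, used); ok {
		fmt.Println("assigned:", ip) // assigned: 192.168.90.131, as in the log
	}
}
```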
Jan 29 10:56:45.362345 containerd[1934]: 2025-01-29 10:56:45.326 [INFO][3923] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.131/26] IPv6=[] ContainerID="41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" HandleID="k8s-pod-network.41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" Workload="172.31.16.43-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:56:45.367491 containerd[1934]: 2025-01-29 10:56:45.329 [INFO][3913] cni-plugin/k8s.go 386: Populated endpoint ContainerID="41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.43-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.43-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"3ba3cb88-b1cd-414c-ad09-60050d538c91", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 56, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.43", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.90.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:56:45.367491 containerd[1934]: 2025-01-29 10:56:45.330 [INFO][3913] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.90.131/32] ContainerID="41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.43-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:56:45.367491 containerd[1934]: 2025-01-29 10:56:45.330 [INFO][3913] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.43-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:56:45.367491 containerd[1934]: 2025-01-29 10:56:45.335 [INFO][3913] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.43-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:56:45.369277 containerd[1934]: 2025-01-29 10:56:45.335 [INFO][3913] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.43-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.43-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"3ba3cb88-b1cd-414c-ad09-60050d538c91", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 56, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.43", ContainerID:"41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.90.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"e2:f6:1f:92:0b:5f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:56:45.369277 containerd[1934]: 2025-01-29 10:56:45.353 [INFO][3913] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.16.43-k8s-nfs--server--provisioner--0-eth0" Jan 29 10:56:45.403414 containerd[1934]: time="2025-01-29T10:56:45.403255260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:56:45.403827 containerd[1934]: time="2025-01-29T10:56:45.403351284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:56:45.403827 containerd[1934]: time="2025-01-29T10:56:45.403580100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:56:45.403978 containerd[1934]: time="2025-01-29T10:56:45.403752864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:56:45.449525 systemd[1]: Started cri-containerd-41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4.scope - libcontainer container 41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4. 
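The WorkloadEndpointPort dump above prints each Port as a Go hex literal, which obscures the match with the named NFS services listed in decimal earlier in the same entry (nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662). A quick decode of those hex values confirms they are the same ports:

```go
package main

import "fmt"

func main() {
	// Hex Port values from the WorkloadEndpoint dump, decoded back to the
	// decimal ports named in the nfs-server-provisioner endpoint.
	ports := []struct {
		name string
		hex  uint16
	}{
		{"nfs", 0x801},       // 2049
		{"nlockmgr", 0x8023}, // 32803
		{"mountd", 0x4e50},   // 20048
		{"rquotad", 0x36b},   // 875
		{"rpcbind", 0x6f},    // 111
		{"statd", 0x296},     // 662
	}
	for _, p := range ports {
		fmt.Printf("%-10s %#06x = %d\n", p.name, p.hex, p.hex)
	}
}
```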
Jan 29 10:56:45.508797 containerd[1934]: time="2025-01-29T10:56:45.508351825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:3ba3cb88-b1cd-414c-ad09-60050d538c91,Namespace:default,Attempt:0,} returns sandbox id \"41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4\"" Jan 29 10:56:45.512031 containerd[1934]: time="2025-01-29T10:56:45.511635073Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 10:56:45.522606 kubelet[2390]: E0129 10:56:45.522549 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:46.524338 kubelet[2390]: E0129 10:56:46.524260 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:46.670168 update_engine[1919]: I20250129 10:56:46.669198 1919 update_attempter.cc:509] Updating boot flags... Jan 29 10:56:46.782725 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (4003) Jan 29 10:56:47.177885 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (4007) Jan 29 10:56:47.182378 systemd-networkd[1835]: cali60e51b789ff: Gained IPv6LL Jan 29 10:56:47.524537 kubelet[2390]: E0129 10:56:47.524407 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:48.481722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2157575845.mount: Deactivated successfully. Jan 29 10:56:48.526416 kubelet[2390]: E0129 10:56:48.526324 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:49.497906 kubelet[2390]: E0129 10:56:49.497836 2390 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:49.526945 kubelet[2390]: E0129 10:56:49.526895 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:50.060417 ntpd[1914]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 29 10:56:50.061551 ntpd[1914]: 29 Jan 10:56:50 ntpd[1914]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 29 10:56:50.528660 kubelet[2390]: E0129 10:56:50.527639 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:51.375291 containerd[1934]: time="2025-01-29T10:56:51.375231534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:51.377860 containerd[1934]: time="2025-01-29T10:56:51.377797878Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623" Jan 29 10:56:51.379377 containerd[1934]: time="2025-01-29T10:56:51.379300854Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:56:51.386916 containerd[1934]: time="2025-01-29T10:56:51.386834010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 29 10:56:51.389958 containerd[1934]: time="2025-01-29T10:56:51.388768638Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.877078017s" Jan 29 10:56:51.389958 containerd[1934]: time="2025-01-29T10:56:51.388825986Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 29 10:56:51.393238 containerd[1934]: time="2025-01-29T10:56:51.393183738Z" level=info msg="CreateContainer within sandbox \"41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 10:56:51.416622 containerd[1934]: time="2025-01-29T10:56:51.416563590Z" level=info msg="CreateContainer within sandbox \"41d33f59f2aa22b4bcd55b0e7c4b762db16002b21b3006e39e96a4bb2b36a3d4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"be2c38013157c6190d8d4ee3c44a4c0aed9f806aa786ad67741f6a38ff472c04\"" Jan 29 10:56:51.421175 containerd[1934]: time="2025-01-29T10:56:51.420798402Z" level=info msg="StartContainer for \"be2c38013157c6190d8d4ee3c44a4c0aed9f806aa786ad67741f6a38ff472c04\"" Jan 29 10:56:51.485501 systemd[1]: Started cri-containerd-be2c38013157c6190d8d4ee3c44a4c0aed9f806aa786ad67741f6a38ff472c04.scope - libcontainer container be2c38013157c6190d8d4ee3c44a4c0aed9f806aa786ad67741f6a38ff472c04. 
Jan 29 10:56:51.528101 kubelet[2390]: E0129 10:56:51.528058 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:51.533562 containerd[1934]: time="2025-01-29T10:56:51.533210359Z" level=info msg="StartContainer for \"be2c38013157c6190d8d4ee3c44a4c0aed9f806aa786ad67741f6a38ff472c04\" returns successfully" Jan 29 10:56:52.529529 kubelet[2390]: E0129 10:56:52.529462 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:53.529809 kubelet[2390]: E0129 10:56:53.529737 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:54.530516 kubelet[2390]: E0129 10:56:54.530446 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:55.531281 kubelet[2390]: E0129 10:56:55.531228 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:56.532485 kubelet[2390]: E0129 10:56:56.532410 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:57.532607 kubelet[2390]: E0129 10:56:57.532536 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:58.533171 kubelet[2390]: E0129 10:56:58.533095 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:56:59.534037 kubelet[2390]: E0129 10:56:59.533974 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:00.534586 kubelet[2390]: E0129 10:57:00.534523 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:01.535001 kubelet[2390]: E0129 10:57:01.534925 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:02.299660 kubelet[2390]: I0129 10:57:02.299570 2390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=12.419933975 podStartE2EDuration="18.299549884s" podCreationTimestamp="2025-01-29 10:56:44 +0000 UTC" firstStartedPulling="2025-01-29 10:56:45.510998701 +0000 UTC m=+37.046380538" lastFinishedPulling="2025-01-29 10:56:51.390614634 +0000 UTC m=+42.925996447" observedRunningTime="2025-01-29 10:56:52.14512785 +0000 UTC m=+43.680509675" watchObservedRunningTime="2025-01-29 10:57:02.299549884 +0000 UTC m=+53.834931709" Jan 29 10:57:02.535607 kubelet[2390]: E0129 10:57:02.535544 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:03.536031 kubelet[2390]: E0129 10:57:03.535967 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:04.536413 kubelet[2390]: E0129 10:57:04.536339 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:05.536541 kubelet[2390]: E0129 10:57:05.536475 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:06.537141 
kubelet[2390]: E0129 10:57:06.537084 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:07.538047 kubelet[2390]: E0129 10:57:07.537983 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:08.539086 kubelet[2390]: E0129 10:57:08.539025 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:09.497357 kubelet[2390]: E0129 10:57:09.497297 2390 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:09.533616 containerd[1934]: time="2025-01-29T10:57:09.533102268Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\"" Jan 29 10:57:09.533616 containerd[1934]: time="2025-01-29T10:57:09.533288244Z" level=info msg="TearDown network for sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" successfully" Jan 29 10:57:09.533616 containerd[1934]: time="2025-01-29T10:57:09.533310600Z" level=info msg="StopPodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" returns successfully" Jan 29 10:57:09.534605 containerd[1934]: time="2025-01-29T10:57:09.534480036Z" level=info msg="RemovePodSandbox for \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\"" Jan 29 10:57:09.534605 containerd[1934]: time="2025-01-29T10:57:09.534542844Z" level=info msg="Forcibly stopping sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\"" Jan 29 10:57:09.534827 containerd[1934]: time="2025-01-29T10:57:09.534671340Z" level=info msg="TearDown network for sandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" successfully" Jan 29 10:57:09.538929 containerd[1934]: time="2025-01-29T10:57:09.538857840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:57:09.539084 containerd[1934]: time="2025-01-29T10:57:09.538942548Z" level=info msg="RemovePodSandbox \"efb2df617d9951db7a35b047be06e52c57311df91663c3d1fedf1e083a0df4b4\" returns successfully" Jan 29 10:57:09.539259 kubelet[2390]: E0129 10:57:09.539223 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:09.540313 containerd[1934]: time="2025-01-29T10:57:09.539656944Z" level=info msg="StopPodSandbox for \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\"" Jan 29 10:57:09.540313 containerd[1934]: time="2025-01-29T10:57:09.539817696Z" level=info msg="TearDown network for sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" successfully" Jan 29 10:57:09.540313 containerd[1934]: time="2025-01-29T10:57:09.539840052Z" level=info msg="StopPodSandbox for \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" returns successfully" Jan 29 10:57:09.541247 containerd[1934]: time="2025-01-29T10:57:09.541200348Z" level=info msg="RemovePodSandbox for \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\"" Jan 29 10:57:09.541368 containerd[1934]: time="2025-01-29T10:57:09.541252380Z" level=info msg="Forcibly stopping sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\"" Jan 29 10:57:09.541421 containerd[1934]: time="2025-01-29T10:57:09.541384224Z" level=info msg="TearDown network for sandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" successfully" Jan 29 10:57:09.544689 containerd[1934]: time="2025-01-29T10:57:09.544594596Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:57:09.544826 containerd[1934]: time="2025-01-29T10:57:09.544743348Z" level=info msg="RemovePodSandbox \"3855fbd3ede87ddce067426a56aa9ecf6dd64b1b74e8c853063f625f5c2757cc\" returns successfully" Jan 29 10:57:09.545582 containerd[1934]: time="2025-01-29T10:57:09.545337048Z" level=info msg="StopPodSandbox for \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\"" Jan 29 10:57:09.545582 containerd[1934]: time="2025-01-29T10:57:09.545483268Z" level=info msg="TearDown network for sandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\" successfully" Jan 29 10:57:09.545582 containerd[1934]: time="2025-01-29T10:57:09.545504688Z" level=info msg="StopPodSandbox for \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\" returns successfully" Jan 29 10:57:09.547445 containerd[1934]: time="2025-01-29T10:57:09.546023916Z" level=info msg="RemovePodSandbox for \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\"" Jan 29 10:57:09.547445 containerd[1934]: time="2025-01-29T10:57:09.546067680Z" level=info msg="Forcibly stopping sandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\"" Jan 29 10:57:09.547445 containerd[1934]: time="2025-01-29T10:57:09.546211884Z" level=info msg="TearDown network for sandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\" successfully" Jan 29 10:57:09.549310 containerd[1934]: time="2025-01-29T10:57:09.549234348Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:57:09.549518 containerd[1934]: time="2025-01-29T10:57:09.549309528Z" level=info msg="RemovePodSandbox \"20cef2967d693218d84955ab3d78d3ff3ab70b2c4a03c6c430f64db699e94f24\" returns successfully" Jan 29 10:57:09.549982 containerd[1934]: time="2025-01-29T10:57:09.549843948Z" level=info msg="StopPodSandbox for \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\"" Jan 29 10:57:09.550448 containerd[1934]: time="2025-01-29T10:57:09.550330968Z" level=info msg="TearDown network for sandbox \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\" successfully" Jan 29 10:57:09.550448 containerd[1934]: time="2025-01-29T10:57:09.550382640Z" level=info msg="StopPodSandbox for \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\" returns successfully" Jan 29 10:57:09.552010 containerd[1934]: time="2025-01-29T10:57:09.551945772Z" level=info msg="RemovePodSandbox for \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\"" Jan 29 10:57:09.552137 containerd[1934]: time="2025-01-29T10:57:09.552015984Z" level=info msg="Forcibly stopping sandbox \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\"" Jan 29 10:57:09.552238 containerd[1934]: time="2025-01-29T10:57:09.552147096Z" level=info msg="TearDown network for sandbox \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\" successfully" Jan 29 10:57:09.556063 containerd[1934]: time="2025-01-29T10:57:09.555824976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:57:09.556063 containerd[1934]: time="2025-01-29T10:57:09.555906204Z" level=info msg="RemovePodSandbox \"961d03560aa6769ebbdadaa458267b88709205df6af8edaab99fd298f3e36e15\" returns successfully" Jan 29 10:57:09.559773 containerd[1934]: time="2025-01-29T10:57:09.559387980Z" level=info msg="StopPodSandbox for \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\"" Jan 29 10:57:09.559773 containerd[1934]: time="2025-01-29T10:57:09.559560936Z" level=info msg="TearDown network for sandbox \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\" successfully" Jan 29 10:57:09.559773 containerd[1934]: time="2025-01-29T10:57:09.559582320Z" level=info msg="StopPodSandbox for \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\" returns successfully" Jan 29 10:57:09.563921 containerd[1934]: time="2025-01-29T10:57:09.561671316Z" level=info msg="RemovePodSandbox for \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\"" Jan 29 10:57:09.563921 containerd[1934]: time="2025-01-29T10:57:09.561721164Z" level=info msg="Forcibly stopping sandbox \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\"" Jan 29 10:57:09.563921 containerd[1934]: time="2025-01-29T10:57:09.561857496Z" level=info msg="TearDown network for sandbox \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\" successfully" Jan 29 10:57:09.568847 containerd[1934]: time="2025-01-29T10:57:09.568740960Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:57:09.569116 containerd[1934]: time="2025-01-29T10:57:09.569069472Z" level=info msg="RemovePodSandbox \"dd68087c5b59db3862f16b61617e47c98ef15ed96200b5787b39c97a868b5afa\" returns successfully" Jan 29 10:57:09.570393 containerd[1934]: time="2025-01-29T10:57:09.570333072Z" level=info msg="StopPodSandbox for \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\"" Jan 29 10:57:09.570759 containerd[1934]: time="2025-01-29T10:57:09.570729972Z" level=info msg="TearDown network for sandbox \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\" successfully" Jan 29 10:57:09.570907 containerd[1934]: time="2025-01-29T10:57:09.570879672Z" level=info msg="StopPodSandbox for \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\" returns successfully" Jan 29 10:57:09.571984 containerd[1934]: time="2025-01-29T10:57:09.571948344Z" level=info msg="RemovePodSandbox for \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\"" Jan 29 10:57:09.572224 containerd[1934]: time="2025-01-29T10:57:09.572197620Z" level=info msg="Forcibly stopping sandbox \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\"" Jan 29 10:57:09.572474 containerd[1934]: time="2025-01-29T10:57:09.572448384Z" level=info msg="TearDown network for sandbox \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\" successfully" Jan 29 10:57:09.575861 containerd[1934]: time="2025-01-29T10:57:09.575801172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:57:09.576092 containerd[1934]: time="2025-01-29T10:57:09.576060852Z" level=info msg="RemovePodSandbox \"d1392a62c55de474e72518669e18af127482d81a5af07085774c15ff40ba64e9\" returns successfully" Jan 29 10:57:09.576805 containerd[1934]: time="2025-01-29T10:57:09.576762648Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\"" Jan 29 10:57:09.576951 containerd[1934]: time="2025-01-29T10:57:09.576919008Z" level=info msg="TearDown network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" successfully" Jan 29 10:57:09.577031 containerd[1934]: time="2025-01-29T10:57:09.576950292Z" level=info msg="StopPodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" returns successfully" Jan 29 10:57:09.577568 containerd[1934]: time="2025-01-29T10:57:09.577529172Z" level=info msg="RemovePodSandbox for \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\"" Jan 29 10:57:09.577663 containerd[1934]: time="2025-01-29T10:57:09.577576956Z" level=info msg="Forcibly stopping sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\"" Jan 29 10:57:09.577715 containerd[1934]: time="2025-01-29T10:57:09.577694496Z" level=info msg="TearDown network for sandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" successfully" Jan 29 10:57:09.581433 containerd[1934]: time="2025-01-29T10:57:09.581371560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:57:09.581593 containerd[1934]: time="2025-01-29T10:57:09.581464884Z" level=info msg="RemovePodSandbox \"a388bb63e9771629376a47553bc905cbb78e6ae6f71b2d2fb13359d54acdef43\" returns successfully" Jan 29 10:57:09.582254 containerd[1934]: time="2025-01-29T10:57:09.582003444Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\"" Jan 29 10:57:09.582254 containerd[1934]: time="2025-01-29T10:57:09.582179220Z" level=info msg="TearDown network for sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" successfully" Jan 29 10:57:09.582254 containerd[1934]: time="2025-01-29T10:57:09.582202680Z" level=info msg="StopPodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" returns successfully" Jan 29 10:57:09.583423 containerd[1934]: time="2025-01-29T10:57:09.582717648Z" level=info msg="RemovePodSandbox for \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\"" Jan 29 10:57:09.583423 containerd[1934]: time="2025-01-29T10:57:09.582759720Z" level=info msg="Forcibly stopping sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\"" Jan 29 10:57:09.583423 containerd[1934]: time="2025-01-29T10:57:09.582874128Z" level=info msg="TearDown network for sandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" successfully" Jan 29 10:57:09.585774 containerd[1934]: time="2025-01-29T10:57:09.585711780Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:57:09.585875 containerd[1934]: time="2025-01-29T10:57:09.585783072Z" level=info msg="RemovePodSandbox \"4f4924e956245b98c9248ff0804c58c97ab01bfeb75954ee9c1884d214abfd56\" returns successfully" Jan 29 10:57:09.586692 containerd[1934]: time="2025-01-29T10:57:09.586418821Z" level=info msg="StopPodSandbox for \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\"" Jan 29 10:57:09.586692 containerd[1934]: time="2025-01-29T10:57:09.586567669Z" level=info msg="TearDown network for sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" successfully" Jan 29 10:57:09.586692 containerd[1934]: time="2025-01-29T10:57:09.586588969Z" level=info msg="StopPodSandbox for \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" returns successfully" Jan 29 10:57:09.587521 containerd[1934]: time="2025-01-29T10:57:09.587099269Z" level=info msg="RemovePodSandbox for \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\"" Jan 29 10:57:09.587521 containerd[1934]: time="2025-01-29T10:57:09.587139805Z" level=info msg="Forcibly stopping sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\"" Jan 29 10:57:09.587521 containerd[1934]: time="2025-01-29T10:57:09.587285989Z" level=info msg="TearDown network for sandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" successfully" Jan 29 10:57:09.590272 containerd[1934]: time="2025-01-29T10:57:09.590204113Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:57:09.590599 containerd[1934]: time="2025-01-29T10:57:09.590278189Z" level=info msg="RemovePodSandbox \"22048390e161a9cc644d39970af3b97522aafa7d7c6b9e20f4bb1e20d4182eef\" returns successfully" Jan 29 10:57:09.591359 containerd[1934]: time="2025-01-29T10:57:09.591305221Z" level=info msg="StopPodSandbox for \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\"" Jan 29 10:57:09.591521 containerd[1934]: time="2025-01-29T10:57:09.591485989Z" level=info msg="TearDown network for sandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\" successfully" Jan 29 10:57:09.591674 containerd[1934]: time="2025-01-29T10:57:09.591518437Z" level=info msg="StopPodSandbox for \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\" returns successfully" Jan 29 10:57:09.592393 containerd[1934]: time="2025-01-29T10:57:09.592096393Z" level=info msg="RemovePodSandbox for \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\"" Jan 29 10:57:09.592393 containerd[1934]: time="2025-01-29T10:57:09.592138861Z" level=info msg="Forcibly stopping sandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\"" Jan 29 10:57:09.592393 containerd[1934]: time="2025-01-29T10:57:09.592282213Z" level=info msg="TearDown network for sandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\" successfully" Jan 29 10:57:09.595315 containerd[1934]: time="2025-01-29T10:57:09.595254313Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:57:09.595469 containerd[1934]: time="2025-01-29T10:57:09.595331137Z" level=info msg="RemovePodSandbox \"e62c2b912ce0eb3f89b22ec6d4a24bed55a2c55ac86ff753d353185942b62791\" returns successfully" Jan 29 10:57:09.596358 containerd[1934]: time="2025-01-29T10:57:09.596296645Z" level=info msg="StopPodSandbox for \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\"" Jan 29 10:57:09.596758 containerd[1934]: time="2025-01-29T10:57:09.596588017Z" level=info msg="TearDown network for sandbox \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\" successfully" Jan 29 10:57:09.596758 containerd[1934]: time="2025-01-29T10:57:09.596636245Z" level=info msg="StopPodSandbox for \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\" returns successfully" Jan 29 10:57:09.597303 containerd[1934]: time="2025-01-29T10:57:09.597252301Z" level=info msg="RemovePodSandbox for \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\"" Jan 29 10:57:09.597407 containerd[1934]: time="2025-01-29T10:57:09.597300841Z" level=info msg="Forcibly stopping sandbox \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\"" Jan 29 10:57:09.597577 containerd[1934]: time="2025-01-29T10:57:09.597415129Z" level=info msg="TearDown network for sandbox \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\" successfully" Jan 29 10:57:09.605146 containerd[1934]: time="2025-01-29T10:57:09.604671061Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:57:09.605146 containerd[1934]: time="2025-01-29T10:57:09.604796677Z" level=info msg="RemovePodSandbox \"0d7ba341196dc0a2614d355c4b96e0024d1a59150e840c36a4f112b71e63ef4c\" returns successfully" Jan 29 10:57:09.605838 containerd[1934]: time="2025-01-29T10:57:09.605781457Z" level=info msg="StopPodSandbox for \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\"" Jan 29 10:57:09.606088 containerd[1934]: time="2025-01-29T10:57:09.606044365Z" level=info msg="TearDown network for sandbox \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\" successfully" Jan 29 10:57:09.606183 containerd[1934]: time="2025-01-29T10:57:09.606081157Z" level=info msg="StopPodSandbox for \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\" returns successfully" Jan 29 10:57:09.607895 containerd[1934]: time="2025-01-29T10:57:09.607837789Z" level=info msg="RemovePodSandbox for \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\"" Jan 29 10:57:09.608282 containerd[1934]: time="2025-01-29T10:57:09.608140333Z" level=info msg="Forcibly stopping sandbox \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\"" Jan 29 10:57:09.608580 containerd[1934]: time="2025-01-29T10:57:09.608548309Z" level=info msg="TearDown network for sandbox \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\" successfully" Jan 29 10:57:09.613405 containerd[1934]: time="2025-01-29T10:57:09.613302433Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:57:09.613405 containerd[1934]: time="2025-01-29T10:57:09.613383433Z" level=info msg="RemovePodSandbox \"0fad215df5ac5012faa646f9aa3cf30a206d17a3c45c94ca88fc45f3da3331a0\" returns successfully" Jan 29 10:57:09.614366 containerd[1934]: time="2025-01-29T10:57:09.614285209Z" level=info msg="StopPodSandbox for \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\"" Jan 29 10:57:09.614743 containerd[1934]: time="2025-01-29T10:57:09.614610565Z" level=info msg="TearDown network for sandbox \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\" successfully" Jan 29 10:57:09.614743 containerd[1934]: time="2025-01-29T10:57:09.614661337Z" level=info msg="StopPodSandbox for \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\" returns successfully" Jan 29 10:57:09.616222 containerd[1934]: time="2025-01-29T10:57:09.615525481Z" level=info msg="RemovePodSandbox for \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\"" Jan 29 10:57:09.616222 containerd[1934]: time="2025-01-29T10:57:09.615584833Z" level=info msg="Forcibly stopping sandbox \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\"" Jan 29 10:57:09.616222 containerd[1934]: time="2025-01-29T10:57:09.615703621Z" level=info msg="TearDown network for sandbox \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\" successfully" Jan 29 10:57:09.619554 containerd[1934]: time="2025-01-29T10:57:09.618668305Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:57:09.619554 containerd[1934]: time="2025-01-29T10:57:09.618745057Z" level=info msg="RemovePodSandbox \"794a81b8a1f08f2bfb6f651bf51a799ffecfd3c7820005a81321165790013c8b\" returns successfully" Jan 29 10:57:10.540000 kubelet[2390]: E0129 10:57:10.539927 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:11.540678 kubelet[2390]: E0129 10:57:11.540610 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:12.541397 kubelet[2390]: E0129 10:57:12.541318 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:13.541872 kubelet[2390]: E0129 10:57:13.541811 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:14.542634 kubelet[2390]: E0129 10:57:14.542574 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:15.543485 kubelet[2390]: E0129 10:57:15.543419 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:16.544258 kubelet[2390]: E0129 10:57:16.544191 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:16.818259 systemd[1]: Created slice kubepods-besteffort-pod1d1dc2df_25ed_4a70_b91c_95b1354216e1.slice - libcontainer container kubepods-besteffort-pod1d1dc2df_25ed_4a70_b91c_95b1354216e1.slice. 
Jan 29 10:57:16.924826 kubelet[2390]: I0129 10:57:16.924481 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbl5\" (UniqueName: \"kubernetes.io/projected/1d1dc2df-25ed-4a70-b91c-95b1354216e1-kube-api-access-brbl5\") pod \"test-pod-1\" (UID: \"1d1dc2df-25ed-4a70-b91c-95b1354216e1\") " pod="default/test-pod-1" Jan 29 10:57:16.924826 kubelet[2390]: I0129 10:57:16.924549 2390 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-42c28adf-50dd-4a5b-a6be-09b81ee295fd\" (UniqueName: \"kubernetes.io/nfs/1d1dc2df-25ed-4a70-b91c-95b1354216e1-pvc-42c28adf-50dd-4a5b-a6be-09b81ee295fd\") pod \"test-pod-1\" (UID: \"1d1dc2df-25ed-4a70-b91c-95b1354216e1\") " pod="default/test-pod-1" Jan 29 10:57:17.061286 kernel: FS-Cache: Loaded Jan 29 10:57:17.103708 kernel: RPC: Registered named UNIX socket transport module. Jan 29 10:57:17.103849 kernel: RPC: Registered udp transport module. Jan 29 10:57:17.103897 kernel: RPC: Registered tcp transport module. Jan 29 10:57:17.104617 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 10:57:17.105498 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 29 10:57:17.445700 kernel: NFS: Registering the id_resolver key type Jan 29 10:57:17.445844 kernel: Key type id_resolver registered Jan 29 10:57:17.445888 kernel: Key type id_legacy registered Jan 29 10:57:17.483825 nfsidmap[4334]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 29 10:57:17.489849 nfsidmap[4335]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jan 29 10:57:17.544865 kubelet[2390]: E0129 10:57:17.544803 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:17.725045 containerd[1934]: time="2025-01-29T10:57:17.724937613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1d1dc2df-25ed-4a70-b91c-95b1354216e1,Namespace:default,Attempt:0,}" Jan 29 10:57:17.909573 systemd-networkd[1835]: cali5ec59c6bf6e: Link UP Jan 29 10:57:17.909961 systemd-networkd[1835]: cali5ec59c6bf6e: Gained carrier Jan 29 10:57:17.911338 (udev-worker)[4331]: Network interface NamePolicy= disabled on kernel command line. 
Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.801 [INFO][4337] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.16.43-k8s-test--pod--1-eth0 default 1d1dc2df-25ed-4a70-b91c-95b1354216e1 1283 0 2025-01-29 10:56:45 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.16.43 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.43-k8s-test--pod--1-" Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.802 [INFO][4337] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.43-k8s-test--pod--1-eth0" Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.846 [INFO][4348] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" HandleID="k8s-pod-network.abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" Workload="172.31.16.43-k8s-test--pod--1-eth0" Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.862 [INFO][4348] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" HandleID="k8s-pod-network.abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" Workload="172.31.16.43-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000220b70), Attrs:map[string]string{"namespace":"default", "node":"172.31.16.43", "pod":"test-pod-1", "timestamp":"2025-01-29 10:57:17.846336622 +0000 UTC"}, Hostname:"172.31.16.43", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.862 [INFO][4348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.862 [INFO][4348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.862 [INFO][4348] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.16.43' Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.865 [INFO][4348] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" host="172.31.16.43" Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.871 [INFO][4348] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.16.43" Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.877 [INFO][4348] ipam/ipam.go 489: Trying affinity for 192.168.90.128/26 host="172.31.16.43" Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.880 [INFO][4348] ipam/ipam.go 155: Attempting to load block cidr=192.168.90.128/26 host="172.31.16.43" Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.883 [INFO][4348] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.90.128/26 host="172.31.16.43" Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.883 [INFO][4348] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.90.128/26 handle="k8s-pod-network.abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" host="172.31.16.43" Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.885 [INFO][4348] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07 Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.894 [INFO][4348] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.90.128/26 handle="k8s-pod-network.abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" host="172.31.16.43" Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.903 [INFO][4348] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.90.132/26] block=192.168.90.128/26 handle="k8s-pod-network.abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" host="172.31.16.43" Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.903 [INFO][4348] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.90.132/26] handle="k8s-pod-network.abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" host="172.31.16.43" Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.903 [INFO][4348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.903 [INFO][4348] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.132/26] IPv6=[] ContainerID="abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" HandleID="k8s-pod-network.abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" Workload="172.31.16.43-k8s-test--pod--1-eth0" Jan 29 10:57:17.932227 containerd[1934]: 2025-01-29 10:57:17.905 [INFO][4337] cni-plugin/k8s.go 386: Populated endpoint ContainerID="abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.43-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.43-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1d1dc2df-25ed-4a70-b91c-95b1354216e1", ResourceVersion:"1283", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 56, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.43", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:57:17.936034 containerd[1934]: 2025-01-29 10:57:17.906 [INFO][4337] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.90.132/32] ContainerID="abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.43-k8s-test--pod--1-eth0" Jan 29 10:57:17.936034 containerd[1934]: 2025-01-29 10:57:17.906 [INFO][4337] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.43-k8s-test--pod--1-eth0" Jan 29 10:57:17.936034 containerd[1934]: 2025-01-29 10:57:17.911 [INFO][4337] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.43-k8s-test--pod--1-eth0" Jan 29 10:57:17.936034 containerd[1934]: 2025-01-29 10:57:17.912 [INFO][4337] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.43-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.43-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1d1dc2df-25ed-4a70-b91c-95b1354216e1", ResourceVersion:"1283", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 56, 45, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.16.43", ContainerID:"abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.90.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"ca:9a:a3:e4:76:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:57:17.936034 containerd[1934]: 2025-01-29 10:57:17.924 [INFO][4337] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.16.43-k8s-test--pod--1-eth0" Jan 29 10:57:17.971668 containerd[1934]: time="2025-01-29T10:57:17.971487886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:57:17.972173 containerd[1934]: time="2025-01-29T10:57:17.971630290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:57:17.972173 containerd[1934]: time="2025-01-29T10:57:17.971668102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:57:17.972173 containerd[1934]: time="2025-01-29T10:57:17.971823358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:57:18.006480 systemd[1]: Started cri-containerd-abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07.scope - libcontainer container abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07. 
Jan 29 10:57:18.072853 containerd[1934]: time="2025-01-29T10:57:18.072736699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1d1dc2df-25ed-4a70-b91c-95b1354216e1,Namespace:default,Attempt:0,} returns sandbox id \"abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07\"" Jan 29 10:57:18.076243 containerd[1934]: time="2025-01-29T10:57:18.076057135Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 10:57:18.375771 containerd[1934]: time="2025-01-29T10:57:18.375703892Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:57:18.376941 containerd[1934]: time="2025-01-29T10:57:18.376827800Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 10:57:18.383107 containerd[1934]: time="2025-01-29T10:57:18.383039516Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 306.927337ms" Jan 29 10:57:18.383107 containerd[1934]: time="2025-01-29T10:57:18.383099528Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 29 10:57:18.387241 containerd[1934]: time="2025-01-29T10:57:18.387058076Z" level=info msg="CreateContainer within sandbox \"abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 10:57:18.411219 containerd[1934]: time="2025-01-29T10:57:18.410981840Z" level=info msg="CreateContainer within sandbox \"abb4389a4e477f560091f33a22b50aa05fec1d665de636856611742e0580bc07\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3a43069bc6b8bcbc5254820ca3fdc7e31bb7c81fb79b4299fb78851cbed51224\"" Jan 29 10:57:18.412069 containerd[1934]: time="2025-01-29T10:57:18.411913292Z" level=info msg="StartContainer for \"3a43069bc6b8bcbc5254820ca3fdc7e31bb7c81fb79b4299fb78851cbed51224\"" Jan 29 10:57:18.468447 systemd[1]: Started cri-containerd-3a43069bc6b8bcbc5254820ca3fdc7e31bb7c81fb79b4299fb78851cbed51224.scope - libcontainer container 3a43069bc6b8bcbc5254820ca3fdc7e31bb7c81fb79b4299fb78851cbed51224. 
Jan 29 10:57:18.523689 containerd[1934]: time="2025-01-29T10:57:18.523611945Z" level=info msg="StartContainer for \"3a43069bc6b8bcbc5254820ca3fdc7e31bb7c81fb79b4299fb78851cbed51224\" returns successfully" Jan 29 10:57:18.545052 kubelet[2390]: E0129 10:57:18.545003 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:19.371691 systemd-networkd[1835]: cali5ec59c6bf6e: Gained IPv6LL Jan 29 10:57:19.546178 kubelet[2390]: E0129 10:57:19.546085 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:20.546520 kubelet[2390]: E0129 10:57:20.546457 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:21.547595 kubelet[2390]: E0129 10:57:21.547534 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:22.059513 ntpd[1914]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 29 10:57:22.060128 ntpd[1914]: 29 Jan 10:57:22 ntpd[1914]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Jan 29 10:57:22.548578 kubelet[2390]: E0129 10:57:22.548428 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:23.549372 kubelet[2390]: E0129 10:57:23.549295 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:24.550317 kubelet[2390]: E0129 10:57:24.550257 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:25.551487 kubelet[2390]: E0129 10:57:25.551424 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:26.552001 kubelet[2390]: E0129 10:57:26.551943 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:27.553260 kubelet[2390]: E0129 10:57:27.553203 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:28.553958 kubelet[2390]: E0129 10:57:28.553892 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:29.497465 kubelet[2390]: E0129 10:57:29.497406 2390 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:29.554962 kubelet[2390]: E0129 10:57:29.554909 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:30.555581 kubelet[2390]: E0129 10:57:30.555503 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:31.556588 kubelet[2390]: E0129 10:57:31.556528 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:32.557701 kubelet[2390]: E0129 10:57:32.557627 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:33.558763 kubelet[2390]: E0129 10:57:33.558701 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:34.559742 kubelet[2390]: E0129 10:57:34.559681 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:35.560843 kubelet[2390]: E0129 10:57:35.560780 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:36.561316 kubelet[2390]: E0129 10:57:36.561262 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:37.562348 kubelet[2390]: E0129 10:57:37.562280 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:38.563300 kubelet[2390]: E0129 10:57:38.563240 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:39.563745 kubelet[2390]: E0129 10:57:39.563691 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:40.564753 kubelet[2390]: E0129 10:57:40.564696 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:41.565324 kubelet[2390]: E0129 10:57:41.565259 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:42.566088 kubelet[2390]: E0129 10:57:42.566022 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:42.595746 kubelet[2390]: E0129 10:57:42.595651 2390 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.43?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 10:57:43.566751 kubelet[2390]: E0129 10:57:43.566678 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:44.567581 kubelet[2390]: E0129 10:57:44.567518 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:45.567720 kubelet[2390]: E0129 10:57:45.567647 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:46.568192 kubelet[2390]: E0129 10:57:46.568097 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:47.568600 kubelet[2390]: E0129 10:57:47.568539 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:48.569208 kubelet[2390]: E0129 10:57:48.569121 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:49.497394 kubelet[2390]: E0129 10:57:49.497337 2390 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:49.569371 kubelet[2390]: E0129 10:57:49.569322 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:50.570008 kubelet[2390]: E0129 10:57:50.569933 2390 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:51.570637 kubelet[2390]: E0129 10:57:51.570553 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:52.571383 kubelet[2390]: E0129 10:57:52.571320 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:52.597216 kubelet[2390]: E0129 10:57:52.596801 2390 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.43?timeout=10s\": context deadline exceeded" Jan 29 10:57:53.572063 kubelet[2390]: E0129 10:57:53.572000 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:54.572771 kubelet[2390]: E0129 10:57:54.572710 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:55.573584 kubelet[2390]: E0129 10:57:55.573514 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:56.574427 kubelet[2390]: E0129 10:57:56.574356 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:57.574984 kubelet[2390]: E0129 10:57:57.574919 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:58.575438 kubelet[2390]: E0129 10:57:58.575367 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:57:59.575566 kubelet[2390]: E0129 10:57:59.575506 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:00.576150 kubelet[2390]: E0129 10:58:00.576091 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:01.576503 kubelet[2390]: E0129 10:58:01.576436 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:02.577579 kubelet[2390]: E0129 10:58:02.577509 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:02.597907 kubelet[2390]: E0129 10:58:02.597837 2390 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.43?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 29 10:58:03.578395 kubelet[2390]: E0129 10:58:03.578336 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:04.579559 kubelet[2390]: E0129 10:58:04.579486 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:05.054758 kubelet[2390]: E0129 10:58:05.053342 2390 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.43?timeout=10s\": unexpected EOF" Jan 29 10:58:05.057651 kubelet[2390]: E0129 10:58:05.057218 2390 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://172.31.19.42:6443/api/v1/namespaces/calico-system/events\": unexpected EOF" event=< Jan 29 10:58:05.057651 kubelet[2390]: &Event{ObjectMeta:{calico-node-k8rdl.181f24a75354b1c2 calico-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-node-k8rdl,UID:f81dd188-31e4-4e1d-b044-3db057c3fd01,APIVersion:v1,ResourceVersion:863,FieldPath:spec.containers{calico-node},},Reason:Unhealthy,Message:Readiness probe failed: 2025-01-29 10:58:02.258 [INFO][337] node/health.go 202: Number of node(s) with BGP peering established = 0 Jan 29 10:58:05.057651 kubelet[2390]: calico/node is not ready: BIRD is not ready: BGP not established with 172.31.19.42 Jan 29 10:58:05.057651 kubelet[2390]: ,Source:EventSource{Component:kubelet,Host:172.31.16.43,},FirstTimestamp:2025-01-29 10:58:02.264498626 +0000 UTC m=+113.799880451,LastTimestamp:2025-01-29 10:58:02.264498626 +0000 UTC m=+113.799880451,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.16.43,} Jan 29 10:58:05.057651 kubelet[2390]: > Jan 29 10:58:05.068952 kubelet[2390]: E0129 10:58:05.068224 2390 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.43?timeout=10s\": read tcp 172.31.16.43:58014->172.31.19.42:6443: read: connection reset by peer" Jan 29 10:58:05.068952 kubelet[2390]: I0129 10:58:05.068284 2390 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 29 10:58:05.069552 kubelet[2390]: E0129 10:58:05.069383 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.43?timeout=10s\": dial tcp 172.31.19.42:6443: connect: connection refused" interval="200ms" Jan 29 10:58:05.270913 kubelet[2390]: E0129 10:58:05.270845 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.43?timeout=10s\": dial tcp 172.31.19.42:6443: connect: connection refused" interval="400ms" Jan 29 10:58:05.580722 kubelet[2390]: E0129 10:58:05.580654 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:05.671951 kubelet[2390]: E0129 10:58:05.671875 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.43?timeout=10s\": dial tcp 172.31.19.42:6443: connect: connection refused" interval="800ms" Jan 29 10:58:06.581134 kubelet[2390]: E0129 10:58:06.581069 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:07.581545 kubelet[2390]: E0129 10:58:07.581468 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:08.582683 kubelet[2390]: E0129 10:58:08.582619 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:09.497397 kubelet[2390]: E0129 10:58:09.497332 2390 file.go:104] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 29 10:58:09.583838 kubelet[2390]: E0129 10:58:09.583773 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:10.584221 kubelet[2390]: E0129 10:58:10.584143 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:11.584819 kubelet[2390]: E0129 10:58:11.584759 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:12.585761 kubelet[2390]: E0129 10:58:12.585705 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:13.586221 kubelet[2390]: E0129 10:58:13.586144 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:14.587378 kubelet[2390]: E0129 10:58:14.587319 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:15.588298 kubelet[2390]: E0129 10:58:15.588238 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:16.473221 kubelet[2390]: E0129 10:58:16.473112 2390 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.16.43?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Jan 29 10:58:16.589321 kubelet[2390]: E0129 10:58:16.589246 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:17.589639 kubelet[2390]: E0129 10:58:17.589579 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:18.590574 kubelet[2390]: E0129 10:58:18.590501 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:19.591198 kubelet[2390]: E0129 10:58:19.591110 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:20.591851 kubelet[2390]: E0129 10:58:20.591785 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:21.592573 kubelet[2390]: E0129 10:58:21.592508 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:22.593318 kubelet[2390]: E0129 10:58:22.593257 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:23.593852 kubelet[2390]: E0129 10:58:23.593785 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:24.593974 kubelet[2390]: E0129 10:58:24.593910 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 10:58:25.594138 kubelet[2390]: E0129 10:58:25.594061 2390 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"