Jan 13 20:17:25.906266 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:17:25.906287 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:17:25.906297 kernel: KASLR enabled
Jan 13 20:17:25.906302 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:17:25.906308 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 
Jan 13 20:17:25.906313 kernel: random: crng init done
Jan 13 20:17:25.906320 kernel: secureboot: Secure boot disabled
Jan 13 20:17:25.906337 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:17:25.906344 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 13 20:17:25.906352 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS  BXPC     00000001      01000013)
Jan 13 20:17:25.906358 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:17:25.906364 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:17:25.906369 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:17:25.906375 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:17:25.906383 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:17:25.906390 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:17:25.906396 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:17:25.906402 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:17:25.906409 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:17:25.906415 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 20:17:25.906421 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:17:25.906427 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:17:25.906434 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jan 13 20:17:25.906440 kernel: Zone ranges:
Jan 13 20:17:25.906446 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:17:25.906453 kernel:   DMA32    empty
Jan 13 20:17:25.906459 kernel:   Normal   empty
Jan 13 20:17:25.906465 kernel: Movable zone start for each node
Jan 13 20:17:25.906471 kernel: Early memory node ranges
Jan 13 20:17:25.906477 kernel:   node   0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 20:17:25.906483 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 20:17:25.906489 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 20:17:25.906496 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 20:17:25.906502 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 20:17:25.906508 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 20:17:25.906514 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 20:17:25.906520 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:17:25.906527 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 20:17:25.906534 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:17:25.906540 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:17:25.906549 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:17:25.906555 kernel: psci: Trusted OS migration not required
Jan 13 20:17:25.906562 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:17:25.906570 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:17:25.906576 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:17:25.906583 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:17:25.906589 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
Jan 13 20:17:25.906596 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:17:25.906602 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:17:25.906609 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:17:25.906616 kernel: CPU features: detected: Spectre-v4
Jan 13 20:17:25.906622 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:17:25.906629 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:17:25.906636 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:17:25.906643 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:17:25.906649 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:17:25.906656 kernel: alternatives: applying boot alternatives
Jan 13 20:17:25.906664 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:17:25.906670 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:17:25.906677 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:17:25.906684 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:17:25.906690 kernel: Fallback order for Node 0: 0 
Jan 13 20:17:25.906697 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Jan 13 20:17:25.906703 kernel: Policy zone: DMA
Jan 13 20:17:25.906711 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:17:25.906717 kernel: software IO TLB: area num 4.
Jan 13 20:17:25.906724 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 20:17:25.906731 kernel: Memory: 2386320K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185968K reserved, 0K cma-reserved)
Jan 13 20:17:25.906738 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:17:25.906744 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:17:25.906752 kernel: rcu:         RCU event tracing is enabled.
Jan 13 20:17:25.906759 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:17:25.906765 kernel:         Trampoline variant of Tasks RCU enabled.
Jan 13 20:17:25.906772 kernel:         Tracing variant of Tasks RCU enabled.
Jan 13 20:17:25.906779 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:17:25.906785 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:17:25.906793 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:17:25.906800 kernel: GICv3: 256 SPIs implemented
Jan 13 20:17:25.906806 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:17:25.906813 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:17:25.906819 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:17:25.906826 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:17:25.906832 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:17:25.906839 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:17:25.906846 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:17:25.906853 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 20:17:25.906859 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 20:17:25.906867 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:17:25.906874 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:17:25.906881 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:17:25.906887 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:17:25.906894 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:17:25.906901 kernel: arm-pv: using stolen time PV
Jan 13 20:17:25.906908 kernel: Console: colour dummy device 80x25
Jan 13 20:17:25.906914 kernel: ACPI: Core revision 20230628
Jan 13 20:17:25.906921 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:17:25.906928 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:17:25.906936 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:17:25.906943 kernel: landlock: Up and running.
Jan 13 20:17:25.906949 kernel: SELinux:  Initializing.
Jan 13 20:17:25.906956 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:17:25.906963 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:17:25.906970 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:17:25.906977 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:17:25.906984 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:17:25.906991 kernel: rcu:         Max phase no-delay instances is 400.
Jan 13 20:17:25.906998 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:17:25.907005 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:17:25.907012 kernel: Remapping and enabling EFI services.
Jan 13 20:17:25.907018 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:17:25.907025 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:17:25.907032 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:17:25.907039 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 20:17:25.907046 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:17:25.907052 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:17:25.907059 kernel: Detected PIPT I-cache on CPU2
Jan 13 20:17:25.907067 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 20:17:25.907074 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 20:17:25.907085 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:17:25.907093 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 20:17:25.907100 kernel: Detected PIPT I-cache on CPU3
Jan 13 20:17:25.907108 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 20:17:25.907115 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 20:17:25.907122 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:17:25.907129 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 20:17:25.907137 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:17:25.907144 kernel: SMP: Total of 4 processors activated.
Jan 13 20:17:25.907151 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:17:25.907158 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:17:25.907165 kernel: CPU features: detected: Common not Private translations
Jan 13 20:17:25.907172 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:17:25.907179 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:17:25.907187 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:17:25.907195 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:17:25.907202 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:17:25.907209 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:17:25.907216 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:17:25.907229 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:17:25.907238 kernel: alternatives: applying system-wide alternatives
Jan 13 20:17:25.907245 kernel: devtmpfs: initialized
Jan 13 20:17:25.907252 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:17:25.907259 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:17:25.907269 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:17:25.907276 kernel: SMBIOS 3.0.0 present.
Jan 13 20:17:25.907283 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 13 20:17:25.907291 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:17:25.907298 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:17:25.907305 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:17:25.907312 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:17:25.907319 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:17:25.907333 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 13 20:17:25.907342 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:17:25.907349 kernel: cpuidle: using governor menu
Jan 13 20:17:25.907356 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:17:25.907363 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:17:25.907371 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:17:25.907377 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:17:25.907385 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:17:25.907392 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:17:25.907399 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:17:25.907407 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:17:25.907414 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:17:25.907421 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:17:25.907428 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:17:25.907435 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:17:25.907442 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:17:25.907450 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:17:25.907457 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:17:25.907464 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:17:25.907472 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:17:25.907480 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:17:25.907487 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:17:25.907494 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:17:25.907501 kernel: ACPI: Interpreter enabled
Jan 13 20:17:25.907508 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:17:25.907515 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:17:25.907522 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:17:25.907529 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:17:25.907536 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:17:25.907665 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:17:25.907736 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:17:25.907801 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:17:25.907862 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:17:25.907923 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:17:25.907932 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Jan 13 20:17:25.907940 kernel: PCI host bridge to bus 0000:00
Jan 13 20:17:25.908010 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:17:25.908070 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Jan 13 20:17:25.908126 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:17:25.908184 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:17:25.908273 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:17:25.908366 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:17:25.908452 kernel: pci 0000:00:01.0: reg 0x10: [io  0x0000-0x001f]
Jan 13 20:17:25.908519 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 20:17:25.908585 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:17:25.908649 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:17:25.908712 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 20:17:25.908775 kernel: pci 0000:00:01.0: BAR 0: assigned [io  0x1000-0x101f]
Jan 13 20:17:25.908833 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:17:25.908891 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Jan 13 20:17:25.908947 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:17:25.908957 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:17:25.908964 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:17:25.908971 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:17:25.908978 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:17:25.908985 kernel: iommu: Default domain type: Translated
Jan 13 20:17:25.908992 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:17:25.909001 kernel: efivars: Registered efivars operations
Jan 13 20:17:25.909008 kernel: vgaarb: loaded
Jan 13 20:17:25.909015 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:17:25.909022 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:17:25.909030 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:17:25.909037 kernel: pnp: PnP ACPI init
Jan 13 20:17:25.909110 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:17:25.909124 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:17:25.909131 kernel: NET: Registered PF_INET protocol family
Jan 13 20:17:25.909140 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:17:25.909148 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:17:25.909155 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:17:25.909162 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:17:25.909170 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:17:25.909177 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:17:25.909184 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:17:25.909191 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:17:25.909199 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:17:25.909206 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:17:25.909213 kernel: kvm [1]: HYP mode not available
Jan 13 20:17:25.909221 kernel: Initialise system trusted keyrings
Jan 13 20:17:25.909235 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:17:25.909242 kernel: Key type asymmetric registered
Jan 13 20:17:25.909249 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:17:25.909256 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:17:25.909263 kernel: io scheduler mq-deadline registered
Jan 13 20:17:25.909272 kernel: io scheduler kyber registered
Jan 13 20:17:25.909279 kernel: io scheduler bfq registered
Jan 13 20:17:25.909286 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:17:25.909294 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:17:25.909301 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:17:25.909390 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 20:17:25.909401 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:17:25.909408 kernel: thunder_xcv, ver 1.0
Jan 13 20:17:25.909416 kernel: thunder_bgx, ver 1.0
Jan 13 20:17:25.909423 kernel: nicpf, ver 1.0
Jan 13 20:17:25.909432 kernel: nicvf, ver 1.0
Jan 13 20:17:25.909517 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:17:25.909583 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:17:25 UTC (1736799445)
Jan 13 20:17:25.909593 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:17:25.909600 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:17:25.909607 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:17:25.909614 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:17:25.909624 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:17:25.909631 kernel: Segment Routing with IPv6
Jan 13 20:17:25.909638 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:17:25.909645 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:17:25.909652 kernel: Key type dns_resolver registered
Jan 13 20:17:25.909659 kernel: registered taskstats version 1
Jan 13 20:17:25.909666 kernel: Loading compiled-in X.509 certificates
Jan 13 20:17:25.909673 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:17:25.909680 kernel: Key type .fscrypt registered
Jan 13 20:17:25.909687 kernel: Key type fscrypt-provisioning registered
Jan 13 20:17:25.909695 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:17:25.909703 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:17:25.909709 kernel: ima: No architecture policies found
Jan 13 20:17:25.909717 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:17:25.909724 kernel: clk: Disabling unused clocks
Jan 13 20:17:25.909730 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:17:25.909737 kernel: Run /init as init process
Jan 13 20:17:25.909744 kernel:   with arguments:
Jan 13 20:17:25.909753 kernel:     /init
Jan 13 20:17:25.909759 kernel:   with environment:
Jan 13 20:17:25.909766 kernel:     HOME=/
Jan 13 20:17:25.909773 kernel:     TERM=linux
Jan 13 20:17:25.909780 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:17:25.909788 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:17:25.909797 systemd[1]: Detected virtualization kvm.
Jan 13 20:17:25.909805 systemd[1]: Detected architecture arm64.
Jan 13 20:17:25.909814 systemd[1]: Running in initrd.
Jan 13 20:17:25.909821 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:17:25.909828 systemd[1]: Hostname set to <localhost>.
Jan 13 20:17:25.909836 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:17:25.909844 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:17:25.909851 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:17:25.909859 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:17:25.909867 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:17:25.909876 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:17:25.909884 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:17:25.909892 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:17:25.909901 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:17:25.909908 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:17:25.909916 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:17:25.909924 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:17:25.909933 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:17:25.909941 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:17:25.909948 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:17:25.909956 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:17:25.909963 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:17:25.909971 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:17:25.909979 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:17:25.909986 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:17:25.909994 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:17:25.910004 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:17:25.910011 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:17:25.910019 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:17:25.910027 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:17:25.910034 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:17:25.910042 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:17:25.910049 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:17:25.910057 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:17:25.910066 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:17:25.910073 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:17:25.910081 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:17:25.910089 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:17:25.910096 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:17:25.910105 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:17:25.910114 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:17:25.910122 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:17:25.910143 systemd-journald[238]: Collecting audit messages is disabled.
Jan 13 20:17:25.910163 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:17:25.910171 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:17:25.910180 systemd-journald[238]: Journal started
Jan 13 20:17:25.910197 systemd-journald[238]: Runtime Journal (/run/log/journal/155086cdce53402386aaa30bad2c4484) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:17:25.896622 systemd-modules-load[239]: Inserted module 'overlay'
Jan 13 20:17:25.911987 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:17:25.914716 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:17:25.915126 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:17:25.917096 kernel: Bridge firewalling registered
Jan 13 20:17:25.915252 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 13 20:17:25.916465 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:17:25.928523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:17:25.929823 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:17:25.930942 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:17:25.934452 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:17:25.937538 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:17:25.943381 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:17:25.945734 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:17:25.947159 dracut-cmdline[272]: dracut-dracut-053
Jan 13 20:17:25.949187 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:17:25.975540 systemd-resolved[285]: Positive Trust Anchors:
Jan 13 20:17:25.975630 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:17:25.975662 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:17:25.980166 systemd-resolved[285]: Defaulting to hostname 'linux'.
Jan 13 20:17:25.981295 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:17:25.983178 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:17:26.018356 kernel: SCSI subsystem initialized
Jan 13 20:17:26.022341 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:17:26.029341 kernel: iscsi: registered transport (tcp)
Jan 13 20:17:26.044353 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:17:26.044368 kernel: QLogic iSCSI HBA Driver
Jan 13 20:17:26.084012 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:17:26.089470 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:17:26.106477 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:17:26.106530 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:17:26.106567 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:17:26.154351 kernel: raid6: neonx8   gen() 15666 MB/s
Jan 13 20:17:26.171349 kernel: raid6: neonx4   gen() 15459 MB/s
Jan 13 20:17:26.188352 kernel: raid6: neonx2   gen() 13078 MB/s
Jan 13 20:17:26.205347 kernel: raid6: neonx1   gen() 10394 MB/s
Jan 13 20:17:26.222339 kernel: raid6: int64x8  gen()  6893 MB/s
Jan 13 20:17:26.239354 kernel: raid6: int64x4  gen()  7270 MB/s
Jan 13 20:17:26.256349 kernel: raid6: int64x2  gen()  6068 MB/s
Jan 13 20:17:26.273350 kernel: raid6: int64x1  gen()  5021 MB/s
Jan 13 20:17:26.273373 kernel: raid6: using algorithm neonx8 gen() 15666 MB/s
Jan 13 20:17:26.290349 kernel: raid6: .... xor() 11860 MB/s, rmw enabled
Jan 13 20:17:26.290365 kernel: raid6: using neon recovery algorithm
Jan 13 20:17:26.295340 kernel: xor: measuring software checksum speed
Jan 13 20:17:26.295353 kernel:    8regs           : 19222 MB/sec
Jan 13 20:17:26.296754 kernel:    32regs          : 18263 MB/sec
Jan 13 20:17:26.296777 kernel:    arm64_neon      : 26493 MB/sec
Jan 13 20:17:26.296801 kernel: xor: using function: arm64_neon (26493 MB/sec)
Jan 13 20:17:26.354357 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:17:26.366394 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:17:26.374475 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:17:26.385245 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jan 13 20:17:26.388335 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:17:26.394478 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:17:26.408495 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Jan 13 20:17:26.444154 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:17:26.458514 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:17:26.497118 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:17:26.504471 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:17:26.517474 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:17:26.518721 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:17:26.520972 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:17:26.522950 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:17:26.534479 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:17:26.539352 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 20:17:26.546812 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:17:26.547768 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 20:17:26.547859 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:17:26.547876 kernel: GPT:9289727 != 19775487
Jan 13 20:17:26.547886 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:17:26.547895 kernel: GPT:9289727 != 19775487
Jan 13 20:17:26.547905 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:17:26.547914 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:17:26.552938 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:17:26.553037 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:17:26.555969 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:17:26.559525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:17:26.559713 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:17:26.562982 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:17:26.567964 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (513)
Jan 13 20:17:26.567995 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (520)
Jan 13 20:17:26.570972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:17:26.585547 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:17:26.586698 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:17:26.591813 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:17:26.598205 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:17:26.599142 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:17:26.604025 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:17:26.618465 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:17:26.622528 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:17:26.625274 disk-uuid[549]: Primary Header is updated.
Jan 13 20:17:26.625274 disk-uuid[549]: Secondary Entries is updated.
Jan 13 20:17:26.625274 disk-uuid[549]: Secondary Header is updated.
Jan 13 20:17:26.630774 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:17:26.644038 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:17:27.638618 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:17:27.638678 disk-uuid[550]: The operation has completed successfully.
Jan 13 20:17:27.666423 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:17:27.667341 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:17:27.684485 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:17:27.687078 sh[569]: Success
Jan 13 20:17:27.699909 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:17:27.728829 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:17:27.737520 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:17:27.739390 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:17:27.748080 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78
Jan 13 20:17:27.748112 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:17:27.748122 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:17:27.748139 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:17:27.749334 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:17:27.752068 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:17:27.753185 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:17:27.765532 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:17:27.766775 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:17:27.774813 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:17:27.774858 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:17:27.774868 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:17:27.777352 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:17:27.784058 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:17:27.785346 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:17:27.791382 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:17:27.797468 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:17:27.861134 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:17:27.874474 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:17:27.893951 ignition[664]: Ignition 2.20.0
Jan 13 20:17:27.893962 ignition[664]: Stage: fetch-offline
Jan 13 20:17:27.893994 ignition[664]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:17:27.894014 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:17:27.894170 ignition[664]: parsed url from cmdline: ""
Jan 13 20:17:27.894173 ignition[664]: no config URL provided
Jan 13 20:17:27.894178 ignition[664]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:17:27.894185 ignition[664]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:17:27.894226 ignition[664]: op(1): [started]  loading QEMU firmware config module
Jan 13 20:17:27.894231 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 20:17:27.897994 systemd-networkd[765]: lo: Link UP
Jan 13 20:17:27.897998 systemd-networkd[765]: lo: Gained carrier
Jan 13 20:17:27.898839 systemd-networkd[765]: Enumeration completed
Jan 13 20:17:27.898917 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:17:27.899234 ignition[664]: op(1): [finished] loading QEMU firmware config module
Jan 13 20:17:27.899873 systemd[1]: Reached target network.target - Network.
Jan 13 20:17:27.901547 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:17:27.901550 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:17:27.902214 systemd-networkd[765]: eth0: Link UP
Jan 13 20:17:27.902224 systemd-networkd[765]: eth0: Gained carrier
Jan 13 20:17:27.902231 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:17:27.913466 ignition[664]: parsing config with SHA512: a992ecf6da394ea8be9395eb05477967b103a7734e8b3e81d0ff7ef821a973f24de76d5dd5d13586f33849bfebd2168204d6b604280d41970b670a3af1dae7d1
Jan 13 20:17:27.915369 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:17:27.916692 unknown[664]: fetched base config from "system"
Jan 13 20:17:27.916700 unknown[664]: fetched user config from "qemu"
Jan 13 20:17:27.916942 ignition[664]: fetch-offline: fetch-offline passed
Jan 13 20:17:27.917012 ignition[664]: Ignition finished successfully
Jan 13 20:17:27.918518 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:17:27.919637 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:17:27.924493 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:17:27.934039 ignition[772]: Ignition 2.20.0
Jan 13 20:17:27.934050 ignition[772]: Stage: kargs
Jan 13 20:17:27.934207 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:17:27.934216 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:17:27.934895 ignition[772]: kargs: kargs passed
Jan 13 20:17:27.934933 ignition[772]: Ignition finished successfully
Jan 13 20:17:27.937939 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:17:27.940022 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:17:27.952444 ignition[781]: Ignition 2.20.0
Jan 13 20:17:27.952455 ignition[781]: Stage: disks
Jan 13 20:17:27.952605 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:17:27.952614 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:17:27.953270 ignition[781]: disks: disks passed
Jan 13 20:17:27.953309 ignition[781]: Ignition finished successfully
Jan 13 20:17:27.954866 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:17:27.955703 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:17:27.957000 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:17:27.958357 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:17:27.959483 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:17:27.960806 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:17:27.972506 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:17:27.981595 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:17:27.984472 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:17:27.986829 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:17:28.030345 kernel: EXT4-fs (vda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none.
Jan 13 20:17:28.030984 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:17:28.032016 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:17:28.047459 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:17:28.048951 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:17:28.049948 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:17:28.050018 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:17:28.050043 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:17:28.055756 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:17:28.056001 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800)
Jan 13 20:17:28.058790 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:17:28.059057 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:17:28.059073 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:17:28.059083 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:17:28.061287 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:17:28.062855 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:17:28.097791 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:17:28.101505 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:17:28.104632 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:17:28.108212 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:17:28.174917 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:17:28.190486 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:17:28.191825 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:17:28.196357 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:17:28.212882 ignition[915]: INFO     : Ignition 2.20.0
Jan 13 20:17:28.212882 ignition[915]: INFO     : Stage: mount
Jan 13 20:17:28.214847 ignition[915]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:17:28.214847 ignition[915]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:17:28.214847 ignition[915]: INFO     : mount: mount passed
Jan 13 20:17:28.214847 ignition[915]: INFO     : Ignition finished successfully
Jan 13 20:17:28.214396 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:17:28.215747 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:17:28.222430 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:17:28.746814 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:17:28.754489 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:17:28.759349 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Jan 13 20:17:28.761089 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:17:28.761103 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:17:28.761113 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:17:28.763338 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:17:28.764550 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:17:28.779939 ignition[946]: INFO     : Ignition 2.20.0
Jan 13 20:17:28.779939 ignition[946]: INFO     : Stage: files
Jan 13 20:17:28.781392 ignition[946]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:17:28.781392 ignition[946]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:17:28.781392 ignition[946]: DEBUG    : files: compiled without relabeling support, skipping
Jan 13 20:17:28.783697 unknown[946]: wrote ssh authorized keys file for user: core
Jan 13 20:17:28.784171 ignition[946]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 13 20:17:28.784171 ignition[946]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:17:28.784171 ignition[946]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:17:28.784171 ignition[946]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 13 20:17:28.784171 ignition[946]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:17:28.790108 ignition[946]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Jan 13 20:17:28.790108 ignition[946]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:17:28.790108 ignition[946]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:17:28.790108 ignition[946]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:17:28.790108 ignition[946]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:17:28.790108 ignition[946]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:17:28.790108 ignition[946]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:17:28.790108 ignition[946]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 13 20:17:29.041640 ignition[946]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 20:17:29.284025 ignition[946]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:17:29.284025 ignition[946]: INFO     : files: op(7): [started]  processing unit "coreos-metadata.service"
Jan 13 20:17:29.286744 ignition[946]: INFO     : files: op(7): op(8): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:17:29.286744 ignition[946]: INFO     : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:17:29.286744 ignition[946]: INFO     : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 13 20:17:29.286744 ignition[946]: INFO     : files: op(9): [started]  setting preset to disabled for "coreos-metadata.service"
Jan 13 20:17:29.308415 ignition[946]: INFO     : files: op(9): op(a): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:17:29.311436 ignition[946]: INFO     : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:17:29.312471 ignition[946]: INFO     : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:17:29.312471 ignition[946]: INFO     : files: createResultFile: createFiles: op(b): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:17:29.312471 ignition[946]: INFO     : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:17:29.312471 ignition[946]: INFO     : files: files passed
Jan 13 20:17:29.312471 ignition[946]: INFO     : Ignition finished successfully
Jan 13 20:17:29.313284 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:17:29.323449 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:17:29.324773 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:17:29.326152 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:17:29.326238 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:17:29.331775 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 20:17:29.334568 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:17:29.334568 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:17:29.336792 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:17:29.337361 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:17:29.338975 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:17:29.340798 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:17:29.344467 systemd-networkd[765]: eth0: Gained IPv6LL
Jan 13 20:17:29.367094 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:17:29.367181 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:17:29.368823 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:17:29.370081 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:17:29.371368 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:17:29.371974 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:17:29.385966 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:17:29.393527 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:17:29.400694 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:17:29.401598 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:17:29.403108 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:17:29.404450 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:17:29.404553 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:17:29.406438 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:17:29.407885 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:17:29.409031 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:17:29.410278 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:17:29.411682 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:17:29.413117 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:17:29.414439 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:17:29.415834 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:17:29.417339 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:17:29.418626 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:17:29.419719 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:17:29.419833 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:17:29.421525 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:17:29.422953 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:17:29.424299 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:17:29.425386 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:17:29.426544 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:17:29.426661 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:17:29.428726 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:17:29.428836 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:17:29.430279 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:17:29.431400 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:17:29.436380 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:17:29.437386 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:17:29.438904 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:17:29.440056 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:17:29.440138 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:17:29.441237 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:17:29.441313 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:17:29.442418 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:17:29.442518 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:17:29.443915 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:17:29.444012 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:17:29.456488 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:17:29.457770 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:17:29.458416 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:17:29.458523 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:17:29.459830 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:17:29.459916 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:17:29.464684 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:17:29.465652 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:17:29.469311 ignition[1001]: INFO     : Ignition 2.20.0
Jan 13 20:17:29.469311 ignition[1001]: INFO     : Stage: umount
Jan 13 20:17:29.470605 ignition[1001]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:17:29.470605 ignition[1001]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:17:29.470605 ignition[1001]: INFO     : umount: umount passed
Jan 13 20:17:29.470605 ignition[1001]: INFO     : Ignition finished successfully
Jan 13 20:17:29.470499 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:17:29.472521 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:17:29.472607 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:17:29.473805 systemd[1]: Stopped target network.target - Network.
Jan 13 20:17:29.474809 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:17:29.474862 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:17:29.476017 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:17:29.476099 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:17:29.477334 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:17:29.477378 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:17:29.478717 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:17:29.478755 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:17:29.480224 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:17:29.481442 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:17:29.488869 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:17:29.488984 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:17:29.491319 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:17:29.491383 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:17:29.492372 systemd-networkd[765]: eth0: DHCPv6 lease lost
Jan 13 20:17:29.494301 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:17:29.494425 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:17:29.496141 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:17:29.496171 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:17:29.506438 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:17:29.507090 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:17:29.507143 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:17:29.508151 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:17:29.508193 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:17:29.509458 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:17:29.509494 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:17:29.510847 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:17:29.519256 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:17:29.519407 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:17:29.520484 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:17:29.521305 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:17:29.522985 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:17:29.523070 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:17:29.536081 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:17:29.536240 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:17:29.538128 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:17:29.538170 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:17:29.539506 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:17:29.539537 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:17:29.540773 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:17:29.540813 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:17:29.542708 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:17:29.542751 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:17:29.544673 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:17:29.544713 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:17:29.555460 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:17:29.556207 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:17:29.556267 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:17:29.558053 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 20:17:29.558091 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:17:29.559529 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:17:29.559567 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:17:29.561155 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:17:29.561192 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:17:29.562841 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:17:29.564359 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:17:29.566204 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:17:29.567816 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:17:29.596191 systemd[1]: Switching root.
Jan 13 20:17:29.622143 systemd-journald[238]: Journal stopped
Jan 13 20:17:30.264207 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:17:30.264267 kernel: SELinux:  policy capability network_peer_controls=1
Jan 13 20:17:30.264280 kernel: SELinux:  policy capability open_perms=1
Jan 13 20:17:30.264293 kernel: SELinux:  policy capability extended_socket_class=1
Jan 13 20:17:30.264302 kernel: SELinux:  policy capability always_check_network=0
Jan 13 20:17:30.264312 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 13 20:17:30.264346 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 13 20:17:30.264359 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jan 13 20:17:30.264369 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jan 13 20:17:30.264379 kernel: audit: type=1403 audit(1736799449.744:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:17:30.264390 systemd[1]: Successfully loaded SELinux policy in 29.799ms.
Jan 13 20:17:30.264409 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.267ms.
Jan 13 20:17:30.264422 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:17:30.264434 systemd[1]: Detected virtualization kvm.
Jan 13 20:17:30.264444 systemd[1]: Detected architecture arm64.
Jan 13 20:17:30.264454 systemd[1]: Detected first boot.
Jan 13 20:17:30.264464 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:17:30.264476 zram_generator::config[1047]: No configuration found.
Jan 13 20:17:30.264488 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:17:30.264499 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:17:30.264509 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:17:30.264520 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:17:30.264531 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:17:30.264541 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:17:30.264551 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:17:30.264562 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:17:30.264573 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:17:30.264586 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:17:30.264596 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:17:30.264607 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:17:30.264617 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:17:30.264628 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:17:30.264639 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:17:30.264650 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:17:30.264662 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:17:30.264673 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:17:30.264684 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 13 20:17:30.264694 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:17:30.264704 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:17:30.264715 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:17:30.264725 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:17:30.264736 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:17:30.264747 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:17:30.264758 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:17:30.264768 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:17:30.264779 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:17:30.264789 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:17:30.264799 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:17:30.264811 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:17:30.264821 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:17:30.264832 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:17:30.264846 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:17:30.264858 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:17:30.264868 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:17:30.264879 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:17:30.264889 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:17:30.264899 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:17:30.264910 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:17:30.264921 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:17:30.264931 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:17:30.264943 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:17:30.264954 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:17:30.264965 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:17:30.264975 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:17:30.264986 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:17:30.264996 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:17:30.265007 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:17:30.265017 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:17:30.265030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:17:30.265041 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:17:30.265052 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:17:30.265063 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:17:30.265073 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:17:30.265084 kernel: fuse: init (API version 7.39)
Jan 13 20:17:30.265093 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:17:30.265103 kernel: loop: module loaded
Jan 13 20:17:30.265113 kernel: ACPI: bus type drm_connector registered
Jan 13 20:17:30.265125 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:17:30.265136 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:17:30.265146 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:17:30.265157 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:17:30.265168 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:17:30.265178 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:17:30.265205 systemd-journald[1115]: Collecting audit messages is disabled.
Jan 13 20:17:30.265232 systemd[1]: Stopped verity-setup.service.
Jan 13 20:17:30.265246 systemd-journald[1115]: Journal started
Jan 13 20:17:30.265266 systemd-journald[1115]: Runtime Journal (/run/log/journal/155086cdce53402386aaa30bad2c4484) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:17:30.085151 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:17:30.107217 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 20:17:30.107553 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:17:30.267386 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:17:30.267989 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:17:30.269234 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:17:30.270272 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:17:30.271198 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:17:30.272175 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:17:30.273150 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:17:30.274205 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:17:30.275492 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:17:30.276760 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:17:30.276898 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:17:30.278058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:17:30.278206 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:17:30.279456 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:17:30.279596 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:17:30.280593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:17:30.280714 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:17:30.281975 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:17:30.282107 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:17:30.283143 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:17:30.283268 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:17:30.284345 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:17:30.285524 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:17:30.286669 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:17:30.297663 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:17:30.308431 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:17:30.310171 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:17:30.311052 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:17:30.311085 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:17:30.313012 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:17:30.314874 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:17:30.316663 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:17:30.317505 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:17:30.319007 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:17:30.320895 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:17:30.322020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:17:30.325466 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:17:30.327484 systemd-journald[1115]: Time spent on flushing to /var/log/journal/155086cdce53402386aaa30bad2c4484 is 24.271ms for 837 entries.
Jan 13 20:17:30.327484 systemd-journald[1115]: System Journal (/var/log/journal/155086cdce53402386aaa30bad2c4484) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:17:30.356718 systemd-journald[1115]: Received client request to flush runtime journal.
Jan 13 20:17:30.327447 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:17:30.328565 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:17:30.336157 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:17:30.338145 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:17:30.340662 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:17:30.341920 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:17:30.342889 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:17:30.344289 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:17:30.352496 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:17:30.354622 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:17:30.356525 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:17:30.358314 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:17:30.360762 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:17:30.362581 kernel: loop0: detected capacity change from 0 to 194096
Jan 13 20:17:30.370159 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Jan 13 20:17:30.370175 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Jan 13 20:17:30.374564 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:17:30.377947 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:17:30.378996 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:17:30.381542 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:17:30.386671 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 20:17:30.410352 kernel: loop1: detected capacity change from 0 to 116808
Jan 13 20:17:30.425983 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:17:30.426795 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:17:30.428202 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:17:30.436502 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:17:30.448351 kernel: loop2: detected capacity change from 0 to 113536
Jan 13 20:17:30.456173 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Jan 13 20:17:30.456194 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Jan 13 20:17:30.460558 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:17:30.490384 kernel: loop3: detected capacity change from 0 to 194096
Jan 13 20:17:30.496498 kernel: loop4: detected capacity change from 0 to 116808
Jan 13 20:17:30.502179 kernel: loop5: detected capacity change from 0 to 113536
Jan 13 20:17:30.503587 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 20:17:30.503932 (sd-merge)[1187]: Merged extensions into '/usr'.
Jan 13 20:17:30.507486 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:17:30.507498 systemd[1]: Reloading...
Jan 13 20:17:30.553488 zram_generator::config[1209]: No configuration found.
Jan 13 20:17:30.599380 ldconfig[1154]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:17:30.652894 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:17:30.687657 systemd[1]: Reloading finished in 179 ms.
Jan 13 20:17:30.715708 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:17:30.716898 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:17:30.729592 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:17:30.731241 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:17:30.742386 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:17:30.742405 systemd[1]: Reloading...
Jan 13 20:17:30.750276 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:17:30.750554 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:17:30.751168 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:17:30.751487 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jan 13 20:17:30.751543 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jan 13 20:17:30.754024 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:17:30.754032 systemd-tmpfiles[1248]: Skipping /boot
Jan 13 20:17:30.760738 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:17:30.760755 systemd-tmpfiles[1248]: Skipping /boot
Jan 13 20:17:30.785811 zram_generator::config[1276]: No configuration found.
Jan 13 20:17:30.864450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:17:30.899082 systemd[1]: Reloading finished in 156 ms.
Jan 13 20:17:30.915359 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:17:30.927778 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:17:30.936042 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:17:30.938270 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:17:30.940238 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:17:30.945592 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:17:30.954283 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:17:30.960527 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:17:30.963792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:17:30.973719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:17:30.975768 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:17:30.980613 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:17:30.981970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:17:30.984874 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:17:30.988399 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:17:30.989276 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Jan 13 20:17:30.989947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:17:30.990074 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:17:30.992528 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:17:30.992654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:17:30.994952 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:17:30.995937 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:17:31.003360 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:17:31.008098 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:17:31.016681 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:17:31.020601 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:17:31.023487 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:17:31.024439 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:17:31.027647 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:17:31.028480 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:17:31.029049 augenrules[1364]: No rules
Jan 13 20:17:31.031340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:17:31.033188 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:17:31.033423 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:17:31.034689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:17:31.034816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:17:31.036288 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:17:31.037767 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:17:31.037889 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:17:31.043132 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:17:31.044676 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:17:31.044805 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:17:31.062272 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:17:31.066414 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:17:31.074430 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1356)
Jan 13 20:17:31.075563 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:17:31.076465 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:17:31.079292 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:17:31.084988 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:17:31.089596 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:17:31.093143 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:17:31.094018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:17:31.097218 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:17:31.100237 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:17:31.101456 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:17:31.101930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:17:31.103381 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:17:31.104495 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:17:31.104615 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:17:31.111813 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:17:31.111977 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:17:31.114127 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:17:31.115375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:17:31.121809 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 13 20:17:31.124243 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:17:31.130036 augenrules[1388]: /sbin/augenrules: No change
Jan 13 20:17:31.133274 systemd-resolved[1315]: Positive Trust Anchors:
Jan 13 20:17:31.133365 systemd-resolved[1315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:17:31.133401 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:17:31.138579 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:17:31.139526 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:17:31.139586 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:17:31.140511 systemd-resolved[1315]: Defaulting to hostname 'linux'.
Jan 13 20:17:31.148503 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:17:31.149507 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:17:31.151393 augenrules[1420]: No rules
Jan 13 20:17:31.153243 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:17:31.153622 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:17:31.169258 systemd-networkd[1401]: lo: Link UP
Jan 13 20:17:31.169268 systemd-networkd[1401]: lo: Gained carrier
Jan 13 20:17:31.170036 systemd-networkd[1401]: Enumeration completed
Jan 13 20:17:31.170413 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:17:31.170550 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:17:31.170557 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:17:31.171371 systemd-networkd[1401]: eth0: Link UP
Jan 13 20:17:31.171378 systemd-networkd[1401]: eth0: Gained carrier
Jan 13 20:17:31.171390 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:17:31.171889 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:17:31.174593 systemd[1]: Reached target network.target - Network.
Jan 13 20:17:31.185581 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:17:31.186534 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:17:31.187853 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:17:31.192399 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.86/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:17:31.194757 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection.
Jan 13 20:17:31.196489 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 20:17:31.196544 systemd-timesyncd[1403]: Initial clock synchronization to Mon 2025-01-13 20:17:31.160701 UTC.
Jan 13 20:17:31.211610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:17:31.219534 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:17:31.232567 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:17:31.249667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:17:31.250335 lvm[1437]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:17:31.288863 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:17:31.290014 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:17:31.290887 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:17:31.291738 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:17:31.292631 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:17:31.293665 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:17:31.294534 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:17:31.295433 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:17:31.296292 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:17:31.296334 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:17:31.296964 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:17:31.298398 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:17:31.300384 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:17:31.310075 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:17:31.311925 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:17:31.313135 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:17:31.314014 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:17:31.314727 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:17:31.315432 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:17:31.315462 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:17:31.316262 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:17:31.317977 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:17:31.320462 lvm[1444]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:17:31.321544 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:17:31.324602 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:17:31.325622 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:17:31.327479 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:17:31.331464 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:17:31.335235 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:17:31.337637 jq[1447]: false
Jan 13 20:17:31.342431 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:17:31.343084 dbus-daemon[1446]: [system] SELinux support is enabled
Jan 13 20:17:31.343833 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:17:31.344247 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:17:31.344846 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:17:31.347463 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:17:31.351904 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:17:31.354533 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:17:31.355971 extend-filesystems[1448]: Found loop3
Jan 13 20:17:31.356681 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:17:31.356820 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:17:31.356948 extend-filesystems[1448]: Found loop4
Jan 13 20:17:31.357059 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:17:31.357184 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:17:31.357616 extend-filesystems[1448]: Found loop5
Jan 13 20:17:31.358844 extend-filesystems[1448]: Found vda
Jan 13 20:17:31.360023 extend-filesystems[1448]: Found vda1
Jan 13 20:17:31.360984 extend-filesystems[1448]: Found vda2
Jan 13 20:17:31.360984 extend-filesystems[1448]: Found vda3
Jan 13 20:17:31.360984 extend-filesystems[1448]: Found usr
Jan 13 20:17:31.360984 extend-filesystems[1448]: Found vda4
Jan 13 20:17:31.360984 extend-filesystems[1448]: Found vda6
Jan 13 20:17:31.360984 extend-filesystems[1448]: Found vda7
Jan 13 20:17:31.360984 extend-filesystems[1448]: Found vda9
Jan 13 20:17:31.360984 extend-filesystems[1448]: Checking size of /dev/vda9
Jan 13 20:17:31.373386 jq[1456]: true
Jan 13 20:17:31.362752 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:17:31.362786 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:17:31.365476 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:17:31.365492 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:17:31.383931 jq[1467]: true
Jan 13 20:17:31.386967 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:17:31.389383 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:17:31.390048 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:17:31.393818 extend-filesystems[1448]: Resized partition /dev/vda9
Jan 13 20:17:31.403940 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:17:31.407690 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 20:17:31.394235 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 20:17:31.399424 systemd-logind[1452]: New seat seat0.
Jan 13 20:17:31.406693 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:17:31.411007 update_engine[1455]: I20250113 20:17:31.409502  1455 main.cc:92] Flatcar Update Engine starting
Jan 13 20:17:31.412696 update_engine[1455]: I20250113 20:17:31.412578  1455 update_check_scheduler.cc:74] Next update check in 2m1s
Jan 13 20:17:31.412822 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:17:31.422359 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 20:17:31.424605 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:17:31.435768 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1351)
Jan 13 20:17:31.436196 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:17:31.436196 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:17:31.436196 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 20:17:31.439522 extend-filesystems[1448]: Resized filesystem in /dev/vda9
Jan 13 20:17:31.442610 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:17:31.442820 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:17:31.465302 bash[1495]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:17:31.468202 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:17:31.469705 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 20:17:31.491081 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:17:31.581700 containerd[1473]: time="2025-01-13T20:17:31.581570720Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:17:31.604700 containerd[1473]: time="2025-01-13T20:17:31.604651840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:31.606071 containerd[1473]: time="2025-01-13T20:17:31.606017800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:17:31.606071 containerd[1473]: time="2025-01-13T20:17:31.606050200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:17:31.606071 containerd[1473]: time="2025-01-13T20:17:31.606067480Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:17:31.606261 containerd[1473]: time="2025-01-13T20:17:31.606233800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:17:31.606261 containerd[1473]: time="2025-01-13T20:17:31.606257280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:31.606335 containerd[1473]: time="2025-01-13T20:17:31.606311960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:17:31.606363 containerd[1473]: time="2025-01-13T20:17:31.606336560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:31.606527 containerd[1473]: time="2025-01-13T20:17:31.606498280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:17:31.606527 containerd[1473]: time="2025-01-13T20:17:31.606517640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:31.606563 containerd[1473]: time="2025-01-13T20:17:31.606532320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:17:31.606563 containerd[1473]: time="2025-01-13T20:17:31.606541440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:31.606622 containerd[1473]: time="2025-01-13T20:17:31.606609720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:31.606803 containerd[1473]: time="2025-01-13T20:17:31.606782000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:31.606888 containerd[1473]: time="2025-01-13T20:17:31.606874880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:17:31.606909 containerd[1473]: time="2025-01-13T20:17:31.606891200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:17:31.606972 containerd[1473]: time="2025-01-13T20:17:31.606960120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:17:31.607010 containerd[1473]: time="2025-01-13T20:17:31.607000040Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:17:31.611618 containerd[1473]: time="2025-01-13T20:17:31.611587120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:17:31.611663 containerd[1473]: time="2025-01-13T20:17:31.611638080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:17:31.611663 containerd[1473]: time="2025-01-13T20:17:31.611652840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:17:31.611699 containerd[1473]: time="2025-01-13T20:17:31.611669760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:17:31.611699 containerd[1473]: time="2025-01-13T20:17:31.611689320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:17:31.611859 containerd[1473]: time="2025-01-13T20:17:31.611842400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:17:31.612075 containerd[1473]: time="2025-01-13T20:17:31.612061480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:17:31.612182 containerd[1473]: time="2025-01-13T20:17:31.612168240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:17:31.612221 containerd[1473]: time="2025-01-13T20:17:31.612186400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:17:31.612221 containerd[1473]: time="2025-01-13T20:17:31.612200840Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:17:31.612258 containerd[1473]: time="2025-01-13T20:17:31.612221480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:17:31.612258 containerd[1473]: time="2025-01-13T20:17:31.612234920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:17:31.612258 containerd[1473]: time="2025-01-13T20:17:31.612247960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:17:31.612306 containerd[1473]: time="2025-01-13T20:17:31.612260560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:17:31.612306 containerd[1473]: time="2025-01-13T20:17:31.612283040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:17:31.612306 containerd[1473]: time="2025-01-13T20:17:31.612296720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:17:31.612381 containerd[1473]: time="2025-01-13T20:17:31.612309400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:17:31.612381 containerd[1473]: time="2025-01-13T20:17:31.612320520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:17:31.612381 containerd[1473]: time="2025-01-13T20:17:31.612352640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612381 containerd[1473]: time="2025-01-13T20:17:31.612366480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612381 containerd[1473]: time="2025-01-13T20:17:31.612377720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612487 containerd[1473]: time="2025-01-13T20:17:31.612388760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612487 containerd[1473]: time="2025-01-13T20:17:31.612400160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612487 containerd[1473]: time="2025-01-13T20:17:31.612413040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612487 containerd[1473]: time="2025-01-13T20:17:31.612425080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612487 containerd[1473]: time="2025-01-13T20:17:31.612437200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612487 containerd[1473]: time="2025-01-13T20:17:31.612449480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612487 containerd[1473]: time="2025-01-13T20:17:31.612463880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612487 containerd[1473]: time="2025-01-13T20:17:31.612477000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612487 containerd[1473]: time="2025-01-13T20:17:31.612488360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612638 containerd[1473]: time="2025-01-13T20:17:31.612500520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612638 containerd[1473]: time="2025-01-13T20:17:31.612517440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:17:31.612638 containerd[1473]: time="2025-01-13T20:17:31.612537080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612638 containerd[1473]: time="2025-01-13T20:17:31.612558280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612638 containerd[1473]: time="2025-01-13T20:17:31.612569520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:17:31.612756 containerd[1473]: time="2025-01-13T20:17:31.612743080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:17:31.612782 containerd[1473]: time="2025-01-13T20:17:31.612761680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:17:31.612782 containerd[1473]: time="2025-01-13T20:17:31.612774200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:17:31.612830 containerd[1473]: time="2025-01-13T20:17:31.612785600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:17:31.612830 containerd[1473]: time="2025-01-13T20:17:31.612794160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.612830 containerd[1473]: time="2025-01-13T20:17:31.612805920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:17:31.612830 containerd[1473]: time="2025-01-13T20:17:31.612815240Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:17:31.612830 containerd[1473]: time="2025-01-13T20:17:31.612824960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:17:31.613131 containerd[1473]: time="2025-01-13T20:17:31.613067440Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:17:31.613131 containerd[1473]: time="2025-01-13T20:17:31.613117160Z" level=info msg="Connect containerd service"
Jan 13 20:17:31.613281 containerd[1473]: time="2025-01-13T20:17:31.613146680Z" level=info msg="using legacy CRI server"
Jan 13 20:17:31.613281 containerd[1473]: time="2025-01-13T20:17:31.613154120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:17:31.613425 containerd[1473]: time="2025-01-13T20:17:31.613407080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:17:31.614085 containerd[1473]: time="2025-01-13T20:17:31.614059160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:17:31.614705 containerd[1473]: time="2025-01-13T20:17:31.614383440Z" level=info msg="Start subscribing containerd event"
Jan 13 20:17:31.614705 containerd[1473]: time="2025-01-13T20:17:31.614437360Z" level=info msg="Start recovering state"
Jan 13 20:17:31.614705 containerd[1473]: time="2025-01-13T20:17:31.614497080Z" level=info msg="Start event monitor"
Jan 13 20:17:31.614705 containerd[1473]: time="2025-01-13T20:17:31.614508000Z" level=info msg="Start snapshots syncer"
Jan 13 20:17:31.614705 containerd[1473]: time="2025-01-13T20:17:31.614517320Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:17:31.614705 containerd[1473]: time="2025-01-13T20:17:31.614527160Z" level=info msg="Start streaming server"
Jan 13 20:17:31.614835 containerd[1473]: time="2025-01-13T20:17:31.614735560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:17:31.614835 containerd[1473]: time="2025-01-13T20:17:31.614786760Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:17:31.617416 containerd[1473]: time="2025-01-13T20:17:31.616585880Z" level=info msg="containerd successfully booted in 0.037161s"
Jan 13 20:17:31.616665 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:17:32.304928 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:17:32.323386 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:17:32.338957 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:17:32.344070 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:17:32.344275 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:17:32.347140 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:17:32.362196 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:17:32.372722 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:17:32.374574 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 13 20:17:32.375541 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:17:32.479475 systemd-networkd[1401]: eth0: Gained IPv6LL
Jan 13 20:17:32.482224 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:17:32.483766 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:17:32.501623 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 20:17:32.503962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:32.505973 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:17:32.521349 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 20:17:32.523378 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 20:17:32.525435 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:17:32.527951 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:17:32.991511 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:32.992736 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:17:32.996157 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:17:32.998088 systemd[1]: Startup finished in 515ms (kernel) + 4.042s (initrd) + 3.288s (userspace) = 7.846s.
Jan 13 20:17:33.455814 kubelet[1552]: E0113 20:17:33.455703    1552 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:17:33.458266 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:17:33.458441 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:17:38.302884 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:17:38.303961 systemd[1]: Started sshd@0-10.0.0.86:22-10.0.0.1:55436.service - OpenSSH per-connection server daemon (10.0.0.1:55436).
Jan 13 20:17:38.361339 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 55436 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:17:38.364669 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:17:38.372376 systemd-logind[1452]: New session 1 of user core.
Jan 13 20:17:38.373283 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:17:38.382525 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:17:38.390713 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:17:38.393802 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:17:38.399789 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:17:38.467095 systemd[1570]: Queued start job for default target default.target.
Jan 13 20:17:38.475191 systemd[1570]: Created slice app.slice - User Application Slice.
Jan 13 20:17:38.475234 systemd[1570]: Reached target paths.target - Paths.
Jan 13 20:17:38.475245 systemd[1570]: Reached target timers.target - Timers.
Jan 13 20:17:38.476372 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:17:38.484809 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:17:38.484865 systemd[1570]: Reached target sockets.target - Sockets.
Jan 13 20:17:38.484877 systemd[1570]: Reached target basic.target - Basic System.
Jan 13 20:17:38.484910 systemd[1570]: Reached target default.target - Main User Target.
Jan 13 20:17:38.484934 systemd[1570]: Startup finished in 80ms.
Jan 13 20:17:38.485206 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:17:38.486422 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:17:38.547792 systemd[1]: Started sshd@1-10.0.0.86:22-10.0.0.1:55446.service - OpenSSH per-connection server daemon (10.0.0.1:55446).
Jan 13 20:17:38.593526 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 55446 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:17:38.594663 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:17:38.598417 systemd-logind[1452]: New session 2 of user core.
Jan 13 20:17:38.606469 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:17:38.655950 sshd[1583]: Connection closed by 10.0.0.1 port 55446
Jan 13 20:17:38.656230 sshd-session[1581]: pam_unix(sshd:session): session closed for user core
Jan 13 20:17:38.666423 systemd[1]: sshd@1-10.0.0.86:22-10.0.0.1:55446.service: Deactivated successfully.
Jan 13 20:17:38.667755 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:17:38.668877 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:17:38.669934 systemd[1]: Started sshd@2-10.0.0.86:22-10.0.0.1:55450.service - OpenSSH per-connection server daemon (10.0.0.1:55450).
Jan 13 20:17:38.670646 systemd-logind[1452]: Removed session 2.
Jan 13 20:17:38.706871 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 55450 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:17:38.707939 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:17:38.711379 systemd-logind[1452]: New session 3 of user core.
Jan 13 20:17:38.719475 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:17:38.766807 sshd[1590]: Connection closed by 10.0.0.1 port 55450
Jan 13 20:17:38.767161 sshd-session[1588]: pam_unix(sshd:session): session closed for user core
Jan 13 20:17:38.785280 systemd[1]: sshd@2-10.0.0.86:22-10.0.0.1:55450.service: Deactivated successfully.
Jan 13 20:17:38.786451 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 20:17:38.788290 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:17:38.789301 systemd[1]: Started sshd@3-10.0.0.86:22-10.0.0.1:55456.service - OpenSSH per-connection server daemon (10.0.0.1:55456).
Jan 13 20:17:38.790009 systemd-logind[1452]: Removed session 3.
Jan 13 20:17:38.826247 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 55456 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:17:38.827345 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:17:38.830769 systemd-logind[1452]: New session 4 of user core.
Jan 13 20:17:38.843519 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:17:38.895351 sshd[1597]: Connection closed by 10.0.0.1 port 55456
Jan 13 20:17:38.895770 sshd-session[1595]: pam_unix(sshd:session): session closed for user core
Jan 13 20:17:38.910549 systemd[1]: sshd@3-10.0.0.86:22-10.0.0.1:55456.service: Deactivated successfully.
Jan 13 20:17:38.912871 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:17:38.913833 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:17:38.923680 systemd[1]: Started sshd@4-10.0.0.86:22-10.0.0.1:55466.service - OpenSSH per-connection server daemon (10.0.0.1:55466).
Jan 13 20:17:38.924607 systemd-logind[1452]: Removed session 4.
Jan 13 20:17:38.957837 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 55466 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:17:38.958946 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:17:38.963088 systemd-logind[1452]: New session 5 of user core.
Jan 13 20:17:38.977497 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:17:39.040226 sudo[1605]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:17:39.042373 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:17:39.059078 sudo[1605]: pam_unix(sudo:session): session closed for user root
Jan 13 20:17:39.060399 sshd[1604]: Connection closed by 10.0.0.1 port 55466
Jan 13 20:17:39.060878 sshd-session[1602]: pam_unix(sshd:session): session closed for user core
Jan 13 20:17:39.082548 systemd[1]: sshd@4-10.0.0.86:22-10.0.0.1:55466.service: Deactivated successfully.
Jan 13 20:17:39.083838 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:17:39.086374 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:17:39.095554 systemd[1]: Started sshd@5-10.0.0.86:22-10.0.0.1:55480.service - OpenSSH per-connection server daemon (10.0.0.1:55480).
Jan 13 20:17:39.096827 systemd-logind[1452]: Removed session 5.
Jan 13 20:17:39.130243 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 55480 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:17:39.131317 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:17:39.135384 systemd-logind[1452]: New session 6 of user core.
Jan 13 20:17:39.150470 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:17:39.200394 sudo[1614]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:17:39.200663 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:17:39.203555 sudo[1614]: pam_unix(sudo:session): session closed for user root
Jan 13 20:17:39.207753 sudo[1613]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:17:39.208021 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:17:39.226582 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:17:39.248105 augenrules[1636]: No rules
Jan 13 20:17:39.249248 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:17:39.251352 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:17:39.252292 sudo[1613]: pam_unix(sudo:session): session closed for user root
Jan 13 20:17:39.253451 sshd[1612]: Connection closed by 10.0.0.1 port 55480
Jan 13 20:17:39.253920 sshd-session[1610]: pam_unix(sshd:session): session closed for user core
Jan 13 20:17:39.270660 systemd[1]: sshd@5-10.0.0.86:22-10.0.0.1:55480.service: Deactivated successfully.
Jan 13 20:17:39.271975 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:17:39.273092 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:17:39.277564 systemd[1]: Started sshd@6-10.0.0.86:22-10.0.0.1:55486.service - OpenSSH per-connection server daemon (10.0.0.1:55486).
Jan 13 20:17:39.278321 systemd-logind[1452]: Removed session 6.
Jan 13 20:17:39.311483 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 55486 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:17:39.312905 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:17:39.316379 systemd-logind[1452]: New session 7 of user core.
Jan 13 20:17:39.329467 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:17:39.378750 sudo[1647]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:17:39.379159 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:17:39.403827 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 20:17:39.417470 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 20:17:39.418411 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 20:17:39.896305 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:39.904527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:39.919883 systemd[1]: Reloading requested from client PID 1696 ('systemctl') (unit session-7.scope)...
Jan 13 20:17:39.919898 systemd[1]: Reloading...
Jan 13 20:17:39.969372 zram_generator::config[1731]: No configuration found.
Jan 13 20:17:40.135936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:17:40.186096 systemd[1]: Reloading finished in 265 ms.
Jan 13 20:17:40.229881 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:17:40.230069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:40.232712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:17:40.321971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:17:40.325766 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:17:40.361517 kubelet[1780]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:17:40.361517 kubelet[1780]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:17:40.361517 kubelet[1780]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:17:40.361884 kubelet[1780]: I0113 20:17:40.361688    1780 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:17:41.583589 kubelet[1780]: I0113 20:17:41.583539    1780 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 13 20:17:41.583589 kubelet[1780]: I0113 20:17:41.583569    1780 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:17:41.583973 kubelet[1780]: I0113 20:17:41.583772    1780 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 13 20:17:41.607204 kubelet[1780]: I0113 20:17:41.607097    1780 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:17:41.616693 kubelet[1780]: I0113 20:17:41.616665    1780 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 13 20:17:41.618166 kubelet[1780]: I0113 20:17:41.617909    1780 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:17:41.618166 kubelet[1780]: I0113 20:17:41.617954    1780 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.86","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:17:41.618339 kubelet[1780]: I0113 20:17:41.618178    1780 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:17:41.618339 kubelet[1780]: I0113 20:17:41.618187    1780 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:17:41.618531 kubelet[1780]: I0113 20:17:41.618515    1780 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:17:41.619685 kubelet[1780]: I0113 20:17:41.619659    1780 kubelet.go:400] "Attempting to sync node with API server"
Jan 13 20:17:41.619685 kubelet[1780]: I0113 20:17:41.619685    1780 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:17:41.620284 kubelet[1780]: I0113 20:17:41.619897    1780 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:17:41.620284 kubelet[1780]: I0113 20:17:41.620032    1780 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:17:41.620284 kubelet[1780]: E0113 20:17:41.620158    1780 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:41.620284 kubelet[1780]: E0113 20:17:41.620258    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:41.622908 kubelet[1780]: I0113 20:17:41.622881    1780 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:17:41.623241 kubelet[1780]: I0113 20:17:41.623230    1780 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:17:41.623351 kubelet[1780]: W0113 20:17:41.623339    1780 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:17:41.624237 kubelet[1780]: I0113 20:17:41.624091    1780 server.go:1264] "Started kubelet"
Jan 13 20:17:41.625319 kubelet[1780]: I0113 20:17:41.625174    1780 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:17:41.625319 kubelet[1780]: I0113 20:17:41.625285    1780 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:17:41.626149 kubelet[1780]: I0113 20:17:41.625285    1780 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:17:41.626479 kubelet[1780]: I0113 20:17:41.626174    1780 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:17:41.627747 kubelet[1780]: I0113 20:17:41.626826    1780 server.go:455] "Adding debug handlers to kubelet server"
Jan 13 20:17:41.634058 kubelet[1780]: E0113 20:17:41.633691    1780 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:17:41.634058 kubelet[1780]: E0113 20:17:41.633691    1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.86\" not found"
Jan 13 20:17:41.634058 kubelet[1780]: I0113 20:17:41.633776    1780 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:17:41.634058 kubelet[1780]: I0113 20:17:41.633861    1780 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 13 20:17:41.634058 kubelet[1780]: I0113 20:17:41.633985    1780 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:17:41.637630 kubelet[1780]: I0113 20:17:41.634560    1780 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:17:41.637630 kubelet[1780]: I0113 20:17:41.635551    1780 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:17:41.637630 kubelet[1780]: I0113 20:17:41.635565    1780 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:17:41.638662 kubelet[1780]: E0113 20:17:41.638493    1780 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.86.181a59e88c31384a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.86,UID:10.0.0.86,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.86,},FirstTimestamp:2025-01-13 20:17:41.62406817 +0000 UTC m=+1.295216509,LastTimestamp:2025-01-13 20:17:41.62406817 +0000 UTC m=+1.295216509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.86,}"
Jan 13 20:17:41.638920 kubelet[1780]: W0113 20:17:41.638816    1780 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.86" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 20:17:41.638920 kubelet[1780]: E0113 20:17:41.638841    1780 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.86" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 20:17:41.638920 kubelet[1780]: W0113 20:17:41.638883    1780 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 20:17:41.638920 kubelet[1780]: E0113 20:17:41.638893    1780 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 20:17:41.639281 kubelet[1780]: E0113 20:17:41.639004    1780 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.86\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 13 20:17:41.639281 kubelet[1780]: W0113 20:17:41.639137    1780 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 13 20:17:41.639281 kubelet[1780]: E0113 20:17:41.639152    1780 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 13 20:17:41.649788 kubelet[1780]: I0113 20:17:41.649754    1780 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:17:41.649788 kubelet[1780]: I0113 20:17:41.649778    1780 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:17:41.649788 kubelet[1780]: I0113 20:17:41.649795    1780 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:17:41.722786 kubelet[1780]: I0113 20:17:41.722720    1780 policy_none.go:49] "None policy: Start"
Jan 13 20:17:41.723663 kubelet[1780]: I0113 20:17:41.723638    1780 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:17:41.723740 kubelet[1780]: I0113 20:17:41.723672    1780 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:17:41.732022 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:17:41.734586 kubelet[1780]: I0113 20:17:41.734551    1780 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.86"
Jan 13 20:17:41.740249 kubelet[1780]: I0113 20:17:41.740220    1780 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.86"
Jan 13 20:17:41.747177 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 20:17:41.747394 kubelet[1780]: I0113 20:17:41.747309    1780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:17:41.748538 kubelet[1780]: I0113 20:17:41.748503    1780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:17:41.748945 kubelet[1780]: I0113 20:17:41.748731    1780 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:17:41.748945 kubelet[1780]: I0113 20:17:41.748756    1780 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 13 20:17:41.748945 kubelet[1780]: E0113 20:17:41.748776    1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.86\" not found"
Jan 13 20:17:41.748945 kubelet[1780]: E0113 20:17:41.748813    1780 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:17:41.751887 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 20:17:41.763355 kubelet[1780]: I0113 20:17:41.763309    1780 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:17:41.763875 kubelet[1780]: I0113 20:17:41.763812    1780 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 20:17:41.764311 kubelet[1780]: I0113 20:17:41.764294    1780 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:17:41.765227 kubelet[1780]: E0113 20:17:41.765139    1780 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.86\" not found"
Jan 13 20:17:41.843134 sudo[1647]: pam_unix(sudo:session): session closed for user root
Jan 13 20:17:41.845186 sshd[1646]: Connection closed by 10.0.0.1 port 55486
Jan 13 20:17:41.844862 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
Jan 13 20:17:41.847586 systemd[1]: sshd@6-10.0.0.86:22-10.0.0.1:55486.service: Deactivated successfully.
Jan 13 20:17:41.848965 kubelet[1780]: E0113 20:17:41.848935    1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.86\" not found"
Jan 13 20:17:41.849048 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:17:41.850208 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:17:41.851049 systemd-logind[1452]: Removed session 7.
Jan 13 20:17:41.949355 kubelet[1780]: E0113 20:17:41.949284    1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.86\" not found"
Jan 13 20:17:42.049738 kubelet[1780]: E0113 20:17:42.049704    1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.86\" not found"
Jan 13 20:17:42.150425 kubelet[1780]: E0113 20:17:42.150340    1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.86\" not found"
Jan 13 20:17:42.250756 kubelet[1780]: E0113 20:17:42.250732    1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.86\" not found"
Jan 13 20:17:42.351205 kubelet[1780]: E0113 20:17:42.351178    1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.86\" not found"
Jan 13 20:17:42.451752 kubelet[1780]: E0113 20:17:42.451670    1780 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.86\" not found"
Jan 13 20:17:42.553258 kubelet[1780]: I0113 20:17:42.553231    1780 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 13 20:17:42.553683 containerd[1473]: time="2025-01-13T20:17:42.553633912Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 20:17:42.553969 kubelet[1780]: I0113 20:17:42.553841    1780 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 13 20:17:42.586532 kubelet[1780]: I0113 20:17:42.586494    1780 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 13 20:17:42.586848 kubelet[1780]: W0113 20:17:42.586646    1780 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 20:17:42.586848 kubelet[1780]: W0113 20:17:42.586667    1780 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 20:17:42.586848 kubelet[1780]: W0113 20:17:42.586679    1780 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 20:17:42.620851 kubelet[1780]: E0113 20:17:42.620801    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:42.620851 kubelet[1780]: I0113 20:17:42.620820    1780 apiserver.go:52] "Watching apiserver"
Jan 13 20:17:42.632440 kubelet[1780]: I0113 20:17:42.632388    1780 topology_manager.go:215] "Topology Admit Handler" podUID="eab2cdb3-5447-43d1-bc99-43ec8a82ecc5" podNamespace="kube-system" podName="kube-proxy-qmn8m"
Jan 13 20:17:42.634802 kubelet[1780]: I0113 20:17:42.634771    1780 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 13 20:17:42.637883 systemd[1]: Created slice kubepods-besteffort-podeab2cdb3_5447_43d1_bc99_43ec8a82ecc5.slice - libcontainer container kubepods-besteffort-podeab2cdb3_5447_43d1_bc99_43ec8a82ecc5.slice.
Jan 13 20:17:42.638612 kubelet[1780]: I0113 20:17:42.638586    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eab2cdb3-5447-43d1-bc99-43ec8a82ecc5-kube-proxy\") pod \"kube-proxy-qmn8m\" (UID: \"eab2cdb3-5447-43d1-bc99-43ec8a82ecc5\") " pod="kube-system/kube-proxy-qmn8m"
Jan 13 20:17:42.638670 kubelet[1780]: I0113 20:17:42.638616    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eab2cdb3-5447-43d1-bc99-43ec8a82ecc5-xtables-lock\") pod \"kube-proxy-qmn8m\" (UID: \"eab2cdb3-5447-43d1-bc99-43ec8a82ecc5\") " pod="kube-system/kube-proxy-qmn8m"
Jan 13 20:17:42.638670 kubelet[1780]: I0113 20:17:42.638635    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eab2cdb3-5447-43d1-bc99-43ec8a82ecc5-lib-modules\") pod \"kube-proxy-qmn8m\" (UID: \"eab2cdb3-5447-43d1-bc99-43ec8a82ecc5\") " pod="kube-system/kube-proxy-qmn8m"
Jan 13 20:17:42.638670 kubelet[1780]: I0113 20:17:42.638650    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb9qt\" (UniqueName: \"kubernetes.io/projected/eab2cdb3-5447-43d1-bc99-43ec8a82ecc5-kube-api-access-zb9qt\") pod \"kube-proxy-qmn8m\" (UID: \"eab2cdb3-5447-43d1-bc99-43ec8a82ecc5\") " pod="kube-system/kube-proxy-qmn8m"
Jan 13 20:17:42.947851 kubelet[1780]: E0113 20:17:42.947766    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:42.948870 containerd[1473]: time="2025-01-13T20:17:42.948805702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qmn8m,Uid:eab2cdb3-5447-43d1-bc99-43ec8a82ecc5,Namespace:kube-system,Attempt:0,}"
Jan 13 20:17:43.535253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733446796.mount: Deactivated successfully.
Jan 13 20:17:43.557729 containerd[1473]: time="2025-01-13T20:17:43.557427741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:17:43.558582 containerd[1473]: time="2025-01-13T20:17:43.558343136Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 13 20:17:43.559454 containerd[1473]: time="2025-01-13T20:17:43.559413241Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:17:43.561979 containerd[1473]: time="2025-01-13T20:17:43.561939307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:17:43.562806 containerd[1473]: time="2025-01-13T20:17:43.562780024Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 613.900368ms"
Jan 13 20:17:43.621815 kubelet[1780]: E0113 20:17:43.621764    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:43.646477 containerd[1473]: time="2025-01-13T20:17:43.646029769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:17:43.646627 containerd[1473]: time="2025-01-13T20:17:43.646447910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:17:43.646627 containerd[1473]: time="2025-01-13T20:17:43.646468607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:17:43.646627 containerd[1473]: time="2025-01-13T20:17:43.646555512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:17:43.714575 systemd[1]: Started cri-containerd-4f80f75739cfecc42e935cad93e136b9db5ca31092b395a71866a8ed4cc7acaa.scope - libcontainer container 4f80f75739cfecc42e935cad93e136b9db5ca31092b395a71866a8ed4cc7acaa.
Jan 13 20:17:43.733266 containerd[1473]: time="2025-01-13T20:17:43.733227140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qmn8m,Uid:eab2cdb3-5447-43d1-bc99-43ec8a82ecc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f80f75739cfecc42e935cad93e136b9db5ca31092b395a71866a8ed4cc7acaa\""
Jan 13 20:17:43.734390 kubelet[1780]: E0113 20:17:43.734368    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:43.735499 containerd[1473]: time="2025-01-13T20:17:43.735455853Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Jan 13 20:17:44.622886 kubelet[1780]: E0113 20:17:44.622844    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:44.726824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3096247435.mount: Deactivated successfully.
Jan 13 20:17:44.936291 containerd[1473]: time="2025-01-13T20:17:44.936168173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:17:44.936838 containerd[1473]: time="2025-01-13T20:17:44.936805717Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662013"
Jan 13 20:17:44.937702 containerd[1473]: time="2025-01-13T20:17:44.937662116Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:17:44.940461 containerd[1473]: time="2025-01-13T20:17:44.940422994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:17:44.941194 containerd[1473]: time="2025-01-13T20:17:44.941139976Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.205628702s"
Jan 13 20:17:44.941194 containerd[1473]: time="2025-01-13T20:17:44.941179535Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\""
Jan 13 20:17:44.943648 containerd[1473]: time="2025-01-13T20:17:44.943613709Z" level=info msg="CreateContainer within sandbox \"4f80f75739cfecc42e935cad93e136b9db5ca31092b395a71866a8ed4cc7acaa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 20:17:44.955086 containerd[1473]: time="2025-01-13T20:17:44.954954555Z" level=info msg="CreateContainer within sandbox \"4f80f75739cfecc42e935cad93e136b9db5ca31092b395a71866a8ed4cc7acaa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"03d9d914268dfa179b351b00dd3e24d88a291d7b1732ef67974de985cbc09daf\""
Jan 13 20:17:44.955687 containerd[1473]: time="2025-01-13T20:17:44.955618472Z" level=info msg="StartContainer for \"03d9d914268dfa179b351b00dd3e24d88a291d7b1732ef67974de985cbc09daf\""
Jan 13 20:17:44.980488 systemd[1]: Started cri-containerd-03d9d914268dfa179b351b00dd3e24d88a291d7b1732ef67974de985cbc09daf.scope - libcontainer container 03d9d914268dfa179b351b00dd3e24d88a291d7b1732ef67974de985cbc09daf.
Jan 13 20:17:45.008559 containerd[1473]: time="2025-01-13T20:17:45.008514845Z" level=info msg="StartContainer for \"03d9d914268dfa179b351b00dd3e24d88a291d7b1732ef67974de985cbc09daf\" returns successfully"
Jan 13 20:17:45.623809 kubelet[1780]: E0113 20:17:45.623766    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:45.757203 kubelet[1780]: E0113 20:17:45.757165    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:46.624104 kubelet[1780]: E0113 20:17:46.624071    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:46.758413 kubelet[1780]: E0113 20:17:46.758358    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:47.625064 kubelet[1780]: E0113 20:17:47.625020    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:48.625144 kubelet[1780]: E0113 20:17:48.625103    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:49.304486 kubelet[1780]: I0113 20:17:49.304413    1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qmn8m" podStartSLOduration=7.097399147 podStartE2EDuration="8.304393611s" podCreationTimestamp="2025-01-13 20:17:41 +0000 UTC" firstStartedPulling="2025-01-13 20:17:43.735016535 +0000 UTC m=+3.406164874" lastFinishedPulling="2025-01-13 20:17:44.942010959 +0000 UTC m=+4.613159338" observedRunningTime="2025-01-13 20:17:45.766035056 +0000 UTC m=+5.437183435" watchObservedRunningTime="2025-01-13 20:17:49.304393611 +0000 UTC m=+8.975541990"
Jan 13 20:17:49.304655 kubelet[1780]: I0113 20:17:49.304588    1780 topology_manager.go:215] "Topology Admit Handler" podUID="68ad8c4a-cf0c-4c97-9560-ddda3f223949" podNamespace="calico-system" podName="calico-typha-69bdfc6695-5vcnm"
Jan 13 20:17:49.309480 systemd[1]: Created slice kubepods-besteffort-pod68ad8c4a_cf0c_4c97_9560_ddda3f223949.slice - libcontainer container kubepods-besteffort-pod68ad8c4a_cf0c_4c97_9560_ddda3f223949.slice.
Jan 13 20:17:49.358309 kubelet[1780]: I0113 20:17:49.358259    1780 topology_manager.go:215] "Topology Admit Handler" podUID="7a24cf25-811a-4670-8401-a2d8815cdc3e" podNamespace="calico-system" podName="calico-node-ntvmq"
Jan 13 20:17:49.363166 systemd[1]: Created slice kubepods-besteffort-pod7a24cf25_811a_4670_8401_a2d8815cdc3e.slice - libcontainer container kubepods-besteffort-pod7a24cf25_811a_4670_8401_a2d8815cdc3e.slice.
Jan 13 20:17:49.376722 kubelet[1780]: I0113 20:17:49.376675    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a24cf25-811a-4670-8401-a2d8815cdc3e-xtables-lock\") pod \"calico-node-ntvmq\" (UID: \"7a24cf25-811a-4670-8401-a2d8815cdc3e\") " pod="calico-system/calico-node-ntvmq"
Jan 13 20:17:49.376722 kubelet[1780]: I0113 20:17:49.376719    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7a24cf25-811a-4670-8401-a2d8815cdc3e-cni-bin-dir\") pod \"calico-node-ntvmq\" (UID: \"7a24cf25-811a-4670-8401-a2d8815cdc3e\") " pod="calico-system/calico-node-ntvmq"
Jan 13 20:17:49.376865 kubelet[1780]: I0113 20:17:49.376739    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7a24cf25-811a-4670-8401-a2d8815cdc3e-flexvol-driver-host\") pod \"calico-node-ntvmq\" (UID: \"7a24cf25-811a-4670-8401-a2d8815cdc3e\") " pod="calico-system/calico-node-ntvmq"
Jan 13 20:17:49.376865 kubelet[1780]: I0113 20:17:49.376757    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47v2k\" (UniqueName: \"kubernetes.io/projected/68ad8c4a-cf0c-4c97-9560-ddda3f223949-kube-api-access-47v2k\") pod \"calico-typha-69bdfc6695-5vcnm\" (UID: \"68ad8c4a-cf0c-4c97-9560-ddda3f223949\") " pod="calico-system/calico-typha-69bdfc6695-5vcnm"
Jan 13 20:17:49.376865 kubelet[1780]: I0113 20:17:49.376777    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7a24cf25-811a-4670-8401-a2d8815cdc3e-policysync\") pod \"calico-node-ntvmq\" (UID: \"7a24cf25-811a-4670-8401-a2d8815cdc3e\") " pod="calico-system/calico-node-ntvmq"
Jan 13 20:17:49.376865 kubelet[1780]: I0113 20:17:49.376792    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a24cf25-811a-4670-8401-a2d8815cdc3e-var-lib-calico\") pod \"calico-node-ntvmq\" (UID: \"7a24cf25-811a-4670-8401-a2d8815cdc3e\") " pod="calico-system/calico-node-ntvmq"
Jan 13 20:17:49.376865 kubelet[1780]: I0113 20:17:49.376809    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7a24cf25-811a-4670-8401-a2d8815cdc3e-cni-net-dir\") pod \"calico-node-ntvmq\" (UID: \"7a24cf25-811a-4670-8401-a2d8815cdc3e\") " pod="calico-system/calico-node-ntvmq"
Jan 13 20:17:49.376971 kubelet[1780]: I0113 20:17:49.376823    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7a24cf25-811a-4670-8401-a2d8815cdc3e-node-certs\") pod \"calico-node-ntvmq\" (UID: \"7a24cf25-811a-4670-8401-a2d8815cdc3e\") " pod="calico-system/calico-node-ntvmq"
Jan 13 20:17:49.376971 kubelet[1780]: I0113 20:17:49.376839    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68ad8c4a-cf0c-4c97-9560-ddda3f223949-tigera-ca-bundle\") pod \"calico-typha-69bdfc6695-5vcnm\" (UID: \"68ad8c4a-cf0c-4c97-9560-ddda3f223949\") " pod="calico-system/calico-typha-69bdfc6695-5vcnm"
Jan 13 20:17:49.376971 kubelet[1780]: I0113 20:17:49.376853    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a24cf25-811a-4670-8401-a2d8815cdc3e-lib-modules\") pod \"calico-node-ntvmq\" (UID: \"7a24cf25-811a-4670-8401-a2d8815cdc3e\") " pod="calico-system/calico-node-ntvmq"
Jan 13 20:17:49.376971 kubelet[1780]: I0113 20:17:49.376868    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a24cf25-811a-4670-8401-a2d8815cdc3e-tigera-ca-bundle\") pod \"calico-node-ntvmq\" (UID: \"7a24cf25-811a-4670-8401-a2d8815cdc3e\") " pod="calico-system/calico-node-ntvmq"
Jan 13 20:17:49.376971 kubelet[1780]: I0113 20:17:49.376882    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7a24cf25-811a-4670-8401-a2d8815cdc3e-var-run-calico\") pod \"calico-node-ntvmq\" (UID: \"7a24cf25-811a-4670-8401-a2d8815cdc3e\") " pod="calico-system/calico-node-ntvmq"
Jan 13 20:17:49.377079 kubelet[1780]: I0113 20:17:49.376900    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7a24cf25-811a-4670-8401-a2d8815cdc3e-cni-log-dir\") pod \"calico-node-ntvmq\" (UID: \"7a24cf25-811a-4670-8401-a2d8815cdc3e\") " pod="calico-system/calico-node-ntvmq"
Jan 13 20:17:49.377079 kubelet[1780]: I0113 20:17:49.376914    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vln7\" (UniqueName: \"kubernetes.io/projected/7a24cf25-811a-4670-8401-a2d8815cdc3e-kube-api-access-4vln7\") pod \"calico-node-ntvmq\" (UID: \"7a24cf25-811a-4670-8401-a2d8815cdc3e\") " pod="calico-system/calico-node-ntvmq"
Jan 13 20:17:49.377079 kubelet[1780]: I0113 20:17:49.376930    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/68ad8c4a-cf0c-4c97-9560-ddda3f223949-typha-certs\") pod \"calico-typha-69bdfc6695-5vcnm\" (UID: \"68ad8c4a-cf0c-4c97-9560-ddda3f223949\") " pod="calico-system/calico-typha-69bdfc6695-5vcnm"
Jan 13 20:17:49.470164 kubelet[1780]: I0113 20:17:49.470108    1780 topology_manager.go:215] "Topology Admit Handler" podUID="9744204a-04ef-4999-88e2-3d074458261a" podNamespace="calico-system" podName="csi-node-driver-4sxhx"
Jan 13 20:17:49.470399 kubelet[1780]: E0113 20:17:49.470370    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4sxhx" podUID="9744204a-04ef-4999-88e2-3d074458261a"
Jan 13 20:17:49.478414 kubelet[1780]: I0113 20:17:49.477520    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9744204a-04ef-4999-88e2-3d074458261a-registration-dir\") pod \"csi-node-driver-4sxhx\" (UID: \"9744204a-04ef-4999-88e2-3d074458261a\") " pod="calico-system/csi-node-driver-4sxhx"
Jan 13 20:17:49.478414 kubelet[1780]: I0113 20:17:49.477595    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9744204a-04ef-4999-88e2-3d074458261a-kubelet-dir\") pod \"csi-node-driver-4sxhx\" (UID: \"9744204a-04ef-4999-88e2-3d074458261a\") " pod="calico-system/csi-node-driver-4sxhx"
Jan 13 20:17:49.478414 kubelet[1780]: I0113 20:17:49.477640    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd52f\" (UniqueName: \"kubernetes.io/projected/9744204a-04ef-4999-88e2-3d074458261a-kube-api-access-cd52f\") pod \"csi-node-driver-4sxhx\" (UID: \"9744204a-04ef-4999-88e2-3d074458261a\") " pod="calico-system/csi-node-driver-4sxhx"
Jan 13 20:17:49.478414 kubelet[1780]: I0113 20:17:49.477696    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9744204a-04ef-4999-88e2-3d074458261a-varrun\") pod \"csi-node-driver-4sxhx\" (UID: \"9744204a-04ef-4999-88e2-3d074458261a\") " pod="calico-system/csi-node-driver-4sxhx"
Jan 13 20:17:49.478414 kubelet[1780]: I0113 20:17:49.477733    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9744204a-04ef-4999-88e2-3d074458261a-socket-dir\") pod \"csi-node-driver-4sxhx\" (UID: \"9744204a-04ef-4999-88e2-3d074458261a\") " pod="calico-system/csi-node-driver-4sxhx"
Jan 13 20:17:49.478809 kubelet[1780]: E0113 20:17:49.478766    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.478809 kubelet[1780]: W0113 20:17:49.478787    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.478809 kubelet[1780]: E0113 20:17:49.478802    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.479099 kubelet[1780]: E0113 20:17:49.478951    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.479099 kubelet[1780]: W0113 20:17:49.478959    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.479099 kubelet[1780]: E0113 20:17:49.478977    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.479171 kubelet[1780]: E0113 20:17:49.479142    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.479171 kubelet[1780]: W0113 20:17:49.479151    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.479171 kubelet[1780]: E0113 20:17:49.479167    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.479546 kubelet[1780]: E0113 20:17:49.479512    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.479546 kubelet[1780]: W0113 20:17:49.479529    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.479621 kubelet[1780]: E0113 20:17:49.479551    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.480352 kubelet[1780]: E0113 20:17:49.480069    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.480352 kubelet[1780]: W0113 20:17:49.480087    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.480352 kubelet[1780]: E0113 20:17:49.480100    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.480464 kubelet[1780]: E0113 20:17:49.480359    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.480464 kubelet[1780]: W0113 20:17:49.480369    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.480464 kubelet[1780]: E0113 20:17:49.480383    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.481295 kubelet[1780]: E0113 20:17:49.480551    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.481295 kubelet[1780]: W0113 20:17:49.480563    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.481295 kubelet[1780]: E0113 20:17:49.480576    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.481295 kubelet[1780]: E0113 20:17:49.480788    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.481295 kubelet[1780]: W0113 20:17:49.480803    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.481295 kubelet[1780]: E0113 20:17:49.480817    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.481295 kubelet[1780]: E0113 20:17:49.481261    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.481295 kubelet[1780]: W0113 20:17:49.481274    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.481784 kubelet[1780]: E0113 20:17:49.481290    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.481902 kubelet[1780]: E0113 20:17:49.481853    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.481902 kubelet[1780]: W0113 20:17:49.481867    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.481991 kubelet[1780]: E0113 20:17:49.481974    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.482621 kubelet[1780]: E0113 20:17:49.482446    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.482621 kubelet[1780]: W0113 20:17:49.482461    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.482621 kubelet[1780]: E0113 20:17:49.482538    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.482732 kubelet[1780]: E0113 20:17:49.482712    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.482732 kubelet[1780]: W0113 20:17:49.482721    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.482777 kubelet[1780]: E0113 20:17:49.482730    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.483300 kubelet[1780]: E0113 20:17:49.483245    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.483300 kubelet[1780]: W0113 20:17:49.483258    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.483300 kubelet[1780]: E0113 20:17:49.483270    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.488527 kubelet[1780]: E0113 20:17:49.488502    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.488527 kubelet[1780]: W0113 20:17:49.488524    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.488616 kubelet[1780]: E0113 20:17:49.488539    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.491617 kubelet[1780]: E0113 20:17:49.491594    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.491617 kubelet[1780]: W0113 20:17:49.491611    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.491690 kubelet[1780]: E0113 20:17:49.491624    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.580137 kubelet[1780]: E0113 20:17:49.580029    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.580137 kubelet[1780]: W0113 20:17:49.580053    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.580137 kubelet[1780]: E0113 20:17:49.580077    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.580385 kubelet[1780]: E0113 20:17:49.580363    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.580385 kubelet[1780]: W0113 20:17:49.580380    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.580433 kubelet[1780]: E0113 20:17:49.580399    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.581233 kubelet[1780]: E0113 20:17:49.581204    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.581233 kubelet[1780]: W0113 20:17:49.581223    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.581233 kubelet[1780]: E0113 20:17:49.581240    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.582304 kubelet[1780]: E0113 20:17:49.582277    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.582304 kubelet[1780]: W0113 20:17:49.582293    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.582304 kubelet[1780]: E0113 20:17:49.582310    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.582538 kubelet[1780]: E0113 20:17:49.582523    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.582538 kubelet[1780]: W0113 20:17:49.582537    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.582612 kubelet[1780]: E0113 20:17:49.582593    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.582739 kubelet[1780]: E0113 20:17:49.582725    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.582764 kubelet[1780]: W0113 20:17:49.582739    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.582785 kubelet[1780]: E0113 20:17:49.582771    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.582976 kubelet[1780]: E0113 20:17:49.582958    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.582976 kubelet[1780]: W0113 20:17:49.582974    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.583034 kubelet[1780]: E0113 20:17:49.583012    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.584406 kubelet[1780]: E0113 20:17:49.584378    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.584454 kubelet[1780]: W0113 20:17:49.584407    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.584454 kubelet[1780]: E0113 20:17:49.584425    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.584699 kubelet[1780]: E0113 20:17:49.584672    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.584699 kubelet[1780]: W0113 20:17:49.584691    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.584763 kubelet[1780]: E0113 20:17:49.584710    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.584959 kubelet[1780]: E0113 20:17:49.584931    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.584959 kubelet[1780]: W0113 20:17:49.584948    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.585017 kubelet[1780]: E0113 20:17:49.584964    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.585207 kubelet[1780]: E0113 20:17:49.585181    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.585207 kubelet[1780]: W0113 20:17:49.585199    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.585260 kubelet[1780]: E0113 20:17:49.585227    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.586621 kubelet[1780]: E0113 20:17:49.586591    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.586621 kubelet[1780]: W0113 20:17:49.586610    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.586677 kubelet[1780]: E0113 20:17:49.586650    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.586879 kubelet[1780]: E0113 20:17:49.586853    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.586879 kubelet[1780]: W0113 20:17:49.586871    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.586941 kubelet[1780]: E0113 20:17:49.586906    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.587100 kubelet[1780]: E0113 20:17:49.587085    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.587100 kubelet[1780]: W0113 20:17:49.587098    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.587153 kubelet[1780]: E0113 20:17:49.587143    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.587292 kubelet[1780]: E0113 20:17:49.587268    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.587292 kubelet[1780]: W0113 20:17:49.587280    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.587354 kubelet[1780]: E0113 20:17:49.587301    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.587517 kubelet[1780]: E0113 20:17:49.587495    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.587517 kubelet[1780]: W0113 20:17:49.587514    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.587566 kubelet[1780]: E0113 20:17:49.587529    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.587760 kubelet[1780]: E0113 20:17:49.587745    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.587760 kubelet[1780]: W0113 20:17:49.587758    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.587815 kubelet[1780]: E0113 20:17:49.587771    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.587970 kubelet[1780]: E0113 20:17:49.587956    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.587970 kubelet[1780]: W0113 20:17:49.587970    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.588019 kubelet[1780]: E0113 20:17:49.587982    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.588164 kubelet[1780]: E0113 20:17:49.588152    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.588164 kubelet[1780]: W0113 20:17:49.588163    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.588213 kubelet[1780]: E0113 20:17:49.588176    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.588366 kubelet[1780]: E0113 20:17:49.588353    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.588366 kubelet[1780]: W0113 20:17:49.588365    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.588424 kubelet[1780]: E0113 20:17:49.588385    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.588662 kubelet[1780]: E0113 20:17:49.588633    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.588662 kubelet[1780]: W0113 20:17:49.588649    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.588720 kubelet[1780]: E0113 20:17:49.588677    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.588873 kubelet[1780]: E0113 20:17:49.588861    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.588901 kubelet[1780]: W0113 20:17:49.588874    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.588922 kubelet[1780]: E0113 20:17:49.588914    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.589343 kubelet[1780]: E0113 20:17:49.589315    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.589343 kubelet[1780]: W0113 20:17:49.589341    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.589406 kubelet[1780]: E0113 20:17:49.589355    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.589525 kubelet[1780]: E0113 20:17:49.589511    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.589549 kubelet[1780]: W0113 20:17:49.589523    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.589549 kubelet[1780]: E0113 20:17:49.589532    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.589734 kubelet[1780]: E0113 20:17:49.589720    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.589734 kubelet[1780]: W0113 20:17:49.589732    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.589792 kubelet[1780]: E0113 20:17:49.589741    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.598858 kubelet[1780]: E0113 20:17:49.598829    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.598858 kubelet[1780]: W0113 20:17:49.598846    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.598858 kubelet[1780]: E0113 20:17:49.598857    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.612070 kubelet[1780]: E0113 20:17:49.612036    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:49.612521 containerd[1473]: time="2025-01-13T20:17:49.612473892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69bdfc6695-5vcnm,Uid:68ad8c4a-cf0c-4c97-9560-ddda3f223949,Namespace:calico-system,Attempt:0,}"
Jan 13 20:17:49.626062 kubelet[1780]: E0113 20:17:49.626025    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:49.634354 containerd[1473]: time="2025-01-13T20:17:49.634106850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:17:49.634987 containerd[1473]: time="2025-01-13T20:17:49.634794577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:17:49.634987 containerd[1473]: time="2025-01-13T20:17:49.634842062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:17:49.634987 containerd[1473]: time="2025-01-13T20:17:49.634946344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:17:49.661535 systemd[1]: Started cri-containerd-9bb0b2ba4ab88e47941dabc78b4496f67d67c60911ec59a95b1b3f141caa411a.scope - libcontainer container 9bb0b2ba4ab88e47941dabc78b4496f67d67c60911ec59a95b1b3f141caa411a.
Jan 13 20:17:49.665430 kubelet[1780]: E0113 20:17:49.665390    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:49.666253 containerd[1473]: time="2025-01-13T20:17:49.666195735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ntvmq,Uid:7a24cf25-811a-4670-8401-a2d8815cdc3e,Namespace:calico-system,Attempt:0,}"
Jan 13 20:17:49.685718 containerd[1473]: time="2025-01-13T20:17:49.685619899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:17:49.685718 containerd[1473]: time="2025-01-13T20:17:49.685675298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:17:49.685908 containerd[1473]: time="2025-01-13T20:17:49.685698401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:17:49.686250 containerd[1473]: time="2025-01-13T20:17:49.686208860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:17:49.688432 containerd[1473]: time="2025-01-13T20:17:49.688373847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69bdfc6695-5vcnm,Uid:68ad8c4a-cf0c-4c97-9560-ddda3f223949,Namespace:calico-system,Attempt:0,} returns sandbox id \"9bb0b2ba4ab88e47941dabc78b4496f67d67c60911ec59a95b1b3f141caa411a\""
Jan 13 20:17:49.689405 kubelet[1780]: E0113 20:17:49.689164    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:49.690126 containerd[1473]: time="2025-01-13T20:17:49.690097362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 13 20:17:49.706492 systemd[1]: Started cri-containerd-d2e68df0d417917a66511af7aeef258df57c36d2995ba44b9a46edddb5ba1a73.scope - libcontainer container d2e68df0d417917a66511af7aeef258df57c36d2995ba44b9a46edddb5ba1a73.
Jan 13 20:17:49.725623 containerd[1473]: time="2025-01-13T20:17:49.725583956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ntvmq,Uid:7a24cf25-811a-4670-8401-a2d8815cdc3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2e68df0d417917a66511af7aeef258df57c36d2995ba44b9a46edddb5ba1a73\""
Jan 13 20:17:49.726254 kubelet[1780]: E0113 20:17:49.726216    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:50.626416 kubelet[1780]: E0113 20:17:50.626367    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:50.665293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount960563015.mount: Deactivated successfully.
Jan 13 20:17:50.749267 kubelet[1780]: E0113 20:17:50.749211    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4sxhx" podUID="9744204a-04ef-4999-88e2-3d074458261a"
Jan 13 20:17:51.093367 containerd[1473]: time="2025-01-13T20:17:51.093296596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:17:51.093822 containerd[1473]: time="2025-01-13T20:17:51.093774003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Jan 13 20:17:51.094463 containerd[1473]: time="2025-01-13T20:17:51.094437009Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:17:51.096290 containerd[1473]: time="2025-01-13T20:17:51.096260695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:17:51.097733 containerd[1473]: time="2025-01-13T20:17:51.097701431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.407434234s"
Jan 13 20:17:51.097781 containerd[1473]: time="2025-01-13T20:17:51.097732171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 13 20:17:51.098659 containerd[1473]: time="2025-01-13T20:17:51.098496350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 20:17:51.104348 containerd[1473]: time="2025-01-13T20:17:51.104291955Z" level=info msg="CreateContainer within sandbox \"9bb0b2ba4ab88e47941dabc78b4496f67d67c60911ec59a95b1b3f141caa411a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 20:17:51.116081 containerd[1473]: time="2025-01-13T20:17:51.116027708Z" level=info msg="CreateContainer within sandbox \"9bb0b2ba4ab88e47941dabc78b4496f67d67c60911ec59a95b1b3f141caa411a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0f28f0e1818b8745a0435d0b01bec920df1c41b0e4df90d8aa2fa2a34912fd3e\""
Jan 13 20:17:51.116514 containerd[1473]: time="2025-01-13T20:17:51.116483610Z" level=info msg="StartContainer for \"0f28f0e1818b8745a0435d0b01bec920df1c41b0e4df90d8aa2fa2a34912fd3e\""
Jan 13 20:17:51.142482 systemd[1]: Started cri-containerd-0f28f0e1818b8745a0435d0b01bec920df1c41b0e4df90d8aa2fa2a34912fd3e.scope - libcontainer container 0f28f0e1818b8745a0435d0b01bec920df1c41b0e4df90d8aa2fa2a34912fd3e.
Jan 13 20:17:51.170887 containerd[1473]: time="2025-01-13T20:17:51.170846285Z" level=info msg="StartContainer for \"0f28f0e1818b8745a0435d0b01bec920df1c41b0e4df90d8aa2fa2a34912fd3e\" returns successfully"
Jan 13 20:17:51.627440 kubelet[1780]: E0113 20:17:51.627389    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:51.768339 kubelet[1780]: E0113 20:17:51.768301    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:51.788890 kubelet[1780]: E0113 20:17:51.788853    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.788890 kubelet[1780]: W0113 20:17:51.788874    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.788890 kubelet[1780]: E0113 20:17:51.788889    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.789104 kubelet[1780]: E0113 20:17:51.789084    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.789104 kubelet[1780]: W0113 20:17:51.789095    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.789104 kubelet[1780]: E0113 20:17:51.789103    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.789254 kubelet[1780]: E0113 20:17:51.789238    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.789254 kubelet[1780]: W0113 20:17:51.789248    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.789303 kubelet[1780]: E0113 20:17:51.789255    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.789418 kubelet[1780]: E0113 20:17:51.789401    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.789418 kubelet[1780]: W0113 20:17:51.789412    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.789473 kubelet[1780]: E0113 20:17:51.789420    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.789587 kubelet[1780]: E0113 20:17:51.789570    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.789587 kubelet[1780]: W0113 20:17:51.789580    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.789632 kubelet[1780]: E0113 20:17:51.789588    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.789719 kubelet[1780]: E0113 20:17:51.789710    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.789740 kubelet[1780]: W0113 20:17:51.789719    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.789740 kubelet[1780]: E0113 20:17:51.789726    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.789860 kubelet[1780]: E0113 20:17:51.789852    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.789884 kubelet[1780]: W0113 20:17:51.789860    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.789884 kubelet[1780]: E0113 20:17:51.789867    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.789997 kubelet[1780]: E0113 20:17:51.789988    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.790025 kubelet[1780]: W0113 20:17:51.789997    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.790025 kubelet[1780]: E0113 20:17:51.790004    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.790140 kubelet[1780]: E0113 20:17:51.790131    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.790164 kubelet[1780]: W0113 20:17:51.790141    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.790164 kubelet[1780]: E0113 20:17:51.790148    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.790284 kubelet[1780]: E0113 20:17:51.790275    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.790311 kubelet[1780]: W0113 20:17:51.790284    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.790311 kubelet[1780]: E0113 20:17:51.790292    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.790433 kubelet[1780]: E0113 20:17:51.790423    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.790433 kubelet[1780]: W0113 20:17:51.790432    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.790478 kubelet[1780]: E0113 20:17:51.790439    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.790568 kubelet[1780]: E0113 20:17:51.790559    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.790592 kubelet[1780]: W0113 20:17:51.790568    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.790592 kubelet[1780]: E0113 20:17:51.790575    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.790746 kubelet[1780]: E0113 20:17:51.790723    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.790746 kubelet[1780]: W0113 20:17:51.790736    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.790746 kubelet[1780]: E0113 20:17:51.790744    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.790874 kubelet[1780]: E0113 20:17:51.790865    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.790898 kubelet[1780]: W0113 20:17:51.790874    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.790898 kubelet[1780]: E0113 20:17:51.790881    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.791010 kubelet[1780]: E0113 20:17:51.791001    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.791035 kubelet[1780]: W0113 20:17:51.791010    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.791035 kubelet[1780]: E0113 20:17:51.791017    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.799381 kubelet[1780]: E0113 20:17:51.799355    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.799381 kubelet[1780]: W0113 20:17:51.799369    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.799441 kubelet[1780]: E0113 20:17:51.799380    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.799581 kubelet[1780]: E0113 20:17:51.799559    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.799581 kubelet[1780]: W0113 20:17:51.799571    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.799626 kubelet[1780]: E0113 20:17:51.799584    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.799757 kubelet[1780]: E0113 20:17:51.799738    1780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:51.799757 kubelet[1780]: W0113 20:17:51.799750    1780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:51.799802 kubelet[1780]: E0113 20:17:51.799762    1780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:51.973676 containerd[1473]: time="2025-01-13T20:17:51.973566022Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:17:51.974442 containerd[1473]: time="2025-01-13T20:17:51.974252052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811"
Jan 13 20:17:51.976261 containerd[1473]: time="2025-01-13T20:17:51.975886222Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:17:51.978036 containerd[1473]: time="2025-01-13T20:17:51.977979451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:17:51.978668 containerd[1473]: time="2025-01-13T20:17:51.978610198Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 880.08031ms"
Jan 13 20:17:51.978668 containerd[1473]: time="2025-01-13T20:17:51.978645095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Jan 13 20:17:51.980633 containerd[1473]: time="2025-01-13T20:17:51.980608689Z" level=info msg="CreateContainer within sandbox \"d2e68df0d417917a66511af7aeef258df57c36d2995ba44b9a46edddb5ba1a73\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 13 20:17:51.989898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2366231250.mount: Deactivated successfully.
Jan 13 20:17:51.993611 containerd[1473]: time="2025-01-13T20:17:51.993578355Z" level=info msg="CreateContainer within sandbox \"d2e68df0d417917a66511af7aeef258df57c36d2995ba44b9a46edddb5ba1a73\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"86ed43cd15d9dc39f1afdbc5b52a6f434b0a72f830385e400ffd85e6c19cbe62\""
Jan 13 20:17:51.994342 containerd[1473]: time="2025-01-13T20:17:51.993944315Z" level=info msg="StartContainer for \"86ed43cd15d9dc39f1afdbc5b52a6f434b0a72f830385e400ffd85e6c19cbe62\""
Jan 13 20:17:52.016479 systemd[1]: Started cri-containerd-86ed43cd15d9dc39f1afdbc5b52a6f434b0a72f830385e400ffd85e6c19cbe62.scope - libcontainer container 86ed43cd15d9dc39f1afdbc5b52a6f434b0a72f830385e400ffd85e6c19cbe62.
Jan 13 20:17:52.044705 containerd[1473]: time="2025-01-13T20:17:52.044662433Z" level=info msg="StartContainer for \"86ed43cd15d9dc39f1afdbc5b52a6f434b0a72f830385e400ffd85e6c19cbe62\" returns successfully"
Jan 13 20:17:52.061844 systemd[1]: cri-containerd-86ed43cd15d9dc39f1afdbc5b52a6f434b0a72f830385e400ffd85e6c19cbe62.scope: Deactivated successfully.
Jan 13 20:17:52.183410 containerd[1473]: time="2025-01-13T20:17:52.183352199Z" level=info msg="shim disconnected" id=86ed43cd15d9dc39f1afdbc5b52a6f434b0a72f830385e400ffd85e6c19cbe62 namespace=k8s.io
Jan 13 20:17:52.183410 containerd[1473]: time="2025-01-13T20:17:52.183406885Z" level=warning msg="cleaning up after shim disconnected" id=86ed43cd15d9dc39f1afdbc5b52a6f434b0a72f830385e400ffd85e6c19cbe62 namespace=k8s.io
Jan 13 20:17:52.183410 containerd[1473]: time="2025-01-13T20:17:52.183415520Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:17:52.481538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86ed43cd15d9dc39f1afdbc5b52a6f434b0a72f830385e400ffd85e6c19cbe62-rootfs.mount: Deactivated successfully.
Jan 13 20:17:52.627718 kubelet[1780]: E0113 20:17:52.627680    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:52.749803 kubelet[1780]: E0113 20:17:52.749659    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4sxhx" podUID="9744204a-04ef-4999-88e2-3d074458261a"
Jan 13 20:17:52.770603 kubelet[1780]: E0113 20:17:52.770567    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:52.770862 kubelet[1780]: I0113 20:17:52.770820    1780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:17:52.771464 containerd[1473]: time="2025-01-13T20:17:52.771236122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 13 20:17:52.771563 kubelet[1780]: E0113 20:17:52.771451    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:52.782641 kubelet[1780]: I0113 20:17:52.782568    1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69bdfc6695-5vcnm" podStartSLOduration=2.374077418 podStartE2EDuration="3.782554373s" podCreationTimestamp="2025-01-13 20:17:49 +0000 UTC" firstStartedPulling="2025-01-13 20:17:49.689855223 +0000 UTC m=+9.361003602" lastFinishedPulling="2025-01-13 20:17:51.098332178 +0000 UTC m=+10.769480557" observedRunningTime="2025-01-13 20:17:51.778505337 +0000 UTC m=+11.449653716" watchObservedRunningTime="2025-01-13 20:17:52.782554373 +0000 UTC m=+12.453702752"
Jan 13 20:17:53.627977 kubelet[1780]: E0113 20:17:53.627903    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:54.602999 containerd[1473]: time="2025-01-13T20:17:54.602939182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:17:54.603541 containerd[1473]: time="2025-01-13T20:17:54.603486846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Jan 13 20:17:54.604291 containerd[1473]: time="2025-01-13T20:17:54.604256311Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:17:54.606341 containerd[1473]: time="2025-01-13T20:17:54.606294091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:17:54.607220 containerd[1473]: time="2025-01-13T20:17:54.607185130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 1.835910671s"
Jan 13 20:17:54.607253 containerd[1473]: time="2025-01-13T20:17:54.607219512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Jan 13 20:17:54.609631 containerd[1473]: time="2025-01-13T20:17:54.609603625Z" level=info msg="CreateContainer within sandbox \"d2e68df0d417917a66511af7aeef258df57c36d2995ba44b9a46edddb5ba1a73\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 20:17:54.621085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081731695.mount: Deactivated successfully.
Jan 13 20:17:54.622462 containerd[1473]: time="2025-01-13T20:17:54.622415072Z" level=info msg="CreateContainer within sandbox \"d2e68df0d417917a66511af7aeef258df57c36d2995ba44b9a46edddb5ba1a73\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"729a2aba1e68b7b9c8db1af6314bfd662b2a6443cd080b71dd4157f7ffc5095b\""
Jan 13 20:17:54.623207 containerd[1473]: time="2025-01-13T20:17:54.623170784Z" level=info msg="StartContainer for \"729a2aba1e68b7b9c8db1af6314bfd662b2a6443cd080b71dd4157f7ffc5095b\""
Jan 13 20:17:54.628999 kubelet[1780]: E0113 20:17:54.628960    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:54.643172 systemd[1]: run-containerd-runc-k8s.io-729a2aba1e68b7b9c8db1af6314bfd662b2a6443cd080b71dd4157f7ffc5095b-runc.pf42zY.mount: Deactivated successfully.
Jan 13 20:17:54.655490 systemd[1]: Started cri-containerd-729a2aba1e68b7b9c8db1af6314bfd662b2a6443cd080b71dd4157f7ffc5095b.scope - libcontainer container 729a2aba1e68b7b9c8db1af6314bfd662b2a6443cd080b71dd4157f7ffc5095b.
Jan 13 20:17:54.682711 containerd[1473]: time="2025-01-13T20:17:54.682668679Z" level=info msg="StartContainer for \"729a2aba1e68b7b9c8db1af6314bfd662b2a6443cd080b71dd4157f7ffc5095b\" returns successfully"
Jan 13 20:17:54.749570 kubelet[1780]: E0113 20:17:54.749525    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4sxhx" podUID="9744204a-04ef-4999-88e2-3d074458261a"
Jan 13 20:17:54.774960 kubelet[1780]: E0113 20:17:54.774643    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:55.099875 containerd[1473]: time="2025-01-13T20:17:55.099810032Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:17:55.101557 systemd[1]: cri-containerd-729a2aba1e68b7b9c8db1af6314bfd662b2a6443cd080b71dd4157f7ffc5095b.scope: Deactivated successfully.
Jan 13 20:17:55.168976 kubelet[1780]: I0113 20:17:55.168936    1780 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 20:17:55.214578 kubelet[1780]: I0113 20:17:55.214443    1780 topology_manager.go:215] "Topology Admit Handler" podUID="6589cd90-9a49-4dc8-928c-d4bfb9fedb8d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x5ccf"
Jan 13 20:17:55.219396 kubelet[1780]: I0113 20:17:55.219321    1780 topology_manager.go:215] "Topology Admit Handler" podUID="b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7h9bs"
Jan 13 20:17:55.219516 kubelet[1780]: I0113 20:17:55.219492    1780 topology_manager.go:215] "Topology Admit Handler" podUID="ee272026-1034-439f-966c-6692ba8e0711" podNamespace="calico-system" podName="calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:17:55.219849 systemd[1]: Created slice kubepods-burstable-pod6589cd90_9a49_4dc8_928c_d4bfb9fedb8d.slice - libcontainer container kubepods-burstable-pod6589cd90_9a49_4dc8_928c_d4bfb9fedb8d.slice.
Jan 13 20:17:55.223051 kubelet[1780]: I0113 20:17:55.223003    1780 topology_manager.go:215] "Topology Admit Handler" podUID="cdcf8404-fe95-41b4-a33a-7e988931f7a3" podNamespace="calico-apiserver" podName="calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:17:55.223522 kubelet[1780]: I0113 20:17:55.223186    1780 topology_manager.go:215] "Topology Admit Handler" podUID="10648835-fde4-414e-8a1f-6abaa02ccacc" podNamespace="calico-apiserver" podName="calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:17:55.227602 kubelet[1780]: I0113 20:17:55.227560    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4xp4\" (UniqueName: \"kubernetes.io/projected/10648835-fde4-414e-8a1f-6abaa02ccacc-kube-api-access-m4xp4\") pod \"calico-apiserver-5d9cccdfc4-mpttr\" (UID: \"10648835-fde4-414e-8a1f-6abaa02ccacc\") " pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:17:55.227668 kubelet[1780]: I0113 20:17:55.227603    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6589cd90-9a49-4dc8-928c-d4bfb9fedb8d-config-volume\") pod \"coredns-7db6d8ff4d-x5ccf\" (UID: \"6589cd90-9a49-4dc8-928c-d4bfb9fedb8d\") " pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:17:55.227668 kubelet[1780]: I0113 20:17:55.227623    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkqcg\" (UniqueName: \"kubernetes.io/projected/6589cd90-9a49-4dc8-928c-d4bfb9fedb8d-kube-api-access-bkqcg\") pod \"coredns-7db6d8ff4d-x5ccf\" (UID: \"6589cd90-9a49-4dc8-928c-d4bfb9fedb8d\") " pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:17:55.227668 kubelet[1780]: I0113 20:17:55.227639    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7-config-volume\") pod \"coredns-7db6d8ff4d-7h9bs\" (UID: \"b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7\") " pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:17:55.227668 kubelet[1780]: I0113 20:17:55.227655    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee272026-1034-439f-966c-6692ba8e0711-tigera-ca-bundle\") pod \"calico-kube-controllers-f94864fc5-tqbwc\" (UID: \"ee272026-1034-439f-966c-6692ba8e0711\") " pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:17:55.227769 kubelet[1780]: I0113 20:17:55.227671    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x2f4\" (UniqueName: \"kubernetes.io/projected/ee272026-1034-439f-966c-6692ba8e0711-kube-api-access-6x2f4\") pod \"calico-kube-controllers-f94864fc5-tqbwc\" (UID: \"ee272026-1034-439f-966c-6692ba8e0711\") " pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:17:55.227769 kubelet[1780]: I0113 20:17:55.227688    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cdcf8404-fe95-41b4-a33a-7e988931f7a3-calico-apiserver-certs\") pod \"calico-apiserver-5d9cccdfc4-gpjkp\" (UID: \"cdcf8404-fe95-41b4-a33a-7e988931f7a3\") " pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:17:55.227769 kubelet[1780]: I0113 20:17:55.227704    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n44jg\" (UniqueName: \"kubernetes.io/projected/cdcf8404-fe95-41b4-a33a-7e988931f7a3-kube-api-access-n44jg\") pod \"calico-apiserver-5d9cccdfc4-gpjkp\" (UID: \"cdcf8404-fe95-41b4-a33a-7e988931f7a3\") " pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:17:55.227769 kubelet[1780]: I0113 20:17:55.227721    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/10648835-fde4-414e-8a1f-6abaa02ccacc-calico-apiserver-certs\") pod \"calico-apiserver-5d9cccdfc4-mpttr\" (UID: \"10648835-fde4-414e-8a1f-6abaa02ccacc\") " pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:17:55.227769 kubelet[1780]: I0113 20:17:55.227737    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t6c5\" (UniqueName: \"kubernetes.io/projected/b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7-kube-api-access-5t6c5\") pod \"coredns-7db6d8ff4d-7h9bs\" (UID: \"b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7\") " pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:17:55.250132 systemd[1]: Created slice kubepods-burstable-podb6ad32c1_bf16_4af4_b2ca_d0d48cfec6e7.slice - libcontainer container kubepods-burstable-podb6ad32c1_bf16_4af4_b2ca_d0d48cfec6e7.slice.
Jan 13 20:17:55.265029 systemd[1]: Created slice kubepods-besteffort-podee272026_1034_439f_966c_6692ba8e0711.slice - libcontainer container kubepods-besteffort-podee272026_1034_439f_966c_6692ba8e0711.slice.
Jan 13 20:17:55.270536 systemd[1]: Created slice kubepods-besteffort-podcdcf8404_fe95_41b4_a33a_7e988931f7a3.slice - libcontainer container kubepods-besteffort-podcdcf8404_fe95_41b4_a33a_7e988931f7a3.slice.
Jan 13 20:17:55.275738 systemd[1]: Created slice kubepods-besteffort-pod10648835_fde4_414e_8a1f_6abaa02ccacc.slice - libcontainer container kubepods-besteffort-pod10648835_fde4_414e_8a1f_6abaa02ccacc.slice.
Jan 13 20:17:55.448706 containerd[1473]: time="2025-01-13T20:17:55.448570049Z" level=info msg="shim disconnected" id=729a2aba1e68b7b9c8db1af6314bfd662b2a6443cd080b71dd4157f7ffc5095b namespace=k8s.io
Jan 13 20:17:55.448706 containerd[1473]: time="2025-01-13T20:17:55.448630058Z" level=warning msg="cleaning up after shim disconnected" id=729a2aba1e68b7b9c8db1af6314bfd662b2a6443cd080b71dd4157f7ffc5095b namespace=k8s.io
Jan 13 20:17:55.448706 containerd[1473]: time="2025-01-13T20:17:55.448640453Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:17:55.548265 kubelet[1780]: E0113 20:17:55.548222    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:55.549310 containerd[1473]: time="2025-01-13T20:17:55.548701996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:0,}"
Jan 13 20:17:55.561939 kubelet[1780]: E0113 20:17:55.561898    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:55.562652 containerd[1473]: time="2025-01-13T20:17:55.562607522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:0,}"
Jan 13 20:17:55.568828 containerd[1473]: time="2025-01-13T20:17:55.568597012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:0,}"
Jan 13 20:17:55.577248 containerd[1473]: time="2025-01-13T20:17:55.577213293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 20:17:55.578357 containerd[1473]: time="2025-01-13T20:17:55.578329369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 20:17:55.628682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-729a2aba1e68b7b9c8db1af6314bfd662b2a6443cd080b71dd4157f7ffc5095b-rootfs.mount: Deactivated successfully.
Jan 13 20:17:55.636213 kubelet[1780]: E0113 20:17:55.632745    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:55.712420 containerd[1473]: time="2025-01-13T20:17:55.711046192Z" level=error msg="Failed to destroy network for sandbox \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.712420 containerd[1473]: time="2025-01-13T20:17:55.711397655Z" level=error msg="encountered an error cleaning up failed sandbox \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.712420 containerd[1473]: time="2025-01-13T20:17:55.711454986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.712752 kubelet[1780]: E0113 20:17:55.712487    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.712752 kubelet[1780]: E0113 20:17:55.712560    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:17:55.712752 kubelet[1780]: E0113 20:17:55.712592    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:17:55.712840 kubelet[1780]: E0113 20:17:55.712628    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x5ccf_kube-system(6589cd90-9a49-4dc8-928c-d4bfb9fedb8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x5ccf_kube-system(6589cd90-9a49-4dc8-928c-d4bfb9fedb8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x5ccf" podUID="6589cd90-9a49-4dc8-928c-d4bfb9fedb8d"
Jan 13 20:17:55.737486 containerd[1473]: time="2025-01-13T20:17:55.737436643Z" level=error msg="Failed to destroy network for sandbox \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.737833 containerd[1473]: time="2025-01-13T20:17:55.737764557Z" level=error msg="encountered an error cleaning up failed sandbox \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.737833 containerd[1473]: time="2025-01-13T20:17:55.737823647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.738097 kubelet[1780]: E0113 20:17:55.738018    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.738097 kubelet[1780]: E0113 20:17:55.738080    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:17:55.738213 kubelet[1780]: E0113 20:17:55.738103    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:17:55.738213 kubelet[1780]: E0113 20:17:55.738154    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7h9bs_kube-system(b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7h9bs_kube-system(b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7h9bs" podUID="b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7"
Jan 13 20:17:55.742048 containerd[1473]: time="2025-01-13T20:17:55.742008570Z" level=error msg="Failed to destroy network for sandbox \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.743499 containerd[1473]: time="2025-01-13T20:17:55.743461235Z" level=error msg="encountered an error cleaning up failed sandbox \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.743573 containerd[1473]: time="2025-01-13T20:17:55.743534078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.744673 kubelet[1780]: E0113 20:17:55.744496    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.744673 kubelet[1780]: E0113 20:17:55.744566    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:17:55.744673 kubelet[1780]: E0113 20:17:55.744588    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:17:55.744809 kubelet[1780]: E0113 20:17:55.744629    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f94864fc5-tqbwc_calico-system(ee272026-1034-439f-966c-6692ba8e0711)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f94864fc5-tqbwc_calico-system(ee272026-1034-439f-966c-6692ba8e0711)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc" podUID="ee272026-1034-439f-966c-6692ba8e0711"
Jan 13 20:17:55.750746 containerd[1473]: time="2025-01-13T20:17:55.750279266Z" level=error msg="Failed to destroy network for sandbox \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.750954 containerd[1473]: time="2025-01-13T20:17:55.750920342Z" level=error msg="encountered an error cleaning up failed sandbox \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.751011 containerd[1473]: time="2025-01-13T20:17:55.750991866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.754349 kubelet[1780]: E0113 20:17:55.751187    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.754349 kubelet[1780]: E0113 20:17:55.751239    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:17:55.754349 kubelet[1780]: E0113 20:17:55.751273    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:17:55.754505 kubelet[1780]: E0113 20:17:55.751309    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cccdfc4-gpjkp_calico-apiserver(cdcf8404-fe95-41b4-a33a-7e988931f7a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cccdfc4-gpjkp_calico-apiserver(cdcf8404-fe95-41b4-a33a-7e988931f7a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp" podUID="cdcf8404-fe95-41b4-a33a-7e988931f7a3"
Jan 13 20:17:55.764856 containerd[1473]: time="2025-01-13T20:17:55.764798361Z" level=error msg="Failed to destroy network for sandbox \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.765169 containerd[1473]: time="2025-01-13T20:17:55.765133792Z" level=error msg="encountered an error cleaning up failed sandbox \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.765219 containerd[1473]: time="2025-01-13T20:17:55.765199079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.765471 kubelet[1780]: E0113 20:17:55.765430    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.765878 kubelet[1780]: E0113 20:17:55.765577    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:17:55.765878 kubelet[1780]: E0113 20:17:55.765603    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:17:55.765878 kubelet[1780]: E0113 20:17:55.765654    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cccdfc4-mpttr_calico-apiserver(10648835-fde4-414e-8a1f-6abaa02ccacc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cccdfc4-mpttr_calico-apiserver(10648835-fde4-414e-8a1f-6abaa02ccacc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr" podUID="10648835-fde4-414e-8a1f-6abaa02ccacc"
Jan 13 20:17:55.778537 kubelet[1780]: E0113 20:17:55.778508    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:55.781308 containerd[1473]: time="2025-01-13T20:17:55.779598395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 13 20:17:55.781308 containerd[1473]: time="2025-01-13T20:17:55.781054418Z" level=info msg="StopPodSandbox for \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\""
Jan 13 20:17:55.781308 containerd[1473]: time="2025-01-13T20:17:55.781258355Z" level=info msg="Ensure that sandbox 81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74 in task-service has been cleanup successfully"
Jan 13 20:17:55.781490 kubelet[1780]: I0113 20:17:55.780370    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74"
Jan 13 20:17:55.781522 containerd[1473]: time="2025-01-13T20:17:55.781438184Z" level=info msg="TearDown network for sandbox \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\" successfully"
Jan 13 20:17:55.781522 containerd[1473]: time="2025-01-13T20:17:55.781452137Z" level=info msg="StopPodSandbox for \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\" returns successfully"
Jan 13 20:17:55.781809 kubelet[1780]: I0113 20:17:55.781780    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318"
Jan 13 20:17:55.782400 containerd[1473]: time="2025-01-13T20:17:55.782285076Z" level=info msg="StopPodSandbox for \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\""
Jan 13 20:17:55.782513 containerd[1473]: time="2025-01-13T20:17:55.782286915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:1,}"
Jan 13 20:17:55.782838 containerd[1473]: time="2025-01-13T20:17:55.782816727Z" level=info msg="Ensure that sandbox 9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318 in task-service has been cleanup successfully"
Jan 13 20:17:55.783356 containerd[1473]: time="2025-01-13T20:17:55.782987640Z" level=info msg="TearDown network for sandbox \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\" successfully"
Jan 13 20:17:55.783356 containerd[1473]: time="2025-01-13T20:17:55.783005591Z" level=info msg="StopPodSandbox for \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\" returns successfully"
Jan 13 20:17:55.783550 kubelet[1780]: I0113 20:17:55.783035    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b"
Jan 13 20:17:55.783596 containerd[1473]: time="2025-01-13T20:17:55.783438052Z" level=info msg="StopPodSandbox for \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\""
Jan 13 20:17:55.783596 containerd[1473]: time="2025-01-13T20:17:55.783578501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:1,}"
Jan 13 20:17:55.783838 containerd[1473]: time="2025-01-13T20:17:55.783816221Z" level=info msg="Ensure that sandbox daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b in task-service has been cleanup successfully"
Jan 13 20:17:55.783990 containerd[1473]: time="2025-01-13T20:17:55.783961907Z" level=info msg="TearDown network for sandbox \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\" successfully"
Jan 13 20:17:55.783990 containerd[1473]: time="2025-01-13T20:17:55.783978019Z" level=info msg="StopPodSandbox for \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\" returns successfully"
Jan 13 20:17:55.784270 kubelet[1780]: I0113 20:17:55.784242    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9"
Jan 13 20:17:55.785420 containerd[1473]: time="2025-01-13T20:17:55.785392784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:1,}"
Jan 13 20:17:55.786391 kubelet[1780]: I0113 20:17:55.786367    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf"
Jan 13 20:17:55.790063 containerd[1473]: time="2025-01-13T20:17:55.790034875Z" level=info msg="StopPodSandbox for \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\""
Jan 13 20:17:55.790202 containerd[1473]: time="2025-01-13T20:17:55.790181401Z" level=info msg="Ensure that sandbox bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9 in task-service has been cleanup successfully"
Jan 13 20:17:55.790369 containerd[1473]: time="2025-01-13T20:17:55.790350676Z" level=info msg="TearDown network for sandbox \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\" successfully"
Jan 13 20:17:55.790402 containerd[1473]: time="2025-01-13T20:17:55.790370226Z" level=info msg="StopPodSandbox for \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\" returns successfully"
Jan 13 20:17:55.790612 kubelet[1780]: E0113 20:17:55.790593    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:55.790858 containerd[1473]: time="2025-01-13T20:17:55.790830513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:1,}"
Jan 13 20:17:55.797866 containerd[1473]: time="2025-01-13T20:17:55.797831131Z" level=info msg="StopPodSandbox for \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\""
Jan 13 20:17:55.798010 containerd[1473]: time="2025-01-13T20:17:55.797990211Z" level=info msg="Ensure that sandbox 0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf in task-service has been cleanup successfully"
Jan 13 20:17:55.798228 containerd[1473]: time="2025-01-13T20:17:55.798210340Z" level=info msg="TearDown network for sandbox \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\" successfully"
Jan 13 20:17:55.798267 containerd[1473]: time="2025-01-13T20:17:55.798228850Z" level=info msg="StopPodSandbox for \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\" returns successfully"
Jan 13 20:17:55.798702 kubelet[1780]: E0113 20:17:55.798455    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:55.798768 containerd[1473]: time="2025-01-13T20:17:55.798734035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:1,}"
Jan 13 20:17:55.909813 containerd[1473]: time="2025-01-13T20:17:55.909763869Z" level=error msg="Failed to destroy network for sandbox \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.911358 containerd[1473]: time="2025-01-13T20:17:55.910103577Z" level=error msg="encountered an error cleaning up failed sandbox \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.911358 containerd[1473]: time="2025-01-13T20:17:55.910172542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.911970 kubelet[1780]: E0113 20:17:55.911637    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.911970 kubelet[1780]: E0113 20:17:55.911694    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:17:55.911970 kubelet[1780]: E0113 20:17:55.911715    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:17:55.912152 kubelet[1780]: E0113 20:17:55.911751    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cccdfc4-gpjkp_calico-apiserver(cdcf8404-fe95-41b4-a33a-7e988931f7a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cccdfc4-gpjkp_calico-apiserver(cdcf8404-fe95-41b4-a33a-7e988931f7a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp" podUID="cdcf8404-fe95-41b4-a33a-7e988931f7a3"
Jan 13 20:17:55.923261 containerd[1473]: time="2025-01-13T20:17:55.923206149Z" level=error msg="Failed to destroy network for sandbox \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.923692 containerd[1473]: time="2025-01-13T20:17:55.923662319Z" level=error msg="encountered an error cleaning up failed sandbox \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.923807 containerd[1473]: time="2025-01-13T20:17:55.923786296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.924098 kubelet[1780]: E0113 20:17:55.924054    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.924157 kubelet[1780]: E0113 20:17:55.924119    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:17:55.924157 kubelet[1780]: E0113 20:17:55.924140    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:17:55.924255 kubelet[1780]: E0113 20:17:55.924182    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7h9bs_kube-system(b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7h9bs_kube-system(b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7h9bs" podUID="b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7"
Jan 13 20:17:55.931697 containerd[1473]: time="2025-01-13T20:17:55.931646520Z" level=error msg="Failed to destroy network for sandbox \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.931999 containerd[1473]: time="2025-01-13T20:17:55.931963879Z" level=error msg="encountered an error cleaning up failed sandbox \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.932070 containerd[1473]: time="2025-01-13T20:17:55.932039561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.932353 kubelet[1780]: E0113 20:17:55.932309    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.932740 kubelet[1780]: E0113 20:17:55.932462    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:17:55.932740 kubelet[1780]: E0113 20:17:55.932497    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:17:55.932740 kubelet[1780]: E0113 20:17:55.932540    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cccdfc4-mpttr_calico-apiserver(10648835-fde4-414e-8a1f-6abaa02ccacc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cccdfc4-mpttr_calico-apiserver(10648835-fde4-414e-8a1f-6abaa02ccacc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr" podUID="10648835-fde4-414e-8a1f-6abaa02ccacc"
Jan 13 20:17:55.935486 containerd[1473]: time="2025-01-13T20:17:55.935445838Z" level=error msg="Failed to destroy network for sandbox \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.935869 containerd[1473]: time="2025-01-13T20:17:55.935839719Z" level=error msg="encountered an error cleaning up failed sandbox \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.935983 containerd[1473]: time="2025-01-13T20:17:55.935958099Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.936436 kubelet[1780]: E0113 20:17:55.936399    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.936491 kubelet[1780]: E0113 20:17:55.936452    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:17:55.936491 kubelet[1780]: E0113 20:17:55.936470    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:17:55.936566 kubelet[1780]: E0113 20:17:55.936514    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f94864fc5-tqbwc_calico-system(ee272026-1034-439f-966c-6692ba8e0711)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f94864fc5-tqbwc_calico-system(ee272026-1034-439f-966c-6692ba8e0711)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc" podUID="ee272026-1034-439f-966c-6692ba8e0711"
Jan 13 20:17:55.944819 containerd[1473]: time="2025-01-13T20:17:55.944765163Z" level=error msg="Failed to destroy network for sandbox \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.945226 containerd[1473]: time="2025-01-13T20:17:55.945196705Z" level=error msg="encountered an error cleaning up failed sandbox \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.945379 containerd[1473]: time="2025-01-13T20:17:55.945354865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.945724 kubelet[1780]: E0113 20:17:55.945679    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:55.945788 kubelet[1780]: E0113 20:17:55.945736    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:17:55.945788 kubelet[1780]: E0113 20:17:55.945764    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:17:55.945856 kubelet[1780]: E0113 20:17:55.945817    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x5ccf_kube-system(6589cd90-9a49-4dc8-928c-d4bfb9fedb8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x5ccf_kube-system(6589cd90-9a49-4dc8-928c-d4bfb9fedb8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x5ccf" podUID="6589cd90-9a49-4dc8-928c-d4bfb9fedb8d"
Jan 13 20:17:56.619494 systemd[1]: run-netns-cni\x2d3dba61e2\x2db3cd\x2d3e7b\x2daa17\x2dfa86a18a8b8e.mount: Deactivated successfully.
Jan 13 20:17:56.619580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b-shm.mount: Deactivated successfully.
Jan 13 20:17:56.619629 systemd[1]: run-netns-cni\x2d1f253008\x2d639d\x2dafb7\x2d86c1\x2dbc9ef2bb6991.mount: Deactivated successfully.
Jan 13 20:17:56.619672 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9-shm.mount: Deactivated successfully.
Jan 13 20:17:56.619730 systemd[1]: run-netns-cni\x2d799b3ce7\x2de822\x2dba46\x2d0da8\x2d409ed095c374.mount: Deactivated successfully.
Jan 13 20:17:56.619787 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf-shm.mount: Deactivated successfully.
Jan 13 20:17:56.633426 kubelet[1780]: E0113 20:17:56.633390    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:56.761693 systemd[1]: Created slice kubepods-besteffort-pod9744204a_04ef_4999_88e2_3d074458261a.slice - libcontainer container kubepods-besteffort-pod9744204a_04ef_4999_88e2_3d074458261a.slice.
Jan 13 20:17:56.763507 containerd[1473]: time="2025-01-13T20:17:56.763470670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4sxhx,Uid:9744204a-04ef-4999-88e2-3d074458261a,Namespace:calico-system,Attempt:0,}"
Jan 13 20:17:56.790375 kubelet[1780]: I0113 20:17:56.790051    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed"
Jan 13 20:17:56.793832 kubelet[1780]: I0113 20:17:56.793577    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9"
Jan 13 20:17:56.796450 kubelet[1780]: I0113 20:17:56.795951    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7"
Jan 13 20:17:56.796970 containerd[1473]: time="2025-01-13T20:17:56.796758284Z" level=info msg="StopPodSandbox for \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\""
Jan 13 20:17:56.796970 containerd[1473]: time="2025-01-13T20:17:56.796928323Z" level=info msg="Ensure that sandbox 978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7 in task-service has been cleanup successfully"
Jan 13 20:17:56.799231 containerd[1473]: time="2025-01-13T20:17:56.797346885Z" level=info msg="TearDown network for sandbox \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\" successfully"
Jan 13 20:17:56.799231 containerd[1473]: time="2025-01-13T20:17:56.797365876Z" level=info msg="StopPodSandbox for \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\" returns successfully"
Jan 13 20:17:56.799231 containerd[1473]: time="2025-01-13T20:17:56.797735141Z" level=info msg="StopPodSandbox for \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\""
Jan 13 20:17:56.799231 containerd[1473]: time="2025-01-13T20:17:56.797807946Z" level=info msg="TearDown network for sandbox \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\" successfully"
Jan 13 20:17:56.799231 containerd[1473]: time="2025-01-13T20:17:56.797817462Z" level=info msg="StopPodSandbox for \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\" returns successfully"
Jan 13 20:17:56.799231 containerd[1473]: time="2025-01-13T20:17:56.797872476Z" level=info msg="StopPodSandbox for \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\""
Jan 13 20:17:56.799231 containerd[1473]: time="2025-01-13T20:17:56.798004933Z" level=info msg="Ensure that sandbox b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed in task-service has been cleanup successfully"
Jan 13 20:17:56.799231 containerd[1473]: time="2025-01-13T20:17:56.798291477Z" level=info msg="TearDown network for sandbox \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\" successfully"
Jan 13 20:17:56.799231 containerd[1473]: time="2025-01-13T20:17:56.798309268Z" level=info msg="StopPodSandbox for \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\" returns successfully"
Jan 13 20:17:56.799231 containerd[1473]: time="2025-01-13T20:17:56.798524366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:2,}"
Jan 13 20:17:56.799231 containerd[1473]: time="2025-01-13T20:17:56.798655104Z" level=info msg="StopPodSandbox for \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\""
Jan 13 20:17:56.799231 containerd[1473]: time="2025-01-13T20:17:56.798720114Z" level=info msg="TearDown network for sandbox \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\" successfully"
Jan 13 20:17:56.799231 containerd[1473]: time="2025-01-13T20:17:56.798728949Z" level=info msg="StopPodSandbox for \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\" returns successfully"
Jan 13 20:17:56.798963 systemd[1]: run-netns-cni\x2df2eac6df\x2d363e\x2deaf7\x2df813\x2db7d831c183df.mount: Deactivated successfully.
Jan 13 20:17:56.802558 kubelet[1780]: E0113 20:17:56.802112    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:56.803395 kubelet[1780]: I0113 20:17:56.802873    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e"
Jan 13 20:17:56.803572 containerd[1473]: time="2025-01-13T20:17:56.803021114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:2,}"
Jan 13 20:17:56.802976 systemd[1]: run-netns-cni\x2dac5e21ad\x2d6c06\x2d1cc7\x2d2de8\x2d536ecac5dd51.mount: Deactivated successfully.
Jan 13 20:17:56.804257 containerd[1473]: time="2025-01-13T20:17:56.803659891Z" level=info msg="StopPodSandbox for \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\""
Jan 13 20:17:56.804545 kubelet[1780]: I0113 20:17:56.804468    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c"
Jan 13 20:17:56.804800 containerd[1473]: time="2025-01-13T20:17:56.804660457Z" level=info msg="Ensure that sandbox dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e in task-service has been cleanup successfully"
Jan 13 20:17:56.805312 containerd[1473]: time="2025-01-13T20:17:56.805229307Z" level=info msg="TearDown network for sandbox \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\" successfully"
Jan 13 20:17:56.805908 containerd[1473]: time="2025-01-13T20:17:56.805674536Z" level=info msg="StopPodSandbox for \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\" returns successfully"
Jan 13 20:17:56.805908 containerd[1473]: time="2025-01-13T20:17:56.805837298Z" level=info msg="StopPodSandbox for \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\""
Jan 13 20:17:56.806131 containerd[1473]: time="2025-01-13T20:17:56.806009257Z" level=info msg="Ensure that sandbox 315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c in task-service has been cleanup successfully"
Jan 13 20:17:56.806131 containerd[1473]: time="2025-01-13T20:17:56.806103892Z" level=info msg="StopPodSandbox for \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\""
Jan 13 20:17:56.806410 containerd[1473]: time="2025-01-13T20:17:56.806187772Z" level=info msg="TearDown network for sandbox \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\" successfully"
Jan 13 20:17:56.806410 containerd[1473]: time="2025-01-13T20:17:56.806203245Z" level=info msg="StopPodSandbox for \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\" returns successfully"
Jan 13 20:17:56.806410 containerd[1473]: time="2025-01-13T20:17:56.806228673Z" level=info msg="TearDown network for sandbox \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\" successfully"
Jan 13 20:17:56.806410 containerd[1473]: time="2025-01-13T20:17:56.806243506Z" level=info msg="StopPodSandbox for \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\" returns successfully"
Jan 13 20:17:56.806795 containerd[1473]: time="2025-01-13T20:17:56.806768697Z" level=info msg="StopPodSandbox for \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\""
Jan 13 20:17:56.807003 containerd[1473]: time="2025-01-13T20:17:56.806931579Z" level=info msg="TearDown network for sandbox \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\" successfully"
Jan 13 20:17:56.807003 containerd[1473]: time="2025-01-13T20:17:56.806948851Z" level=info msg="StopPodSandbox for \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\" returns successfully"
Jan 13 20:17:56.807003 containerd[1473]: time="2025-01-13T20:17:56.806829308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:2,}"
Jan 13 20:17:56.807393 containerd[1473]: time="2025-01-13T20:17:56.807364774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:2,}"
Jan 13 20:17:56.815291 containerd[1473]: time="2025-01-13T20:17:56.815253633Z" level=info msg="StopPodSandbox for \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\""
Jan 13 20:17:56.815970 containerd[1473]: time="2025-01-13T20:17:56.815820124Z" level=info msg="Ensure that sandbox 9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9 in task-service has been cleanup successfully"
Jan 13 20:17:56.816142 containerd[1473]: time="2025-01-13T20:17:56.816116744Z" level=info msg="TearDown network for sandbox \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\" successfully"
Jan 13 20:17:56.816252 containerd[1473]: time="2025-01-13T20:17:56.816224692Z" level=info msg="StopPodSandbox for \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\" returns successfully"
Jan 13 20:17:56.817241 containerd[1473]: time="2025-01-13T20:17:56.817218061Z" level=info msg="StopPodSandbox for \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\""
Jan 13 20:17:56.817531 containerd[1473]: time="2025-01-13T20:17:56.817495050Z" level=info msg="TearDown network for sandbox \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\" successfully"
Jan 13 20:17:56.817531 containerd[1473]: time="2025-01-13T20:17:56.817510523Z" level=info msg="StopPodSandbox for \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\" returns successfully"
Jan 13 20:17:56.817943 kubelet[1780]: E0113 20:17:56.817901    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:56.818276 containerd[1473]: time="2025-01-13T20:17:56.818253930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:2,}"
Jan 13 20:17:56.825970 containerd[1473]: time="2025-01-13T20:17:56.825929290Z" level=error msg="Failed to destroy network for sandbox \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.826248 containerd[1473]: time="2025-01-13T20:17:56.826224870Z" level=error msg="encountered an error cleaning up failed sandbox \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.826319 containerd[1473]: time="2025-01-13T20:17:56.826294317Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4sxhx,Uid:9744204a-04ef-4999-88e2-3d074458261a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.826508 kubelet[1780]: E0113 20:17:56.826475    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.826553 kubelet[1780]: E0113 20:17:56.826527    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4sxhx"
Jan 13 20:17:56.826602 kubelet[1780]: E0113 20:17:56.826558    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4sxhx"
Jan 13 20:17:56.826628 kubelet[1780]: E0113 20:17:56.826609    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4sxhx_calico-system(9744204a-04ef-4999-88e2-3d074458261a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4sxhx_calico-system(9744204a-04ef-4999-88e2-3d074458261a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4sxhx" podUID="9744204a-04ef-4999-88e2-3d074458261a"
Jan 13 20:17:56.931991 containerd[1473]: time="2025-01-13T20:17:56.930571146Z" level=error msg="Failed to destroy network for sandbox \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.931991 containerd[1473]: time="2025-01-13T20:17:56.930895112Z" level=error msg="encountered an error cleaning up failed sandbox \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.931991 containerd[1473]: time="2025-01-13T20:17:56.930952805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.932175 kubelet[1780]: E0113 20:17:56.931352    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.932175 kubelet[1780]: E0113 20:17:56.931407    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:17:56.932175 kubelet[1780]: E0113 20:17:56.931436    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:17:56.932264 kubelet[1780]: E0113 20:17:56.931481    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cccdfc4-mpttr_calico-apiserver(10648835-fde4-414e-8a1f-6abaa02ccacc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cccdfc4-mpttr_calico-apiserver(10648835-fde4-414e-8a1f-6abaa02ccacc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr" podUID="10648835-fde4-414e-8a1f-6abaa02ccacc"
Jan 13 20:17:56.932561 containerd[1473]: time="2025-01-13T20:17:56.932489197Z" level=error msg="Failed to destroy network for sandbox \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.934174 containerd[1473]: time="2025-01-13T20:17:56.933996482Z" level=error msg="encountered an error cleaning up failed sandbox \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.934174 containerd[1473]: time="2025-01-13T20:17:56.934069367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.934349 kubelet[1780]: E0113 20:17:56.934286    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.934392 kubelet[1780]: E0113 20:17:56.934356    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:17:56.934392 kubelet[1780]: E0113 20:17:56.934377    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:17:56.934456 kubelet[1780]: E0113 20:17:56.934416    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x5ccf_kube-system(6589cd90-9a49-4dc8-928c-d4bfb9fedb8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x5ccf_kube-system(6589cd90-9a49-4dc8-928c-d4bfb9fedb8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x5ccf" podUID="6589cd90-9a49-4dc8-928c-d4bfb9fedb8d"
Jan 13 20:17:56.938405 containerd[1473]: time="2025-01-13T20:17:56.938360132Z" level=error msg="Failed to destroy network for sandbox \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.938970 containerd[1473]: time="2025-01-13T20:17:56.938811118Z" level=error msg="encountered an error cleaning up failed sandbox \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.938970 containerd[1473]: time="2025-01-13T20:17:56.938871690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.939089 kubelet[1780]: E0113 20:17:56.939056    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.939150 kubelet[1780]: E0113 20:17:56.939109    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:17:56.939150 kubelet[1780]: E0113 20:17:56.939127    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:17:56.939198 kubelet[1780]: E0113 20:17:56.939168    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7h9bs_kube-system(b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7h9bs_kube-system(b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7h9bs" podUID="b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7"
Jan 13 20:17:56.939463 containerd[1473]: time="2025-01-13T20:17:56.939310961Z" level=error msg="Failed to destroy network for sandbox \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.939733 containerd[1473]: time="2025-01-13T20:17:56.939706014Z" level=error msg="encountered an error cleaning up failed sandbox \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.939963 containerd[1473]: time="2025-01-13T20:17:56.939818721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.940039 kubelet[1780]: E0113 20:17:56.939977    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.940074 kubelet[1780]: E0113 20:17:56.940045    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:17:56.940074 kubelet[1780]: E0113 20:17:56.940065    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:17:56.940166 kubelet[1780]: E0113 20:17:56.940110    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cccdfc4-gpjkp_calico-apiserver(cdcf8404-fe95-41b4-a33a-7e988931f7a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cccdfc4-gpjkp_calico-apiserver(cdcf8404-fe95-41b4-a33a-7e988931f7a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp" podUID="cdcf8404-fe95-41b4-a33a-7e988931f7a3"
Jan 13 20:17:56.946322 containerd[1473]: time="2025-01-13T20:17:56.946197536Z" level=error msg="Failed to destroy network for sandbox \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.946593 containerd[1473]: time="2025-01-13T20:17:56.946542012Z" level=error msg="encountered an error cleaning up failed sandbox \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.946660 containerd[1473]: time="2025-01-13T20:17:56.946605662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.946976 kubelet[1780]: E0113 20:17:56.946826    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.946976 kubelet[1780]: E0113 20:17:56.946876    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:17:56.946976 kubelet[1780]: E0113 20:17:56.946895    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:17:56.947073 kubelet[1780]: E0113 20:17:56.946931    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f94864fc5-tqbwc_calico-system(ee272026-1034-439f-966c-6692ba8e0711)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f94864fc5-tqbwc_calico-system(ee272026-1034-439f-966c-6692ba8e0711)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc" podUID="ee272026-1034-439f-966c-6692ba8e0711"
Jan 13 20:17:57.619610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839-shm.mount: Deactivated successfully.
Jan 13 20:17:57.619698 systemd[1]: run-netns-cni\x2d13e0af0b\x2d1e97\x2d50e1\x2db135\x2ded8dc41aa779.mount: Deactivated successfully.
Jan 13 20:17:57.619745 systemd[1]: run-netns-cni\x2d8f8dd283\x2d6def\x2db4a4\x2d13f5\x2d23cb675b4ce6.mount: Deactivated successfully.
Jan 13 20:17:57.619787 systemd[1]: run-netns-cni\x2dbb21a0e8\x2d9e19\x2da642\x2d8e7e\x2db056757d30e0.mount: Deactivated successfully.
Jan 13 20:17:57.634365 kubelet[1780]: E0113 20:17:57.634302    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:57.807602 kubelet[1780]: I0113 20:17:57.807532    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46"
Jan 13 20:17:57.809444 containerd[1473]: time="2025-01-13T20:17:57.808081217Z" level=info msg="StopPodSandbox for \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\""
Jan 13 20:17:57.809444 containerd[1473]: time="2025-01-13T20:17:57.808250981Z" level=info msg="Ensure that sandbox 4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46 in task-service has been cleanup successfully"
Jan 13 20:17:57.809870 systemd[1]: run-netns-cni\x2db7cef994\x2d14b7\x2d007a\x2d0e3b\x2d2ca8ec25acc9.mount: Deactivated successfully.
Jan 13 20:17:57.810754 containerd[1473]: time="2025-01-13T20:17:57.810530688Z" level=info msg="TearDown network for sandbox \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\" successfully"
Jan 13 20:17:57.810754 containerd[1473]: time="2025-01-13T20:17:57.810560435Z" level=info msg="StopPodSandbox for \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\" returns successfully"
Jan 13 20:17:57.810934 containerd[1473]: time="2025-01-13T20:17:57.810810923Z" level=info msg="StopPodSandbox for \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\""
Jan 13 20:17:57.810934 containerd[1473]: time="2025-01-13T20:17:57.810888609Z" level=info msg="TearDown network for sandbox \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\" successfully"
Jan 13 20:17:57.810934 containerd[1473]: time="2025-01-13T20:17:57.810897964Z" level=info msg="StopPodSandbox for \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\" returns successfully"
Jan 13 20:17:57.811063 kubelet[1780]: I0113 20:17:57.810839    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142"
Jan 13 20:17:57.811371 containerd[1473]: time="2025-01-13T20:17:57.811347285Z" level=info msg="StopPodSandbox for \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\""
Jan 13 20:17:57.811466 containerd[1473]: time="2025-01-13T20:17:57.811428968Z" level=info msg="TearDown network for sandbox \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\" successfully"
Jan 13 20:17:57.811466 containerd[1473]: time="2025-01-13T20:17:57.811442762Z" level=info msg="StopPodSandbox for \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\" returns successfully"
Jan 13 20:17:57.811642 containerd[1473]: time="2025-01-13T20:17:57.811547036Z" level=info msg="StopPodSandbox for \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\""
Jan 13 20:17:57.811681 kubelet[1780]: E0113 20:17:57.811617    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:57.811716 containerd[1473]: time="2025-01-13T20:17:57.811698888Z" level=info msg="Ensure that sandbox 235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142 in task-service has been cleanup successfully"
Jan 13 20:17:57.812279 containerd[1473]: time="2025-01-13T20:17:57.812251603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:3,}"
Jan 13 20:17:57.813465 kubelet[1780]: I0113 20:17:57.812958    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839"
Jan 13 20:17:57.813530 containerd[1473]: time="2025-01-13T20:17:57.813302575Z" level=info msg="TearDown network for sandbox \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\" successfully"
Jan 13 20:17:57.813530 containerd[1473]: time="2025-01-13T20:17:57.813322447Z" level=info msg="StopPodSandbox for \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\" returns successfully"
Jan 13 20:17:57.813180 systemd[1]: run-netns-cni\x2dcf0b7f39\x2df798\x2d0b91\x2d9f80\x2df44c3c7c3249.mount: Deactivated successfully.
Jan 13 20:17:57.813636 containerd[1473]: time="2025-01-13T20:17:57.813537191Z" level=info msg="StopPodSandbox for \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\""
Jan 13 20:17:57.813755 containerd[1473]: time="2025-01-13T20:17:57.813666214Z" level=info msg="Ensure that sandbox 19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839 in task-service has been cleanup successfully"
Jan 13 20:17:57.813846 containerd[1473]: time="2025-01-13T20:17:57.813823824Z" level=info msg="TearDown network for sandbox \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\" successfully"
Jan 13 20:17:57.813877 containerd[1473]: time="2025-01-13T20:17:57.813843335Z" level=info msg="StopPodSandbox for \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\" returns successfully"
Jan 13 20:17:57.813897 containerd[1473]: time="2025-01-13T20:17:57.813870763Z" level=info msg="StopPodSandbox for \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\""
Jan 13 20:17:57.814033 containerd[1473]: time="2025-01-13T20:17:57.813967440Z" level=info msg="TearDown network for sandbox \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\" successfully"
Jan 13 20:17:57.814033 containerd[1473]: time="2025-01-13T20:17:57.813978075Z" level=info msg="StopPodSandbox for \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\" returns successfully"
Jan 13 20:17:57.814695 containerd[1473]: time="2025-01-13T20:17:57.814421718Z" level=info msg="StopPodSandbox for \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\""
Jan 13 20:17:57.814695 containerd[1473]: time="2025-01-13T20:17:57.814550900Z" level=info msg="TearDown network for sandbox \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\" successfully"
Jan 13 20:17:57.814695 containerd[1473]: time="2025-01-13T20:17:57.814564494Z" level=info msg="StopPodSandbox for \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\" returns successfully"
Jan 13 20:17:57.814791 kubelet[1780]: E0113 20:17:57.814747    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:57.815315 containerd[1473]: time="2025-01-13T20:17:57.814978191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:3,}"
Jan 13 20:17:57.815709 containerd[1473]: time="2025-01-13T20:17:57.815639137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4sxhx,Uid:9744204a-04ef-4999-88e2-3d074458261a,Namespace:calico-system,Attempt:1,}"
Jan 13 20:17:57.815953 kubelet[1780]: I0113 20:17:57.815771    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4"
Jan 13 20:17:57.816476 containerd[1473]: time="2025-01-13T20:17:57.816275614Z" level=info msg="StopPodSandbox for \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\""
Jan 13 20:17:57.816476 containerd[1473]: time="2025-01-13T20:17:57.816429945Z" level=info msg="Ensure that sandbox 860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4 in task-service has been cleanup successfully"
Jan 13 20:17:57.816581 systemd[1]: run-netns-cni\x2dc8e41668\x2d4ab7\x2d54be\x2d7edf\x2d8303c5de6809.mount: Deactivated successfully.
Jan 13 20:17:57.816639 containerd[1473]: time="2025-01-13T20:17:57.816613463Z" level=info msg="TearDown network for sandbox \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\" successfully"
Jan 13 20:17:57.816639 containerd[1473]: time="2025-01-13T20:17:57.816629097Z" level=info msg="StopPodSandbox for \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\" returns successfully"
Jan 13 20:17:57.818423 systemd[1]: run-netns-cni\x2d21846afb\x2dbbfb\x2da229\x2de15f\x2d43b7388a893a.mount: Deactivated successfully.
Jan 13 20:17:57.819132 containerd[1473]: time="2025-01-13T20:17:57.818676826Z" level=info msg="StopPodSandbox for \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\""
Jan 13 20:17:57.819132 containerd[1473]: time="2025-01-13T20:17:57.818747675Z" level=info msg="TearDown network for sandbox \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\" successfully"
Jan 13 20:17:57.819132 containerd[1473]: time="2025-01-13T20:17:57.818756671Z" level=info msg="StopPodSandbox for \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\" returns successfully"
Jan 13 20:17:57.819132 containerd[1473]: time="2025-01-13T20:17:57.819104676Z" level=info msg="StopPodSandbox for \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\""
Jan 13 20:17:57.819264 containerd[1473]: time="2025-01-13T20:17:57.819182361Z" level=info msg="TearDown network for sandbox \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\" successfully"
Jan 13 20:17:57.819264 containerd[1473]: time="2025-01-13T20:17:57.819192277Z" level=info msg="StopPodSandbox for \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\" returns successfully"
Jan 13 20:17:57.820259 containerd[1473]: time="2025-01-13T20:17:57.820205946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:3,}"
Jan 13 20:17:57.820572 kubelet[1780]: I0113 20:17:57.820555    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec"
Jan 13 20:17:57.821489 containerd[1473]: time="2025-01-13T20:17:57.821462708Z" level=info msg="StopPodSandbox for \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\""
Jan 13 20:17:57.821715 containerd[1473]: time="2025-01-13T20:17:57.821675893Z" level=info msg="Ensure that sandbox f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec in task-service has been cleanup successfully"
Jan 13 20:17:57.822017 containerd[1473]: time="2025-01-13T20:17:57.821997710Z" level=info msg="TearDown network for sandbox \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\" successfully"
Jan 13 20:17:57.822052 containerd[1473]: time="2025-01-13T20:17:57.822016022Z" level=info msg="StopPodSandbox for \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\" returns successfully"
Jan 13 20:17:57.822409 containerd[1473]: time="2025-01-13T20:17:57.822384138Z" level=info msg="StopPodSandbox for \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\""
Jan 13 20:17:57.822483 containerd[1473]: time="2025-01-13T20:17:57.822468780Z" level=info msg="TearDown network for sandbox \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\" successfully"
Jan 13 20:17:57.822512 containerd[1473]: time="2025-01-13T20:17:57.822482614Z" level=info msg="StopPodSandbox for \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\" returns successfully"
Jan 13 20:17:57.822903 containerd[1473]: time="2025-01-13T20:17:57.822877918Z" level=info msg="StopPodSandbox for \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\""
Jan 13 20:17:57.823026 containerd[1473]: time="2025-01-13T20:17:57.822957043Z" level=info msg="TearDown network for sandbox \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\" successfully"
Jan 13 20:17:57.823026 containerd[1473]: time="2025-01-13T20:17:57.822970597Z" level=info msg="StopPodSandbox for \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\" returns successfully"
Jan 13 20:17:57.823453 containerd[1473]: time="2025-01-13T20:17:57.823373658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:3,}"
Jan 13 20:17:57.823516 kubelet[1780]: I0113 20:17:57.823492    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3"
Jan 13 20:17:57.823960 containerd[1473]: time="2025-01-13T20:17:57.823937008Z" level=info msg="StopPodSandbox for \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\""
Jan 13 20:17:57.824133 containerd[1473]: time="2025-01-13T20:17:57.824108451Z" level=info msg="Ensure that sandbox 03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3 in task-service has been cleanup successfully"
Jan 13 20:17:57.824352 containerd[1473]: time="2025-01-13T20:17:57.824306523Z" level=info msg="TearDown network for sandbox \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\" successfully"
Jan 13 20:17:57.824390 containerd[1473]: time="2025-01-13T20:17:57.824341988Z" level=info msg="StopPodSandbox for \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\" returns successfully"
Jan 13 20:17:57.824730 containerd[1473]: time="2025-01-13T20:17:57.824703547Z" level=info msg="StopPodSandbox for \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\""
Jan 13 20:17:57.824811 containerd[1473]: time="2025-01-13T20:17:57.824795026Z" level=info msg="TearDown network for sandbox \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\" successfully"
Jan 13 20:17:57.824843 containerd[1473]: time="2025-01-13T20:17:57.824810499Z" level=info msg="StopPodSandbox for \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\" returns successfully"
Jan 13 20:17:57.825138 containerd[1473]: time="2025-01-13T20:17:57.825106448Z" level=info msg="StopPodSandbox for \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\""
Jan 13 20:17:57.825211 containerd[1473]: time="2025-01-13T20:17:57.825196808Z" level=info msg="TearDown network for sandbox \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\" successfully"
Jan 13 20:17:57.825211 containerd[1473]: time="2025-01-13T20:17:57.825209402Z" level=info msg="StopPodSandbox for \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\" returns successfully"
Jan 13 20:17:57.825801 containerd[1473]: time="2025-01-13T20:17:57.825774951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:3,}"
Jan 13 20:17:57.927807 containerd[1473]: time="2025-01-13T20:17:57.927676328Z" level=error msg="Failed to destroy network for sandbox \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.928452 containerd[1473]: time="2025-01-13T20:17:57.928400286Z" level=error msg="encountered an error cleaning up failed sandbox \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.928511 containerd[1473]: time="2025-01-13T20:17:57.928463778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.928771 kubelet[1780]: E0113 20:17:57.928732    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.928840 kubelet[1780]: E0113 20:17:57.928788    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:17:57.928840 kubelet[1780]: E0113 20:17:57.928809    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:17:57.928889 kubelet[1780]: E0113 20:17:57.928855    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7h9bs_kube-system(b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7h9bs_kube-system(b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7h9bs" podUID="b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7"
Jan 13 20:17:57.950746 containerd[1473]: time="2025-01-13T20:17:57.950603335Z" level=error msg="Failed to destroy network for sandbox \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.951456 containerd[1473]: time="2025-01-13T20:17:57.951419732Z" level=error msg="encountered an error cleaning up failed sandbox \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.951819 containerd[1473]: time="2025-01-13T20:17:57.951791567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.952322 kubelet[1780]: E0113 20:17:57.952165    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.952322 kubelet[1780]: E0113 20:17:57.952222    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:17:57.952322 kubelet[1780]: E0113 20:17:57.952244    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:17:57.952458 kubelet[1780]: E0113 20:17:57.952279    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x5ccf_kube-system(6589cd90-9a49-4dc8-928c-d4bfb9fedb8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x5ccf_kube-system(6589cd90-9a49-4dc8-928c-d4bfb9fedb8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x5ccf" podUID="6589cd90-9a49-4dc8-928c-d4bfb9fedb8d"
Jan 13 20:17:57.955160 containerd[1473]: time="2025-01-13T20:17:57.954600678Z" level=error msg="Failed to destroy network for sandbox \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.955160 containerd[1473]: time="2025-01-13T20:17:57.954755889Z" level=error msg="Failed to destroy network for sandbox \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.955160 containerd[1473]: time="2025-01-13T20:17:57.954985866Z" level=error msg="encountered an error cleaning up failed sandbox \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.955160 containerd[1473]: time="2025-01-13T20:17:57.955033845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4sxhx,Uid:9744204a-04ef-4999-88e2-3d074458261a,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.955723 containerd[1473]: time="2025-01-13T20:17:57.955366737Z" level=error msg="encountered an error cleaning up failed sandbox \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.955723 containerd[1473]: time="2025-01-13T20:17:57.955507874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.955829 kubelet[1780]: E0113 20:17:57.955243    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.955829 kubelet[1780]: E0113 20:17:57.955291    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4sxhx"
Jan 13 20:17:57.955829 kubelet[1780]: E0113 20:17:57.955315    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4sxhx"
Jan 13 20:17:57.955921 kubelet[1780]: E0113 20:17:57.955379    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4sxhx_calico-system(9744204a-04ef-4999-88e2-3d074458261a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4sxhx_calico-system(9744204a-04ef-4999-88e2-3d074458261a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4sxhx" podUID="9744204a-04ef-4999-88e2-3d074458261a"
Jan 13 20:17:57.955921 kubelet[1780]: E0113 20:17:57.955724    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.955921 kubelet[1780]: E0113 20:17:57.955796    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:17:57.956275 kubelet[1780]: E0113 20:17:57.955813    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:17:57.956275 kubelet[1780]: E0113 20:17:57.955868    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cccdfc4-mpttr_calico-apiserver(10648835-fde4-414e-8a1f-6abaa02ccacc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cccdfc4-mpttr_calico-apiserver(10648835-fde4-414e-8a1f-6abaa02ccacc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr" podUID="10648835-fde4-414e-8a1f-6abaa02ccacc"
Jan 13 20:17:57.966187 containerd[1473]: time="2025-01-13T20:17:57.966135150Z" level=error msg="Failed to destroy network for sandbox \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.966545 containerd[1473]: time="2025-01-13T20:17:57.966510623Z" level=error msg="encountered an error cleaning up failed sandbox \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.966545 containerd[1473]: time="2025-01-13T20:17:57.966581072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.966874 kubelet[1780]: E0113 20:17:57.966776    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.966874 kubelet[1780]: E0113 20:17:57.966847    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:17:57.966958 kubelet[1780]: E0113 20:17:57.966865    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:17:57.966958 kubelet[1780]: E0113 20:17:57.966930    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f94864fc5-tqbwc_calico-system(ee272026-1034-439f-966c-6692ba8e0711)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f94864fc5-tqbwc_calico-system(ee272026-1034-439f-966c-6692ba8e0711)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc" podUID="ee272026-1034-439f-966c-6692ba8e0711"
Jan 13 20:17:57.977048 containerd[1473]: time="2025-01-13T20:17:57.976985846Z" level=error msg="Failed to destroy network for sandbox \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.977375 containerd[1473]: time="2025-01-13T20:17:57.977320097Z" level=error msg="encountered an error cleaning up failed sandbox \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.977426 containerd[1473]: time="2025-01-13T20:17:57.977405179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.977628 kubelet[1780]: E0113 20:17:57.977590    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.977681 kubelet[1780]: E0113 20:17:57.977650    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:17:57.977681 kubelet[1780]: E0113 20:17:57.977670    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:17:57.977752 kubelet[1780]: E0113 20:17:57.977711    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cccdfc4-gpjkp_calico-apiserver(cdcf8404-fe95-41b4-a33a-7e988931f7a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cccdfc4-gpjkp_calico-apiserver(cdcf8404-fe95-41b4-a33a-7e988931f7a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp" podUID="cdcf8404-fe95-41b4-a33a-7e988931f7a3"
Jan 13 20:17:57.984014 kubelet[1780]: I0113 20:17:57.983471    1780 topology_manager.go:215] "Topology Admit Handler" podUID="77a581ef-9403-4f14-b6fa-ada6a2a597bc" podNamespace="default" podName="nginx-deployment-85f456d6dd-xdxbv"
Jan 13 20:17:58.005023 systemd[1]: Created slice kubepods-besteffort-pod77a581ef_9403_4f14_b6fa_ada6a2a597bc.slice - libcontainer container kubepods-besteffort-pod77a581ef_9403_4f14_b6fa_ada6a2a597bc.slice.
Jan 13 20:17:58.148740 kubelet[1780]: I0113 20:17:58.148694    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-977jk\" (UniqueName: \"kubernetes.io/projected/77a581ef-9403-4f14-b6fa-ada6a2a597bc-kube-api-access-977jk\") pod \"nginx-deployment-85f456d6dd-xdxbv\" (UID: \"77a581ef-9403-4f14-b6fa-ada6a2a597bc\") " pod="default/nginx-deployment-85f456d6dd-xdxbv"
Jan 13 20:17:58.309521 containerd[1473]: time="2025-01-13T20:17:58.309470037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xdxbv,Uid:77a581ef-9403-4f14-b6fa-ada6a2a597bc,Namespace:default,Attempt:0,}"
Jan 13 20:17:58.400975 containerd[1473]: time="2025-01-13T20:17:58.400780341Z" level=error msg="Failed to destroy network for sandbox \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.401361 containerd[1473]: time="2025-01-13T20:17:58.401209282Z" level=error msg="encountered an error cleaning up failed sandbox \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.401361 containerd[1473]: time="2025-01-13T20:17:58.401273055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xdxbv,Uid:77a581ef-9403-4f14-b6fa-ada6a2a597bc,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.401606 kubelet[1780]: E0113 20:17:58.401486    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.401606 kubelet[1780]: E0113 20:17:58.401541    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xdxbv"
Jan 13 20:17:58.401606 kubelet[1780]: E0113 20:17:58.401560    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xdxbv"
Jan 13 20:17:58.401703 kubelet[1780]: E0113 20:17:58.401611    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-xdxbv_default(77a581ef-9403-4f14-b6fa-ada6a2a597bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-xdxbv_default(77a581ef-9403-4f14-b6fa-ada6a2a597bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-xdxbv" podUID="77a581ef-9403-4f14-b6fa-ada6a2a597bc"
Jan 13 20:17:58.623035 systemd[1]: run-netns-cni\x2d0ffb7f28\x2d82b8\x2d751e\x2d9d54\x2d92ef25d9c655.mount: Deactivated successfully.
Jan 13 20:17:58.623159 systemd[1]: run-netns-cni\x2d1acbee69\x2d96a9\x2d2c70\x2dedd3\x2d2e36480fb30f.mount: Deactivated successfully.
Jan 13 20:17:58.635800 kubelet[1780]: E0113 20:17:58.635239    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:58.828314 kubelet[1780]: I0113 20:17:58.828287    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985"
Jan 13 20:17:58.831119 kubelet[1780]: I0113 20:17:58.831102    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850"
Jan 13 20:17:58.832242 containerd[1473]: time="2025-01-13T20:17:58.832210490Z" level=info msg="StopPodSandbox for \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\""
Jan 13 20:17:58.832534 containerd[1473]: time="2025-01-13T20:17:58.832379140Z" level=info msg="Ensure that sandbox 242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850 in task-service has been cleanup successfully"
Jan 13 20:17:58.833007 containerd[1473]: time="2025-01-13T20:17:58.832562783Z" level=info msg="TearDown network for sandbox \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\" successfully"
Jan 13 20:17:58.833007 containerd[1473]: time="2025-01-13T20:17:58.832579816Z" level=info msg="StopPodSandbox for \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\" returns successfully"
Jan 13 20:17:58.833849 containerd[1473]: time="2025-01-13T20:17:58.833823018Z" level=info msg="StopPodSandbox for \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\""
Jan 13 20:17:58.834159 containerd[1473]: time="2025-01-13T20:17:58.834137407Z" level=info msg="TearDown network for sandbox \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\" successfully"
Jan 13 20:17:58.834150 systemd[1]: run-netns-cni\x2dc36c0513\x2dd9e7\x2d3911\x2deba9\x2d2c93e6db8774.mount: Deactivated successfully.
Jan 13 20:17:58.835169 containerd[1473]: time="2025-01-13T20:17:58.835085612Z" level=info msg="StopPodSandbox for \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\" returns successfully"
Jan 13 20:17:58.835648 containerd[1473]: time="2025-01-13T20:17:58.835613552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4sxhx,Uid:9744204a-04ef-4999-88e2-3d074458261a,Namespace:calico-system,Attempt:2,}"
Jan 13 20:17:58.836566 kubelet[1780]: I0113 20:17:58.836539    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79"
Jan 13 20:17:58.837597 containerd[1473]: time="2025-01-13T20:17:58.837443909Z" level=info msg="StopPodSandbox for \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\""
Jan 13 20:17:58.837660 containerd[1473]: time="2025-01-13T20:17:58.837624234Z" level=info msg="Ensure that sandbox b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79 in task-service has been cleanup successfully"
Jan 13 20:17:58.837859 containerd[1473]: time="2025-01-13T20:17:58.837836385Z" level=info msg="TearDown network for sandbox \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\" successfully"
Jan 13 20:17:58.838084 containerd[1473]: time="2025-01-13T20:17:58.837855817Z" level=info msg="StopPodSandbox for \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\" returns successfully"
Jan 13 20:17:58.838901 kubelet[1780]: I0113 20:17:58.838715    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59"
Jan 13 20:17:58.839363 containerd[1473]: time="2025-01-13T20:17:58.838574518Z" level=info msg="StopPodSandbox for \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\""
Jan 13 20:17:58.839864 containerd[1473]: time="2025-01-13T20:17:58.838662161Z" level=info msg="StopPodSandbox for \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\""
Jan 13 20:17:58.839864 containerd[1473]: time="2025-01-13T20:17:58.839818039Z" level=info msg="TearDown network for sandbox \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\" successfully"
Jan 13 20:17:58.839864 containerd[1473]: time="2025-01-13T20:17:58.839833913Z" level=info msg="StopPodSandbox for \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\" returns successfully"
Jan 13 20:17:58.839972 containerd[1473]: time="2025-01-13T20:17:58.839666742Z" level=info msg="Ensure that sandbox 918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985 in task-service has been cleanup successfully"
Jan 13 20:17:58.840147 systemd[1]: run-netns-cni\x2da829134f\x2d1d69\x2dd0ac\x2d91f8\x2d02a90d17b120.mount: Deactivated successfully.
Jan 13 20:17:58.841599 containerd[1473]: time="2025-01-13T20:17:58.841571948Z" level=info msg="StopPodSandbox for \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\""
Jan 13 20:17:58.842189 containerd[1473]: time="2025-01-13T20:17:58.842064263Z" level=info msg="Ensure that sandbox ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59 in task-service has been cleanup successfully"
Jan 13 20:17:58.842931 systemd[1]: run-netns-cni\x2d0f8389c4\x2da186\x2deea2\x2df100\x2d766212f9fe02.mount: Deactivated successfully.
Jan 13 20:17:58.844260 containerd[1473]: time="2025-01-13T20:17:58.841705013Z" level=info msg="TearDown network for sandbox \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\" successfully"
Jan 13 20:17:58.844260 containerd[1473]: time="2025-01-13T20:17:58.844074665Z" level=info msg="StopPodSandbox for \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\" returns successfully"
Jan 13 20:17:58.844395 containerd[1473]: time="2025-01-13T20:17:58.844372781Z" level=info msg="TearDown network for sandbox \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\" successfully"
Jan 13 20:17:58.844395 containerd[1473]: time="2025-01-13T20:17:58.844392333Z" level=info msg="StopPodSandbox for \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\" returns successfully"
Jan 13 20:17:58.844680 containerd[1473]: time="2025-01-13T20:17:58.844461104Z" level=info msg="StopPodSandbox for \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\""
Jan 13 20:17:58.844835 containerd[1473]: time="2025-01-13T20:17:58.844663260Z" level=info msg="StopPodSandbox for \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\""
Jan 13 20:17:58.844835 containerd[1473]: time="2025-01-13T20:17:58.844810479Z" level=info msg="TearDown network for sandbox \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\" successfully"
Jan 13 20:17:58.844835 containerd[1473]: time="2025-01-13T20:17:58.844819315Z" level=info msg="StopPodSandbox for \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\" returns successfully"
Jan 13 20:17:58.844991 containerd[1473]: time="2025-01-13T20:17:58.844960336Z" level=info msg="TearDown network for sandbox \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\" successfully"
Jan 13 20:17:58.844991 containerd[1473]: time="2025-01-13T20:17:58.844977129Z" level=info msg="StopPodSandbox for \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\" returns successfully"
Jan 13 20:17:58.845045 containerd[1473]: time="2025-01-13T20:17:58.845016473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xdxbv,Uid:77a581ef-9403-4f14-b6fa-ada6a2a597bc,Namespace:default,Attempt:1,}"
Jan 13 20:17:58.845212 systemd[1]: run-netns-cni\x2d86c1ec06\x2d9e01\x2d6483\x2d2cd7\x2de32e768a49db.mount: Deactivated successfully.
Jan 13 20:17:58.845672 containerd[1473]: time="2025-01-13T20:17:58.845276524Z" level=info msg="StopPodSandbox for \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\""
Jan 13 20:17:58.845672 containerd[1473]: time="2025-01-13T20:17:58.845414467Z" level=info msg="StopPodSandbox for \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\""
Jan 13 20:17:58.845672 containerd[1473]: time="2025-01-13T20:17:58.845505669Z" level=info msg="TearDown network for sandbox \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\" successfully"
Jan 13 20:17:58.845672 containerd[1473]: time="2025-01-13T20:17:58.845523581Z" level=info msg="StopPodSandbox for \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\" returns successfully"
Jan 13 20:17:58.845672 containerd[1473]: time="2025-01-13T20:17:58.845418545Z" level=info msg="TearDown network for sandbox \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\" successfully"
Jan 13 20:17:58.845672 containerd[1473]: time="2025-01-13T20:17:58.845597751Z" level=info msg="StopPodSandbox for \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\" returns successfully"
Jan 13 20:17:58.845842 kubelet[1780]: E0113 20:17:58.845707    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:58.846443 containerd[1473]: time="2025-01-13T20:17:58.845965277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:4,}"
Jan 13 20:17:58.846443 containerd[1473]: time="2025-01-13T20:17:58.846118014Z" level=info msg="StopPodSandbox for \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\""
Jan 13 20:17:58.846443 containerd[1473]: time="2025-01-13T20:17:58.846205697Z" level=info msg="TearDown network for sandbox \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\" successfully"
Jan 13 20:17:58.846443 containerd[1473]: time="2025-01-13T20:17:58.846416769Z" level=info msg="StopPodSandbox for \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\" returns successfully"
Jan 13 20:17:58.847028 kubelet[1780]: E0113 20:17:58.846816    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:58.847483 containerd[1473]: time="2025-01-13T20:17:58.847447979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:4,}"
Jan 13 20:17:58.847830 kubelet[1780]: I0113 20:17:58.847809    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9"
Jan 13 20:17:58.849043 containerd[1473]: time="2025-01-13T20:17:58.849010488Z" level=info msg="StopPodSandbox for \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\""
Jan 13 20:17:58.849310 containerd[1473]: time="2025-01-13T20:17:58.849169662Z" level=info msg="Ensure that sandbox a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9 in task-service has been cleanup successfully"
Jan 13 20:17:58.849389 containerd[1473]: time="2025-01-13T20:17:58.849318840Z" level=info msg="TearDown network for sandbox \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\" successfully"
Jan 13 20:17:58.849389 containerd[1473]: time="2025-01-13T20:17:58.849342070Z" level=info msg="StopPodSandbox for \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\" returns successfully"
Jan 13 20:17:58.849804 containerd[1473]: time="2025-01-13T20:17:58.849778408Z" level=info msg="StopPodSandbox for \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\""
Jan 13 20:17:58.849891 containerd[1473]: time="2025-01-13T20:17:58.849851058Z" level=info msg="TearDown network for sandbox \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\" successfully"
Jan 13 20:17:58.849891 containerd[1473]: time="2025-01-13T20:17:58.849861933Z" level=info msg="StopPodSandbox for \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\" returns successfully"
Jan 13 20:17:58.850294 containerd[1473]: time="2025-01-13T20:17:58.850161768Z" level=info msg="StopPodSandbox for \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\""
Jan 13 20:17:58.850294 containerd[1473]: time="2025-01-13T20:17:58.850238616Z" level=info msg="TearDown network for sandbox \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\" successfully"
Jan 13 20:17:58.850294 containerd[1473]: time="2025-01-13T20:17:58.850248452Z" level=info msg="StopPodSandbox for \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\" returns successfully"
Jan 13 20:17:58.852018 containerd[1473]: time="2025-01-13T20:17:58.851971534Z" level=info msg="StopPodSandbox for \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\""
Jan 13 20:17:58.852266 containerd[1473]: time="2025-01-13T20:17:58.852188244Z" level=info msg="TearDown network for sandbox \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\" successfully"
Jan 13 20:17:58.852266 containerd[1473]: time="2025-01-13T20:17:58.852205317Z" level=info msg="StopPodSandbox for \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\" returns successfully"
Jan 13 20:17:58.853965 containerd[1473]: time="2025-01-13T20:17:58.853901850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:4,}"
Jan 13 20:17:58.855827 kubelet[1780]: I0113 20:17:58.855548    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932"
Jan 13 20:17:58.856366 containerd[1473]: time="2025-01-13T20:17:58.856320122Z" level=info msg="StopPodSandbox for \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\""
Jan 13 20:17:58.856623 containerd[1473]: time="2025-01-13T20:17:58.856601404Z" level=info msg="Ensure that sandbox dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932 in task-service has been cleanup successfully"
Jan 13 20:17:58.856863 containerd[1473]: time="2025-01-13T20:17:58.856843264Z" level=info msg="TearDown network for sandbox \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\" successfully"
Jan 13 20:17:58.856937 containerd[1473]: time="2025-01-13T20:17:58.856923230Z" level=info msg="StopPodSandbox for \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\" returns successfully"
Jan 13 20:17:58.858107 containerd[1473]: time="2025-01-13T20:17:58.858080828Z" level=info msg="StopPodSandbox for \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\""
Jan 13 20:17:58.858169 containerd[1473]: time="2025-01-13T20:17:58.858158595Z" level=info msg="TearDown network for sandbox \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\" successfully"
Jan 13 20:17:58.858205 containerd[1473]: time="2025-01-13T20:17:58.858169271Z" level=info msg="StopPodSandbox for \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\" returns successfully"
Jan 13 20:17:58.859045 containerd[1473]: time="2025-01-13T20:17:58.859017677Z" level=info msg="StopPodSandbox for \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\""
Jan 13 20:17:58.860173 containerd[1473]: time="2025-01-13T20:17:58.860148006Z" level=info msg="TearDown network for sandbox \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\" successfully"
Jan 13 20:17:58.860253 containerd[1473]: time="2025-01-13T20:17:58.860238369Z" level=info msg="StopPodSandbox for \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\" returns successfully"
Jan 13 20:17:58.860636 kubelet[1780]: I0113 20:17:58.860599    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609"
Jan 13 20:17:58.860761 containerd[1473]: time="2025-01-13T20:17:58.860729564Z" level=info msg="StopPodSandbox for \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\""
Jan 13 20:17:58.860975 containerd[1473]: time="2025-01-13T20:17:58.860811650Z" level=info msg="TearDown network for sandbox \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\" successfully"
Jan 13 20:17:58.860975 containerd[1473]: time="2025-01-13T20:17:58.860828123Z" level=info msg="StopPodSandbox for \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\" returns successfully"
Jan 13 20:17:58.861184 containerd[1473]: time="2025-01-13T20:17:58.861148949Z" level=info msg="StopPodSandbox for \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\""
Jan 13 20:17:58.861320 containerd[1473]: time="2025-01-13T20:17:58.861298207Z" level=info msg="Ensure that sandbox 25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609 in task-service has been cleanup successfully"
Jan 13 20:17:58.861834 containerd[1473]: time="2025-01-13T20:17:58.861797399Z" level=info msg="TearDown network for sandbox \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\" successfully"
Jan 13 20:17:58.861834 containerd[1473]: time="2025-01-13T20:17:58.861825067Z" level=info msg="StopPodSandbox for \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\" returns successfully"
Jan 13 20:17:58.862086 containerd[1473]: time="2025-01-13T20:17:58.862044336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:4,}"
Jan 13 20:17:58.862572 containerd[1473]: time="2025-01-13T20:17:58.862547286Z" level=info msg="StopPodSandbox for \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\""
Jan 13 20:17:58.862654 containerd[1473]: time="2025-01-13T20:17:58.862637329Z" level=info msg="TearDown network for sandbox \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\" successfully"
Jan 13 20:17:58.862681 containerd[1473]: time="2025-01-13T20:17:58.862661279Z" level=info msg="StopPodSandbox for \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\" returns successfully"
Jan 13 20:17:58.862992 containerd[1473]: time="2025-01-13T20:17:58.862964193Z" level=info msg="StopPodSandbox for \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\""
Jan 13 20:17:58.863062 containerd[1473]: time="2025-01-13T20:17:58.863047718Z" level=info msg="TearDown network for sandbox \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\" successfully"
Jan 13 20:17:58.863093 containerd[1473]: time="2025-01-13T20:17:58.863061072Z" level=info msg="StopPodSandbox for \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\" returns successfully"
Jan 13 20:17:58.863576 containerd[1473]: time="2025-01-13T20:17:58.863515763Z" level=info msg="StopPodSandbox for \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\""
Jan 13 20:17:58.863621 containerd[1473]: time="2025-01-13T20:17:58.863596609Z" level=info msg="TearDown network for sandbox \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\" successfully"
Jan 13 20:17:58.863621 containerd[1473]: time="2025-01-13T20:17:58.863606525Z" level=info msg="StopPodSandbox for \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\" returns successfully"
Jan 13 20:17:58.864611 containerd[1473]: time="2025-01-13T20:17:58.864162813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:4,}"
Jan 13 20:17:58.950464 containerd[1473]: time="2025-01-13T20:17:58.950285959Z" level=error msg="Failed to destroy network for sandbox \"64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.953164 containerd[1473]: time="2025-01-13T20:17:58.953031455Z" level=error msg="encountered an error cleaning up failed sandbox \"64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.953164 containerd[1473]: time="2025-01-13T20:17:58.953133932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4sxhx,Uid:9744204a-04ef-4999-88e2-3d074458261a,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.953708 kubelet[1780]: E0113 20:17:58.953644    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.953772 kubelet[1780]: E0113 20:17:58.953716    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4sxhx"
Jan 13 20:17:58.953772 kubelet[1780]: E0113 20:17:58.953738    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4sxhx"
Jan 13 20:17:58.953833 kubelet[1780]: E0113 20:17:58.953775    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4sxhx_calico-system(9744204a-04ef-4999-88e2-3d074458261a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4sxhx_calico-system(9744204a-04ef-4999-88e2-3d074458261a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4sxhx" podUID="9744204a-04ef-4999-88e2-3d074458261a"
Jan 13 20:17:58.967431 containerd[1473]: time="2025-01-13T20:17:58.967377515Z" level=error msg="Failed to destroy network for sandbox \"3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.968219 containerd[1473]: time="2025-01-13T20:17:58.967768472Z" level=error msg="encountered an error cleaning up failed sandbox \"3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.968219 containerd[1473]: time="2025-01-13T20:17:58.967836684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.968343 kubelet[1780]: E0113 20:17:58.968027    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.968343 kubelet[1780]: E0113 20:17:58.968078    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:17:58.968343 kubelet[1780]: E0113 20:17:58.968097    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:17:58.968423 kubelet[1780]: E0113 20:17:58.968142    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x5ccf_kube-system(6589cd90-9a49-4dc8-928c-d4bfb9fedb8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x5ccf_kube-system(6589cd90-9a49-4dc8-928c-d4bfb9fedb8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x5ccf" podUID="6589cd90-9a49-4dc8-928c-d4bfb9fedb8d"
Jan 13 20:17:58.983504 containerd[1473]: time="2025-01-13T20:17:58.983444579Z" level=error msg="Failed to destroy network for sandbox \"707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.983833 containerd[1473]: time="2025-01-13T20:17:58.983800751Z" level=error msg="encountered an error cleaning up failed sandbox \"707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.983890 containerd[1473]: time="2025-01-13T20:17:58.983868322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xdxbv,Uid:77a581ef-9403-4f14-b6fa-ada6a2a597bc,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.984468 kubelet[1780]: E0113 20:17:58.984073    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.984468 kubelet[1780]: E0113 20:17:58.984135    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xdxbv"
Jan 13 20:17:58.984468 kubelet[1780]: E0113 20:17:58.984158    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xdxbv"
Jan 13 20:17:58.984599 kubelet[1780]: E0113 20:17:58.984201    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-xdxbv_default(77a581ef-9403-4f14-b6fa-ada6a2a597bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-xdxbv_default(77a581ef-9403-4f14-b6fa-ada6a2a597bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-xdxbv" podUID="77a581ef-9403-4f14-b6fa-ada6a2a597bc"
Jan 13 20:17:58.987491 containerd[1473]: time="2025-01-13T20:17:58.987443552Z" level=error msg="Failed to destroy network for sandbox \"7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.989041 containerd[1473]: time="2025-01-13T20:17:58.988993626Z" level=error msg="encountered an error cleaning up failed sandbox \"7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.989295 containerd[1473]: time="2025-01-13T20:17:58.989237804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.989730 kubelet[1780]: E0113 20:17:58.989673    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.989819 kubelet[1780]: E0113 20:17:58.989739    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:17:58.989819 kubelet[1780]: E0113 20:17:58.989758    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:17:58.989819 kubelet[1780]: E0113 20:17:58.989798    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7h9bs_kube-system(b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7h9bs_kube-system(b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7h9bs" podUID="b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7"
Jan 13 20:17:58.990820 containerd[1473]: time="2025-01-13T20:17:58.990780482Z" level=error msg="Failed to destroy network for sandbox \"815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.991887 containerd[1473]: time="2025-01-13T20:17:58.991845798Z" level=error msg="encountered an error cleaning up failed sandbox \"815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.991950 containerd[1473]: time="2025-01-13T20:17:58.991923605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.992279 kubelet[1780]: E0113 20:17:58.992104    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.992279 kubelet[1780]: E0113 20:17:58.992148    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:17:58.992279 kubelet[1780]: E0113 20:17:58.992165    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:17:58.992497 kubelet[1780]: E0113 20:17:58.992206    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f94864fc5-tqbwc_calico-system(ee272026-1034-439f-966c-6692ba8e0711)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f94864fc5-tqbwc_calico-system(ee272026-1034-439f-966c-6692ba8e0711)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc" podUID="ee272026-1034-439f-966c-6692ba8e0711"
Jan 13 20:17:59.003408 containerd[1473]: time="2025-01-13T20:17:59.003352974Z" level=error msg="Failed to destroy network for sandbox \"78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:59.003802 containerd[1473]: time="2025-01-13T20:17:59.003714873Z" level=error msg="encountered an error cleaning up failed sandbox \"78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:59.003802 containerd[1473]: time="2025-01-13T20:17:59.003778728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:59.004088 kubelet[1780]: E0113 20:17:59.003982    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:59.004088 kubelet[1780]: E0113 20:17:59.004052    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:17:59.004088 kubelet[1780]: E0113 20:17:59.004072    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:17:59.004191 kubelet[1780]: E0113 20:17:59.004115    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cccdfc4-gpjkp_calico-apiserver(cdcf8404-fe95-41b4-a33a-7e988931f7a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cccdfc4-gpjkp_calico-apiserver(cdcf8404-fe95-41b4-a33a-7e988931f7a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp" podUID="cdcf8404-fe95-41b4-a33a-7e988931f7a3"
Jan 13 20:17:59.005757 containerd[1473]: time="2025-01-13T20:17:59.005596898Z" level=error msg="Failed to destroy network for sandbox \"5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:59.005990 containerd[1473]: time="2025-01-13T20:17:59.005893222Z" level=error msg="encountered an error cleaning up failed sandbox \"5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:59.005990 containerd[1473]: time="2025-01-13T20:17:59.005946081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:59.006154 kubelet[1780]: E0113 20:17:59.006121    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:59.006218 kubelet[1780]: E0113 20:17:59.006168    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:17:59.006218 kubelet[1780]: E0113 20:17:59.006185    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:17:59.006263 kubelet[1780]: E0113 20:17:59.006232    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cccdfc4-mpttr_calico-apiserver(10648835-fde4-414e-8a1f-6abaa02ccacc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cccdfc4-mpttr_calico-apiserver(10648835-fde4-414e-8a1f-6abaa02ccacc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr" podUID="10648835-fde4-414e-8a1f-6abaa02ccacc"
Jan 13 20:17:59.621460 systemd[1]: run-netns-cni\x2dbb0da877\x2d9686\x2d4a86\x2d262d\x2d17581dbd4e5d.mount: Deactivated successfully.
Jan 13 20:17:59.621566 systemd[1]: run-netns-cni\x2d10663456\x2d8a0e\x2d7172\x2dc5e0\x2d4b5670c35725.mount: Deactivated successfully.
Jan 13 20:17:59.621613 systemd[1]: run-netns-cni\x2d54d68c66\x2d07ae\x2dc2c8\x2d76aa\x2dfc0bacd4d9ae.mount: Deactivated successfully.
Jan 13 20:17:59.636583 kubelet[1780]: E0113 20:17:59.636314    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:17:59.866443 kubelet[1780]: I0113 20:17:59.865831    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84"
Jan 13 20:17:59.867197 containerd[1473]: time="2025-01-13T20:17:59.866910006Z" level=info msg="StopPodSandbox for \"7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84\""
Jan 13 20:17:59.867197 containerd[1473]: time="2025-01-13T20:17:59.867081699Z" level=info msg="Ensure that sandbox 7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84 in task-service has been cleanup successfully"
Jan 13 20:17:59.867843 containerd[1473]: time="2025-01-13T20:17:59.867723808Z" level=info msg="TearDown network for sandbox \"7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84\" successfully"
Jan 13 20:17:59.867843 containerd[1473]: time="2025-01-13T20:17:59.867746239Z" level=info msg="StopPodSandbox for \"7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84\" returns successfully"
Jan 13 20:17:59.869736 containerd[1473]: time="2025-01-13T20:17:59.868105579Z" level=info msg="StopPodSandbox for \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\""
Jan 13 20:17:59.869736 containerd[1473]: time="2025-01-13T20:17:59.868240926Z" level=info msg="TearDown network for sandbox \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\" successfully"
Jan 13 20:17:59.869736 containerd[1473]: time="2025-01-13T20:17:59.868252961Z" level=info msg="StopPodSandbox for \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\" returns successfully"
Jan 13 20:17:59.869736 containerd[1473]: time="2025-01-13T20:17:59.868808104Z" level=info msg="StopPodSandbox for \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\""
Jan 13 20:17:59.869736 containerd[1473]: time="2025-01-13T20:17:59.868881396Z" level=info msg="TearDown network for sandbox \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\" successfully"
Jan 13 20:17:59.869736 containerd[1473]: time="2025-01-13T20:17:59.868895150Z" level=info msg="StopPodSandbox for \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\" returns successfully"
Jan 13 20:17:59.869736 containerd[1473]: time="2025-01-13T20:17:59.869238056Z" level=info msg="StopPodSandbox for \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\""
Jan 13 20:17:59.869736 containerd[1473]: time="2025-01-13T20:17:59.869346894Z" level=info msg="TearDown network for sandbox \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\" successfully"
Jan 13 20:17:59.869736 containerd[1473]: time="2025-01-13T20:17:59.869359329Z" level=info msg="StopPodSandbox for \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\" returns successfully"
Jan 13 20:17:59.869736 containerd[1473]: time="2025-01-13T20:17:59.869705314Z" level=info msg="StopPodSandbox for \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\""
Jan 13 20:17:59.869560 systemd[1]: run-netns-cni\x2dc1b5b045\x2d5647\x2d00bb\x2d2b7a\x2d00142a809a75.mount: Deactivated successfully.
Jan 13 20:17:59.870122 containerd[1473]: time="2025-01-13T20:17:59.869776766Z" level=info msg="TearDown network for sandbox \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\" successfully"
Jan 13 20:17:59.870122 containerd[1473]: time="2025-01-13T20:17:59.869785722Z" level=info msg="StopPodSandbox for \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\" returns successfully"
Jan 13 20:17:59.870164 kubelet[1780]: E0113 20:17:59.870052    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:59.870622 kubelet[1780]: I0113 20:17:59.870596    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c"
Jan 13 20:17:59.871469 containerd[1473]: time="2025-01-13T20:17:59.871013403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:5,}"
Jan 13 20:17:59.871469 containerd[1473]: time="2025-01-13T20:17:59.871017201Z" level=info msg="StopPodSandbox for \"3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c\""
Jan 13 20:17:59.871469 containerd[1473]: time="2025-01-13T20:17:59.871336316Z" level=info msg="Ensure that sandbox 3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c in task-service has been cleanup successfully"
Jan 13 20:17:59.871813 containerd[1473]: time="2025-01-13T20:17:59.871630801Z" level=info msg="TearDown network for sandbox \"3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c\" successfully"
Jan 13 20:17:59.871813 containerd[1473]: time="2025-01-13T20:17:59.871651273Z" level=info msg="StopPodSandbox for \"3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c\" returns successfully"
Jan 13 20:17:59.873419 containerd[1473]: time="2025-01-13T20:17:59.872187904Z" level=info msg="StopPodSandbox for \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\""
Jan 13 20:17:59.873419 containerd[1473]: time="2025-01-13T20:17:59.872271951Z" level=info msg="TearDown network for sandbox \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\" successfully"
Jan 13 20:17:59.873419 containerd[1473]: time="2025-01-13T20:17:59.872295022Z" level=info msg="StopPodSandbox for \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\" returns successfully"
Jan 13 20:17:59.873419 containerd[1473]: time="2025-01-13T20:17:59.872775914Z" level=info msg="StopPodSandbox for \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\""
Jan 13 20:17:59.873419 containerd[1473]: time="2025-01-13T20:17:59.872847006Z" level=info msg="TearDown network for sandbox \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\" successfully"
Jan 13 20:17:59.873419 containerd[1473]: time="2025-01-13T20:17:59.872855723Z" level=info msg="StopPodSandbox for \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\" returns successfully"
Jan 13 20:17:59.873419 containerd[1473]: time="2025-01-13T20:17:59.873575721Z" level=info msg="StopPodSandbox for \"707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9\""
Jan 13 20:17:59.873419 containerd[1473]: time="2025-01-13T20:17:59.873708789Z" level=info msg="Ensure that sandbox 707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9 in task-service has been cleanup successfully"
Jan 13 20:17:59.873419 containerd[1473]: time="2025-01-13T20:17:59.873590756Z" level=info msg="StopPodSandbox for \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\""
Jan 13 20:17:59.873419 containerd[1473]: time="2025-01-13T20:17:59.873864009Z" level=info msg="TearDown network for sandbox \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\" successfully"
Jan 13 20:17:59.873419 containerd[1473]: time="2025-01-13T20:17:59.873873925Z" level=info msg="StopPodSandbox for \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\" returns successfully"
Jan 13 20:17:59.874852 kubelet[1780]: I0113 20:17:59.873177    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9"
Jan 13 20:17:59.874448 systemd[1]: run-netns-cni\x2d964e2e04\x2dcc50\x2d2a51\x2db31a\x2d7064e358f120.mount: Deactivated successfully.
Jan 13 20:17:59.876199 containerd[1473]: time="2025-01-13T20:17:59.875824123Z" level=info msg="StopPodSandbox for \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\""
Jan 13 20:17:59.876199 containerd[1473]: time="2025-01-13T20:17:59.875916287Z" level=info msg="TearDown network for sandbox \"707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9\" successfully"
Jan 13 20:17:59.876199 containerd[1473]: time="2025-01-13T20:17:59.875927203Z" level=info msg="StopPodSandbox for \"707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9\" returns successfully"
Jan 13 20:17:59.876199 containerd[1473]: time="2025-01-13T20:17:59.875998295Z" level=info msg="TearDown network for sandbox \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\" successfully"
Jan 13 20:17:59.876199 containerd[1473]: time="2025-01-13T20:17:59.876010410Z" level=info msg="StopPodSandbox for \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\" returns successfully"
Jan 13 20:17:59.876344 containerd[1473]: time="2025-01-13T20:17:59.876303496Z" level=info msg="StopPodSandbox for \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\""
Jan 13 20:17:59.876385 kubelet[1780]: E0113 20:17:59.876213    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:59.876413 containerd[1473]: time="2025-01-13T20:17:59.876377587Z" level=info msg="TearDown network for sandbox \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\" successfully"
Jan 13 20:17:59.876413 containerd[1473]: time="2025-01-13T20:17:59.876388103Z" level=info msg="StopPodSandbox for \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\" returns successfully"
Jan 13 20:17:59.877726 containerd[1473]: time="2025-01-13T20:17:59.876544961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:5,}"
Jan 13 20:17:59.877726 containerd[1473]: time="2025-01-13T20:17:59.877195947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xdxbv,Uid:77a581ef-9403-4f14-b6fa-ada6a2a597bc,Namespace:default,Attempt:2,}"
Jan 13 20:17:59.877578 systemd[1]: run-netns-cni\x2d60f46857\x2d7703\x2dbdef\x2d4266\x2d727eeeea51ec.mount: Deactivated successfully.
Jan 13 20:17:59.877857 kubelet[1780]: I0113 20:17:59.877746    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3"
Jan 13 20:17:59.878234 containerd[1473]: time="2025-01-13T20:17:59.878198035Z" level=info msg="StopPodSandbox for \"64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3\""
Jan 13 20:17:59.880344 containerd[1473]: time="2025-01-13T20:17:59.878387401Z" level=info msg="Ensure that sandbox 64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3 in task-service has been cleanup successfully"
Jan 13 20:17:59.880344 containerd[1473]: time="2025-01-13T20:17:59.879970383Z" level=info msg="TearDown network for sandbox \"64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3\" successfully"
Jan 13 20:17:59.880344 containerd[1473]: time="2025-01-13T20:17:59.879998052Z" level=info msg="StopPodSandbox for \"64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3\" returns successfully"
Jan 13 20:17:59.880215 systemd[1]: run-netns-cni\x2df4577bcb\x2d9920\x2df755\x2da3d6\x2dbc0b62ca819f.mount: Deactivated successfully.
Jan 13 20:17:59.880497 containerd[1473]: time="2025-01-13T20:17:59.880441319Z" level=info msg="StopPodSandbox for \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\""
Jan 13 20:17:59.880611 containerd[1473]: time="2025-01-13T20:17:59.880511931Z" level=info msg="TearDown network for sandbox \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\" successfully"
Jan 13 20:17:59.880611 containerd[1473]: time="2025-01-13T20:17:59.880533083Z" level=info msg="StopPodSandbox for \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\" returns successfully"
Jan 13 20:17:59.880895 containerd[1473]: time="2025-01-13T20:17:59.880836445Z" level=info msg="StopPodSandbox for \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\""
Jan 13 20:17:59.880946 containerd[1473]: time="2025-01-13T20:17:59.880906457Z" level=info msg="TearDown network for sandbox \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\" successfully"
Jan 13 20:17:59.880946 containerd[1473]: time="2025-01-13T20:17:59.880917373Z" level=info msg="StopPodSandbox for \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\" returns successfully"
Jan 13 20:17:59.881331 containerd[1473]: time="2025-01-13T20:17:59.881242366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4sxhx,Uid:9744204a-04ef-4999-88e2-3d074458261a,Namespace:calico-system,Attempt:3,}"
Jan 13 20:17:59.881955 kubelet[1780]: I0113 20:17:59.881909    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e"
Jan 13 20:17:59.882459 containerd[1473]: time="2025-01-13T20:17:59.882433421Z" level=info msg="StopPodSandbox for \"5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e\""
Jan 13 20:17:59.882848 containerd[1473]: time="2025-01-13T20:17:59.882764571Z" level=info msg="Ensure that sandbox 5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e in task-service has been cleanup successfully"
Jan 13 20:17:59.883199 containerd[1473]: time="2025-01-13T20:17:59.883178849Z" level=info msg="TearDown network for sandbox \"5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e\" successfully"
Jan 13 20:17:59.883232 containerd[1473]: time="2025-01-13T20:17:59.883200401Z" level=info msg="StopPodSandbox for \"5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e\" returns successfully"
Jan 13 20:17:59.883616 containerd[1473]: time="2025-01-13T20:17:59.883542067Z" level=info msg="StopPodSandbox for \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\""
Jan 13 20:17:59.883845 containerd[1473]: time="2025-01-13T20:17:59.883827396Z" level=info msg="TearDown network for sandbox \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\" successfully"
Jan 13 20:17:59.883872 containerd[1473]: time="2025-01-13T20:17:59.883846868Z" level=info msg="StopPodSandbox for \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\" returns successfully"
Jan 13 20:17:59.884100 kubelet[1780]: I0113 20:17:59.884080    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c"
Jan 13 20:17:59.884320 containerd[1473]: time="2025-01-13T20:17:59.884295653Z" level=info msg="StopPodSandbox for \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\""
Jan 13 20:17:59.884505 containerd[1473]: time="2025-01-13T20:17:59.884482140Z" level=info msg="StopPodSandbox for \"78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c\""
Jan 13 20:17:59.884552 containerd[1473]: time="2025-01-13T20:17:59.884538798Z" level=info msg="TearDown network for sandbox \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\" successfully"
Jan 13 20:17:59.884577 containerd[1473]: time="2025-01-13T20:17:59.884554392Z" level=info msg="StopPodSandbox for \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\" returns successfully"
Jan 13 20:17:59.884635 containerd[1473]: time="2025-01-13T20:17:59.884617127Z" level=info msg="Ensure that sandbox 78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c in task-service has been cleanup successfully"
Jan 13 20:17:59.884773 containerd[1473]: time="2025-01-13T20:17:59.884756033Z" level=info msg="TearDown network for sandbox \"78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c\" successfully"
Jan 13 20:17:59.884773 containerd[1473]: time="2025-01-13T20:17:59.884772107Z" level=info msg="StopPodSandbox for \"78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c\" returns successfully"
Jan 13 20:17:59.884848 containerd[1473]: time="2025-01-13T20:17:59.884829204Z" level=info msg="StopPodSandbox for \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\""
Jan 13 20:17:59.885069 containerd[1473]: time="2025-01-13T20:17:59.885039802Z" level=info msg="TearDown network for sandbox \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\" successfully"
Jan 13 20:17:59.885069 containerd[1473]: time="2025-01-13T20:17:59.885063713Z" level=info msg="StopPodSandbox for \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\" returns successfully"
Jan 13 20:17:59.885245 containerd[1473]: time="2025-01-13T20:17:59.885200579Z" level=info msg="StopPodSandbox for \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\""
Jan 13 20:17:59.885500 containerd[1473]: time="2025-01-13T20:17:59.885472193Z" level=info msg="StopPodSandbox for \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\""
Jan 13 20:17:59.885732 containerd[1473]: time="2025-01-13T20:17:59.885640488Z" level=info msg="TearDown network for sandbox \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\" successfully"
Jan 13 20:17:59.885732 containerd[1473]: time="2025-01-13T20:17:59.885670596Z" level=info msg="StopPodSandbox for \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\" returns successfully"
Jan 13 20:17:59.885798 containerd[1473]: time="2025-01-13T20:17:59.885727973Z" level=info msg="TearDown network for sandbox \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\" successfully"
Jan 13 20:17:59.885798 containerd[1473]: time="2025-01-13T20:17:59.885744847Z" level=info msg="StopPodSandbox for \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\" returns successfully"
Jan 13 20:17:59.886257 containerd[1473]: time="2025-01-13T20:17:59.886176438Z" level=info msg="StopPodSandbox for \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\""
Jan 13 20:17:59.886257 containerd[1473]: time="2025-01-13T20:17:59.886224699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:5,}"
Jan 13 20:17:59.886457 containerd[1473]: time="2025-01-13T20:17:59.886242132Z" level=info msg="TearDown network for sandbox \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\" successfully"
Jan 13 20:17:59.886457 containerd[1473]: time="2025-01-13T20:17:59.886456849Z" level=info msg="StopPodSandbox for \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\" returns successfully"
Jan 13 20:17:59.886830 containerd[1473]: time="2025-01-13T20:17:59.886802873Z" level=info msg="StopPodSandbox for \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\""
Jan 13 20:17:59.887044 kubelet[1780]: I0113 20:17:59.886925    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca"
Jan 13 20:17:59.887089 containerd[1473]: time="2025-01-13T20:17:59.886955294Z" level=info msg="TearDown network for sandbox \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\" successfully"
Jan 13 20:17:59.887089 containerd[1473]: time="2025-01-13T20:17:59.886980564Z" level=info msg="StopPodSandbox for \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\" returns successfully"
Jan 13 20:17:59.887355 containerd[1473]: time="2025-01-13T20:17:59.887311235Z" level=info msg="StopPodSandbox for \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\""
Jan 13 20:17:59.887419 containerd[1473]: time="2025-01-13T20:17:59.887404798Z" level=info msg="TearDown network for sandbox \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\" successfully"
Jan 13 20:17:59.887452 containerd[1473]: time="2025-01-13T20:17:59.887418873Z" level=info msg="StopPodSandbox for \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\" returns successfully"
Jan 13 20:17:59.887823 containerd[1473]: time="2025-01-13T20:17:59.887800084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:5,}"
Jan 13 20:17:59.888055 containerd[1473]: time="2025-01-13T20:17:59.887962980Z" level=info msg="StopPodSandbox for \"815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca\""
Jan 13 20:17:59.888127 containerd[1473]: time="2025-01-13T20:17:59.888104765Z" level=info msg="Ensure that sandbox 815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca in task-service has been cleanup successfully"
Jan 13 20:17:59.889420 containerd[1473]: time="2025-01-13T20:17:59.889390662Z" level=info msg="TearDown network for sandbox \"815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca\" successfully"
Jan 13 20:17:59.889463 containerd[1473]: time="2025-01-13T20:17:59.889423090Z" level=info msg="StopPodSandbox for \"815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca\" returns successfully"
Jan 13 20:17:59.890934 containerd[1473]: time="2025-01-13T20:17:59.890002303Z" level=info msg="StopPodSandbox for \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\""
Jan 13 20:17:59.890934 containerd[1473]: time="2025-01-13T20:17:59.890128374Z" level=info msg="TearDown network for sandbox \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\" successfully"
Jan 13 20:17:59.890934 containerd[1473]: time="2025-01-13T20:17:59.890299427Z" level=info msg="StopPodSandbox for \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\" returns successfully"
Jan 13 20:17:59.890934 containerd[1473]: time="2025-01-13T20:17:59.890823023Z" level=info msg="StopPodSandbox for \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\""
Jan 13 20:17:59.890934 containerd[1473]: time="2025-01-13T20:17:59.890922064Z" level=info msg="TearDown network for sandbox \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\" successfully"
Jan 13 20:17:59.890934 containerd[1473]: time="2025-01-13T20:17:59.890934339Z" level=info msg="StopPodSandbox for \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\" returns successfully"
Jan 13 20:17:59.891229 containerd[1473]: time="2025-01-13T20:17:59.891198476Z" level=info msg="StopPodSandbox for \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\""
Jan 13 20:17:59.891336 containerd[1473]: time="2025-01-13T20:17:59.891312511Z" level=info msg="TearDown network for sandbox \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\" successfully"
Jan 13 20:17:59.891406 containerd[1473]: time="2025-01-13T20:17:59.891346818Z" level=info msg="StopPodSandbox for \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\" returns successfully"
Jan 13 20:17:59.893470 containerd[1473]: time="2025-01-13T20:17:59.893439760Z" level=info msg="StopPodSandbox for \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\""
Jan 13 20:17:59.893823 containerd[1473]: time="2025-01-13T20:17:59.893793302Z" level=info msg="TearDown network for sandbox \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\" successfully"
Jan 13 20:17:59.893823 containerd[1473]: time="2025-01-13T20:17:59.893821891Z" level=info msg="StopPodSandbox for \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\" returns successfully"
Jan 13 20:17:59.894515 containerd[1473]: time="2025-01-13T20:17:59.894479514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:5,}"
Jan 13 20:18:00.153263 containerd[1473]: time="2025-01-13T20:18:00.153133186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:00.166523 containerd[1473]: time="2025-01-13T20:18:00.166466742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762"
Jan 13 20:18:00.177544 containerd[1473]: time="2025-01-13T20:18:00.174355252Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:00.198770 containerd[1473]: time="2025-01-13T20:18:00.198711771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:00.200908 containerd[1473]: time="2025-01-13T20:18:00.200845429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.421028664s"
Jan 13 20:18:00.200908 containerd[1473]: time="2025-01-13T20:18:00.200895371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\""
Jan 13 20:18:00.212078 containerd[1473]: time="2025-01-13T20:18:00.212034891Z" level=info msg="CreateContainer within sandbox \"d2e68df0d417917a66511af7aeef258df57c36d2995ba44b9a46edddb5ba1a73\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 13 20:18:00.236239 containerd[1473]: time="2025-01-13T20:18:00.235998393Z" level=info msg="CreateContainer within sandbox \"d2e68df0d417917a66511af7aeef258df57c36d2995ba44b9a46edddb5ba1a73\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bdd7a027d69ff21e456a12c30250b7b057dbb318d7fdc42666d7c8448157fe93\""
Jan 13 20:18:00.237288 containerd[1473]: time="2025-01-13T20:18:00.237239059Z" level=info msg="StartContainer for \"bdd7a027d69ff21e456a12c30250b7b057dbb318d7fdc42666d7c8448157fe93\""
Jan 13 20:18:00.255514 containerd[1473]: time="2025-01-13T20:18:00.255459745Z" level=error msg="Failed to destroy network for sandbox \"bd5b7797413b486ee58198901c40f14117455c7ccf6ca05eb0212f53fb72c68e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.255939 containerd[1473]: time="2025-01-13T20:18:00.255879871Z" level=error msg="encountered an error cleaning up failed sandbox \"bd5b7797413b486ee58198901c40f14117455c7ccf6ca05eb0212f53fb72c68e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.256033 containerd[1473]: time="2025-01-13T20:18:00.255959522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"bd5b7797413b486ee58198901c40f14117455c7ccf6ca05eb0212f53fb72c68e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.256305 kubelet[1780]: E0113 20:18:00.256269    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd5b7797413b486ee58198901c40f14117455c7ccf6ca05eb0212f53fb72c68e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.256407 kubelet[1780]: E0113 20:18:00.256380    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd5b7797413b486ee58198901c40f14117455c7ccf6ca05eb0212f53fb72c68e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:18:00.256450 kubelet[1780]: E0113 20:18:00.256415    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd5b7797413b486ee58198901c40f14117455c7ccf6ca05eb0212f53fb72c68e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7h9bs"
Jan 13 20:18:00.256490 kubelet[1780]: E0113 20:18:00.256462    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7h9bs_kube-system(b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7h9bs_kube-system(b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd5b7797413b486ee58198901c40f14117455c7ccf6ca05eb0212f53fb72c68e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7h9bs" podUID="b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7"
Jan 13 20:18:00.272616 containerd[1473]: time="2025-01-13T20:18:00.272553483Z" level=error msg="Failed to destroy network for sandbox \"c88290310c15a32700a586e467d533ae8961c89c695bedb7d27beea26ad88713\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.273987 containerd[1473]: time="2025-01-13T20:18:00.273950292Z" level=error msg="encountered an error cleaning up failed sandbox \"c88290310c15a32700a586e467d533ae8961c89c695bedb7d27beea26ad88713\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.274059 containerd[1473]: time="2025-01-13T20:18:00.274017347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xdxbv,Uid:77a581ef-9403-4f14-b6fa-ada6a2a597bc,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c88290310c15a32700a586e467d533ae8961c89c695bedb7d27beea26ad88713\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.274239 kubelet[1780]: E0113 20:18:00.274187    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c88290310c15a32700a586e467d533ae8961c89c695bedb7d27beea26ad88713\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.274910 kubelet[1780]: E0113 20:18:00.274248    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c88290310c15a32700a586e467d533ae8961c89c695bedb7d27beea26ad88713\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xdxbv"
Jan 13 20:18:00.274910 kubelet[1780]: E0113 20:18:00.274267    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c88290310c15a32700a586e467d533ae8961c89c695bedb7d27beea26ad88713\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xdxbv"
Jan 13 20:18:00.274910 kubelet[1780]: E0113 20:18:00.274311    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-xdxbv_default(77a581ef-9403-4f14-b6fa-ada6a2a597bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-xdxbv_default(77a581ef-9403-4f14-b6fa-ada6a2a597bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c88290310c15a32700a586e467d533ae8961c89c695bedb7d27beea26ad88713\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-xdxbv" podUID="77a581ef-9403-4f14-b6fa-ada6a2a597bc"
Jan 13 20:18:00.274616 systemd[1]: Started cri-containerd-bdd7a027d69ff21e456a12c30250b7b057dbb318d7fdc42666d7c8448157fe93.scope - libcontainer container bdd7a027d69ff21e456a12c30250b7b057dbb318d7fdc42666d7c8448157fe93.
Jan 13 20:18:00.275535 containerd[1473]: time="2025-01-13T20:18:00.275364254Z" level=error msg="Failed to destroy network for sandbox \"4e59a5cefc000ce28a78851efa869e15aac1215dd9bc2379bf1d0fba211dc609\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.275690 containerd[1473]: time="2025-01-13T20:18:00.275608244Z" level=error msg="encountered an error cleaning up failed sandbox \"4e59a5cefc000ce28a78851efa869e15aac1215dd9bc2379bf1d0fba211dc609\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.275690 containerd[1473]: time="2025-01-13T20:18:00.275674180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"4e59a5cefc000ce28a78851efa869e15aac1215dd9bc2379bf1d0fba211dc609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.275907 kubelet[1780]: E0113 20:18:00.275872    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e59a5cefc000ce28a78851efa869e15aac1215dd9bc2379bf1d0fba211dc609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.275948 kubelet[1780]: E0113 20:18:00.275916    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e59a5cefc000ce28a78851efa869e15aac1215dd9bc2379bf1d0fba211dc609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:18:00.275948 kubelet[1780]: E0113 20:18:00.275934    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e59a5cefc000ce28a78851efa869e15aac1215dd9bc2379bf1d0fba211dc609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr"
Jan 13 20:18:00.276023 kubelet[1780]: E0113 20:18:00.275968    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cccdfc4-mpttr_calico-apiserver(10648835-fde4-414e-8a1f-6abaa02ccacc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cccdfc4-mpttr_calico-apiserver(10648835-fde4-414e-8a1f-6abaa02ccacc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e59a5cefc000ce28a78851efa869e15aac1215dd9bc2379bf1d0fba211dc609\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr" podUID="10648835-fde4-414e-8a1f-6abaa02ccacc"
Jan 13 20:18:00.295748 containerd[1473]: time="2025-01-13T20:18:00.295659140Z" level=error msg="Failed to destroy network for sandbox \"a3baffe25059c85a8476c48e2dd2b13b1b06a24df69f4fd05434aba73dca287b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.296228 containerd[1473]: time="2025-01-13T20:18:00.296004294Z" level=error msg="encountered an error cleaning up failed sandbox \"a3baffe25059c85a8476c48e2dd2b13b1b06a24df69f4fd05434aba73dca287b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.296228 containerd[1473]: time="2025-01-13T20:18:00.296072748Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a3baffe25059c85a8476c48e2dd2b13b1b06a24df69f4fd05434aba73dca287b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.296396 kubelet[1780]: E0113 20:18:00.296280    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3baffe25059c85a8476c48e2dd2b13b1b06a24df69f4fd05434aba73dca287b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.296396 kubelet[1780]: E0113 20:18:00.296348    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3baffe25059c85a8476c48e2dd2b13b1b06a24df69f4fd05434aba73dca287b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:18:00.296396 kubelet[1780]: E0113 20:18:00.296369    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3baffe25059c85a8476c48e2dd2b13b1b06a24df69f4fd05434aba73dca287b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x5ccf"
Jan 13 20:18:00.296479 kubelet[1780]: E0113 20:18:00.296408    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x5ccf_kube-system(6589cd90-9a49-4dc8-928c-d4bfb9fedb8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x5ccf_kube-system(6589cd90-9a49-4dc8-928c-d4bfb9fedb8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3baffe25059c85a8476c48e2dd2b13b1b06a24df69f4fd05434aba73dca287b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x5ccf" podUID="6589cd90-9a49-4dc8-928c-d4bfb9fedb8d"
Jan 13 20:18:00.297287 containerd[1473]: time="2025-01-13T20:18:00.297161030Z" level=error msg="Failed to destroy network for sandbox \"6bd97f4c597ae8fd0431d0e0f3367719691bbea0bc480e90e559cb8a9a3e9263\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.297495 containerd[1473]: time="2025-01-13T20:18:00.297459201Z" level=error msg="encountered an error cleaning up failed sandbox \"6bd97f4c597ae8fd0431d0e0f3367719691bbea0bc480e90e559cb8a9a3e9263\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.297535 containerd[1473]: time="2025-01-13T20:18:00.297515540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6bd97f4c597ae8fd0431d0e0f3367719691bbea0bc480e90e559cb8a9a3e9263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.297780 kubelet[1780]: E0113 20:18:00.297665    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bd97f4c597ae8fd0431d0e0f3367719691bbea0bc480e90e559cb8a9a3e9263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.297780 kubelet[1780]: E0113 20:18:00.297710    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bd97f4c597ae8fd0431d0e0f3367719691bbea0bc480e90e559cb8a9a3e9263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:18:00.297780 kubelet[1780]: E0113 20:18:00.297726    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bd97f4c597ae8fd0431d0e0f3367719691bbea0bc480e90e559cb8a9a3e9263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc"
Jan 13 20:18:00.297888 kubelet[1780]: E0113 20:18:00.297761    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f94864fc5-tqbwc_calico-system(ee272026-1034-439f-966c-6692ba8e0711)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f94864fc5-tqbwc_calico-system(ee272026-1034-439f-966c-6692ba8e0711)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bd97f4c597ae8fd0431d0e0f3367719691bbea0bc480e90e559cb8a9a3e9263\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc" podUID="ee272026-1034-439f-966c-6692ba8e0711"
Jan 13 20:18:00.301340 containerd[1473]: time="2025-01-13T20:18:00.301292437Z" level=error msg="Failed to destroy network for sandbox \"6a8528ba1d7b420daa3902511398689313f537c74e24cf93d1ad5b918ec874f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.301668 containerd[1473]: time="2025-01-13T20:18:00.301639509Z" level=error msg="encountered an error cleaning up failed sandbox \"6a8528ba1d7b420daa3902511398689313f537c74e24cf93d1ad5b918ec874f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.302852 containerd[1473]: time="2025-01-13T20:18:00.302821396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4sxhx,Uid:9744204a-04ef-4999-88e2-3d074458261a,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6a8528ba1d7b420daa3902511398689313f537c74e24cf93d1ad5b918ec874f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.303177 kubelet[1780]: E0113 20:18:00.303118    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a8528ba1d7b420daa3902511398689313f537c74e24cf93d1ad5b918ec874f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.303231 kubelet[1780]: E0113 20:18:00.303199    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a8528ba1d7b420daa3902511398689313f537c74e24cf93d1ad5b918ec874f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4sxhx"
Jan 13 20:18:00.303231 kubelet[1780]: E0113 20:18:00.303216    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a8528ba1d7b420daa3902511398689313f537c74e24cf93d1ad5b918ec874f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4sxhx"
Jan 13 20:18:00.303310 kubelet[1780]: E0113 20:18:00.303257    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4sxhx_calico-system(9744204a-04ef-4999-88e2-3d074458261a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4sxhx_calico-system(9744204a-04ef-4999-88e2-3d074458261a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a8528ba1d7b420daa3902511398689313f537c74e24cf93d1ad5b918ec874f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4sxhx" podUID="9744204a-04ef-4999-88e2-3d074458261a"
Jan 13 20:18:00.305459 containerd[1473]: time="2025-01-13T20:18:00.305395973Z" level=error msg="Failed to destroy network for sandbox \"f0e7bdb20c8fe5032869c464fd555bd16a6bee6beb39744a2585a3a3c89bfe21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.306106 containerd[1473]: time="2025-01-13T20:18:00.305785831Z" level=error msg="encountered an error cleaning up failed sandbox \"f0e7bdb20c8fe5032869c464fd555bd16a6bee6beb39744a2585a3a3c89bfe21\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.306106 containerd[1473]: time="2025-01-13T20:18:00.305839451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"f0e7bdb20c8fe5032869c464fd555bd16a6bee6beb39744a2585a3a3c89bfe21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.306202 kubelet[1780]: E0113 20:18:00.306174    1780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0e7bdb20c8fe5032869c464fd555bd16a6bee6beb39744a2585a3a3c89bfe21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:18:00.306250 kubelet[1780]: E0113 20:18:00.306225    1780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0e7bdb20c8fe5032869c464fd555bd16a6bee6beb39744a2585a3a3c89bfe21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:18:00.306275 kubelet[1780]: E0113 20:18:00.306246    1780 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0e7bdb20c8fe5032869c464fd555bd16a6bee6beb39744a2585a3a3c89bfe21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp"
Jan 13 20:18:00.306321 kubelet[1780]: E0113 20:18:00.306277    1780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d9cccdfc4-gpjkp_calico-apiserver(cdcf8404-fe95-41b4-a33a-7e988931f7a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d9cccdfc4-gpjkp_calico-apiserver(cdcf8404-fe95-41b4-a33a-7e988931f7a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0e7bdb20c8fe5032869c464fd555bd16a6bee6beb39744a2585a3a3c89bfe21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp" podUID="cdcf8404-fe95-41b4-a33a-7e988931f7a3"
Jan 13 20:18:00.320373 containerd[1473]: time="2025-01-13T20:18:00.320284240Z" level=info msg="StartContainer for \"bdd7a027d69ff21e456a12c30250b7b057dbb318d7fdc42666d7c8448157fe93\" returns successfully"
Jan 13 20:18:00.514847 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 13 20:18:00.514998 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 13 20:18:00.622184 systemd[1]: run-netns-cni\x2d40edc302\x2def9b\x2d074e\x2d23d0\x2df783fb32d802.mount: Deactivated successfully.
Jan 13 20:18:00.622272 systemd[1]: run-netns-cni\x2d76534fe5\x2ddd9c\x2dbd2a\x2da99e\x2d1ca3ef854cc4.mount: Deactivated successfully.
Jan 13 20:18:00.622317 systemd[1]: run-netns-cni\x2d0b94d605\x2dc9d0\x2d1ebe\x2d438a\x2d0a0c3fe45e5f.mount: Deactivated successfully.
Jan 13 20:18:00.622376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4217521087.mount: Deactivated successfully.
Jan 13 20:18:00.636754 kubelet[1780]: E0113 20:18:00.636709    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:00.890969 kubelet[1780]: I0113 20:18:00.890634    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bd97f4c597ae8fd0431d0e0f3367719691bbea0bc480e90e559cb8a9a3e9263"
Jan 13 20:18:00.891603 containerd[1473]: time="2025-01-13T20:18:00.891350624Z" level=info msg="StopPodSandbox for \"6bd97f4c597ae8fd0431d0e0f3367719691bbea0bc480e90e559cb8a9a3e9263\""
Jan 13 20:18:00.891603 containerd[1473]: time="2025-01-13T20:18:00.891513364Z" level=info msg="Ensure that sandbox 6bd97f4c597ae8fd0431d0e0f3367719691bbea0bc480e90e559cb8a9a3e9263 in task-service has been cleanup successfully"
Jan 13 20:18:00.892504 containerd[1473]: time="2025-01-13T20:18:00.892454739Z" level=info msg="TearDown network for sandbox \"6bd97f4c597ae8fd0431d0e0f3367719691bbea0bc480e90e559cb8a9a3e9263\" successfully"
Jan 13 20:18:00.892504 containerd[1473]: time="2025-01-13T20:18:00.892477731Z" level=info msg="StopPodSandbox for \"6bd97f4c597ae8fd0431d0e0f3367719691bbea0bc480e90e559cb8a9a3e9263\" returns successfully"
Jan 13 20:18:00.893010 containerd[1473]: time="2025-01-13T20:18:00.892762187Z" level=info msg="StopPodSandbox for \"815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca\""
Jan 13 20:18:00.893010 containerd[1473]: time="2025-01-13T20:18:00.892844557Z" level=info msg="TearDown network for sandbox \"815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca\" successfully"
Jan 13 20:18:00.893010 containerd[1473]: time="2025-01-13T20:18:00.892853953Z" level=info msg="StopPodSandbox for \"815d9fe5e6d9cf502dd8bd5a0e703ee14f78c4af48da19502900b1d9f7c576ca\" returns successfully"
Jan 13 20:18:00.893746 systemd[1]: run-netns-cni\x2dc7d22042\x2d0fee\x2db77c\x2d4ea2\x2d989d02d0196a.mount: Deactivated successfully.
Jan 13 20:18:00.895229 containerd[1473]: time="2025-01-13T20:18:00.895065503Z" level=info msg="StopPodSandbox for \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\""
Jan 13 20:18:00.895229 containerd[1473]: time="2025-01-13T20:18:00.895155630Z" level=info msg="TearDown network for sandbox \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\" successfully"
Jan 13 20:18:00.895229 containerd[1473]: time="2025-01-13T20:18:00.895165866Z" level=info msg="StopPodSandbox for \"25a8f23d4d830f1edc2a03495d1d2f090c8ce24cb2e3737028ccc70f93540609\" returns successfully"
Jan 13 20:18:00.895568 containerd[1473]: time="2025-01-13T20:18:00.895541409Z" level=info msg="StopPodSandbox for \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\""
Jan 13 20:18:00.895638 containerd[1473]: time="2025-01-13T20:18:00.895627697Z" level=info msg="TearDown network for sandbox \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\" successfully"
Jan 13 20:18:00.895797 containerd[1473]: time="2025-01-13T20:18:00.895638053Z" level=info msg="StopPodSandbox for \"03fdabda2927447cabea1259ceb90b0604fbfe2c1552e32e43809c389be0e0a3\" returns successfully"
Jan 13 20:18:00.896126 containerd[1473]: time="2025-01-13T20:18:00.895974890Z" level=info msg="StopPodSandbox for \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\""
Jan 13 20:18:00.896126 containerd[1473]: time="2025-01-13T20:18:00.896055820Z" level=info msg="TearDown network for sandbox \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\" successfully"
Jan 13 20:18:00.896126 containerd[1473]: time="2025-01-13T20:18:00.896067216Z" level=info msg="StopPodSandbox for \"315d88e1b99b89f866f4e9ae4530603cd62a509331d6c36b628434c987315e2c\" returns successfully"
Jan 13 20:18:00.896499 kubelet[1780]: I0113 20:18:00.896370    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd5b7797413b486ee58198901c40f14117455c7ccf6ca05eb0212f53fb72c68e"
Jan 13 20:18:00.896559 containerd[1473]: time="2025-01-13T20:18:00.896480345Z" level=info msg="StopPodSandbox for \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\""
Jan 13 20:18:00.896559 containerd[1473]: time="2025-01-13T20:18:00.896547960Z" level=info msg="TearDown network for sandbox \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\" successfully"
Jan 13 20:18:00.896559 containerd[1473]: time="2025-01-13T20:18:00.896557437Z" level=info msg="StopPodSandbox for \"daf527e1da980ca98e691194f1474b128920430d45fd2f3e893168d0c14ee14b\" returns successfully"
Jan 13 20:18:00.897251 containerd[1473]: time="2025-01-13T20:18:00.896811783Z" level=info msg="StopPodSandbox for \"bd5b7797413b486ee58198901c40f14117455c7ccf6ca05eb0212f53fb72c68e\""
Jan 13 20:18:00.897251 containerd[1473]: time="2025-01-13T20:18:00.896952412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:6,}"
Jan 13 20:18:00.897251 containerd[1473]: time="2025-01-13T20:18:00.897012590Z" level=info msg="Ensure that sandbox bd5b7797413b486ee58198901c40f14117455c7ccf6ca05eb0212f53fb72c68e in task-service has been cleanup successfully"
Jan 13 20:18:00.897584 containerd[1473]: time="2025-01-13T20:18:00.897558510Z" level=info msg="TearDown network for sandbox \"bd5b7797413b486ee58198901c40f14117455c7ccf6ca05eb0212f53fb72c68e\" successfully"
Jan 13 20:18:00.897584 containerd[1473]: time="2025-01-13T20:18:00.897580902Z" level=info msg="StopPodSandbox for \"bd5b7797413b486ee58198901c40f14117455c7ccf6ca05eb0212f53fb72c68e\" returns successfully"
Jan 13 20:18:00.898585 containerd[1473]: time="2025-01-13T20:18:00.898467737Z" level=info msg="StopPodSandbox for \"7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84\""
Jan 13 20:18:00.898585 containerd[1473]: time="2025-01-13T20:18:00.898548547Z" level=info msg="TearDown network for sandbox \"7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84\" successfully"
Jan 13 20:18:00.898585 containerd[1473]: time="2025-01-13T20:18:00.898559343Z" level=info msg="StopPodSandbox for \"7a92a9c70a07382e4a5a5ba6869d687c69cba909a41503b73195d6e14ac06c84\" returns successfully"
Jan 13 20:18:00.899290 containerd[1473]: time="2025-01-13T20:18:00.899106703Z" level=info msg="StopPodSandbox for \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\""
Jan 13 20:18:00.899290 containerd[1473]: time="2025-01-13T20:18:00.899184554Z" level=info msg="TearDown network for sandbox \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\" successfully"
Jan 13 20:18:00.899290 containerd[1473]: time="2025-01-13T20:18:00.899195310Z" level=info msg="StopPodSandbox for \"918f1668165928b92210675b2dd8511f908d9d54f028e79f5ca7936e1424a985\" returns successfully"
Jan 13 20:18:00.900053 kubelet[1780]: E0113 20:18:00.900024    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:00.900298 systemd[1]: run-netns-cni\x2d5c7cba70\x2db856\x2da034\x2d501b\x2de2c900bd255b.mount: Deactivated successfully.
Jan 13 20:18:00.900867 containerd[1473]: time="2025-01-13T20:18:00.900777971Z" level=info msg="StopPodSandbox for \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\""
Jan 13 20:18:00.900936 containerd[1473]: time="2025-01-13T20:18:00.900858781Z" level=info msg="TearDown network for sandbox \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\" successfully"
Jan 13 20:18:00.900936 containerd[1473]: time="2025-01-13T20:18:00.900880413Z" level=info msg="StopPodSandbox for \"4ec260d75f72690ae599788db77380de2451a941ca7c56df90d06f831cdd1b46\" returns successfully"
Jan 13 20:18:00.901427 containerd[1473]: time="2025-01-13T20:18:00.901400902Z" level=info msg="StopPodSandbox for \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\""
Jan 13 20:18:00.901550 containerd[1473]: time="2025-01-13T20:18:00.901484232Z" level=info msg="TearDown network for sandbox \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\" successfully"
Jan 13 20:18:00.901550 containerd[1473]: time="2025-01-13T20:18:00.901496667Z" level=info msg="StopPodSandbox for \"b653ba6839484df37ac69696f43008c443a2c0c63e9cd8e5965e05ad26f234ed\" returns successfully"
Jan 13 20:18:00.903348 containerd[1473]: time="2025-01-13T20:18:00.903307324Z" level=info msg="StopPodSandbox for \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\""
Jan 13 20:18:00.903487 containerd[1473]: time="2025-01-13T20:18:00.903413325Z" level=info msg="TearDown network for sandbox \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\" successfully"
Jan 13 20:18:00.903487 containerd[1473]: time="2025-01-13T20:18:00.903424041Z" level=info msg="StopPodSandbox for \"bdabeb42dd1483009e5f96cfa7bea1fa6501ba1802d30df3c36e56f8f38496c9\" returns successfully"
Jan 13 20:18:00.903631 kubelet[1780]: E0113 20:18:00.903577    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:00.904421 containerd[1473]: time="2025-01-13T20:18:00.904293123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:6,}"
Jan 13 20:18:00.909342 kubelet[1780]: I0113 20:18:00.908943    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a8528ba1d7b420daa3902511398689313f537c74e24cf93d1ad5b918ec874f2"
Jan 13 20:18:00.909627 containerd[1473]: time="2025-01-13T20:18:00.909601019Z" level=info msg="StopPodSandbox for \"6a8528ba1d7b420daa3902511398689313f537c74e24cf93d1ad5b918ec874f2\""
Jan 13 20:18:00.909787 containerd[1473]: time="2025-01-13T20:18:00.909767838Z" level=info msg="Ensure that sandbox 6a8528ba1d7b420daa3902511398689313f537c74e24cf93d1ad5b918ec874f2 in task-service has been cleanup successfully"
Jan 13 20:18:00.911883 systemd[1]: run-netns-cni\x2d80aab7a9\x2d417d\x2de9e9\x2da971\x2d1c1fc86acc52.mount: Deactivated successfully.
Jan 13 20:18:00.912383 containerd[1473]: time="2025-01-13T20:18:00.912265963Z" level=info msg="TearDown network for sandbox \"6a8528ba1d7b420daa3902511398689313f537c74e24cf93d1ad5b918ec874f2\" successfully"
Jan 13 20:18:00.912383 containerd[1473]: time="2025-01-13T20:18:00.912294472Z" level=info msg="StopPodSandbox for \"6a8528ba1d7b420daa3902511398689313f537c74e24cf93d1ad5b918ec874f2\" returns successfully"
Jan 13 20:18:00.913736 containerd[1473]: time="2025-01-13T20:18:00.913702596Z" level=info msg="StopPodSandbox for \"64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3\""
Jan 13 20:18:00.913938 containerd[1473]: time="2025-01-13T20:18:00.913877332Z" level=info msg="TearDown network for sandbox \"64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3\" successfully"
Jan 13 20:18:00.914023 containerd[1473]: time="2025-01-13T20:18:00.914009324Z" level=info msg="StopPodSandbox for \"64328138be05b37e67373910e8b58f1f0126648abe853ebf3fa2ec3810f066a3\" returns successfully"
Jan 13 20:18:00.917643 containerd[1473]: time="2025-01-13T20:18:00.917615003Z" level=info msg="StopPodSandbox for \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\""
Jan 13 20:18:00.917936 containerd[1473]: time="2025-01-13T20:18:00.917807573Z" level=info msg="TearDown network for sandbox \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\" successfully"
Jan 13 20:18:00.917936 containerd[1473]: time="2025-01-13T20:18:00.917820728Z" level=info msg="StopPodSandbox for \"242ee98071502d646b96ae2945f099562880df95cb0e06643393ecd73617c850\" returns successfully"
Jan 13 20:18:00.919162 containerd[1473]: time="2025-01-13T20:18:00.919139525Z" level=info msg="StopPodSandbox for \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\""
Jan 13 20:18:00.920191 kubelet[1780]: I0113 20:18:00.920116    1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ntvmq" podStartSLOduration=1.444979735 podStartE2EDuration="11.920101253s" podCreationTimestamp="2025-01-13 20:17:49 +0000 UTC" firstStartedPulling="2025-01-13 20:17:49.726937667 +0000 UTC m=+9.398086046" lastFinishedPulling="2025-01-13 20:18:00.202059185 +0000 UTC m=+19.873207564" observedRunningTime="2025-01-13 20:18:00.919090863 +0000 UTC m=+20.590239242" watchObservedRunningTime="2025-01-13 20:18:00.920101253 +0000 UTC m=+20.591249632"
Jan 13 20:18:00.920305 containerd[1473]: time="2025-01-13T20:18:00.919841548Z" level=info msg="TearDown network for sandbox \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\" successfully"
Jan 13 20:18:00.920525 containerd[1473]: time="2025-01-13T20:18:00.920381710Z" level=info msg="StopPodSandbox for \"19e910f6bc6ccf7a7e54982426467af81e9e6d05860d1de9162ea9b1a96f5839\" returns successfully"
Jan 13 20:18:00.920993 kubelet[1780]: I0113 20:18:00.920970    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3baffe25059c85a8476c48e2dd2b13b1b06a24df69f4fd05434aba73dca287b"
Jan 13 20:18:00.921781 containerd[1473]: time="2025-01-13T20:18:00.921279061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4sxhx,Uid:9744204a-04ef-4999-88e2-3d074458261a,Namespace:calico-system,Attempt:4,}"
Jan 13 20:18:00.922160 containerd[1473]: time="2025-01-13T20:18:00.921666679Z" level=info msg="StopPodSandbox for \"a3baffe25059c85a8476c48e2dd2b13b1b06a24df69f4fd05434aba73dca287b\""
Jan 13 20:18:00.922480 containerd[1473]: time="2025-01-13T20:18:00.922454231Z" level=info msg="Ensure that sandbox a3baffe25059c85a8476c48e2dd2b13b1b06a24df69f4fd05434aba73dca287b in task-service has been cleanup successfully"
Jan 13 20:18:00.922874 containerd[1473]: time="2025-01-13T20:18:00.922714216Z" level=info msg="TearDown network for sandbox \"a3baffe25059c85a8476c48e2dd2b13b1b06a24df69f4fd05434aba73dca287b\" successfully"
Jan 13 20:18:00.922874 containerd[1473]: time="2025-01-13T20:18:00.922734768Z" level=info msg="StopPodSandbox for \"a3baffe25059c85a8476c48e2dd2b13b1b06a24df69f4fd05434aba73dca287b\" returns successfully"
Jan 13 20:18:00.923423 containerd[1473]: time="2025-01-13T20:18:00.923145618Z" level=info msg="StopPodSandbox for \"3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c\""
Jan 13 20:18:00.923423 containerd[1473]: time="2025-01-13T20:18:00.923269372Z" level=info msg="TearDown network for sandbox \"3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c\" successfully"
Jan 13 20:18:00.923423 containerd[1473]: time="2025-01-13T20:18:00.923282767Z" level=info msg="StopPodSandbox for \"3b349bbd6166e92f753e89fb8009ed8d9588ee5a2b92d8c6e09423856720908c\" returns successfully"
Jan 13 20:18:00.924109 containerd[1473]: time="2025-01-13T20:18:00.924079595Z" level=info msg="StopPodSandbox for \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\""
Jan 13 20:18:00.924470 kubelet[1780]: I0113 20:18:00.924444    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88290310c15a32700a586e467d533ae8961c89c695bedb7d27beea26ad88713"
Jan 13 20:18:00.924538 containerd[1473]: time="2025-01-13T20:18:00.924482528Z" level=info msg="TearDown network for sandbox \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\" successfully"
Jan 13 20:18:00.924602 containerd[1473]: time="2025-01-13T20:18:00.924498962Z" level=info msg="StopPodSandbox for \"b195951c7a8a6e16165760507d4cba625f5a98dd42527ff33de3aef049d8cd79\" returns successfully"
Jan 13 20:18:00.925142 containerd[1473]: time="2025-01-13T20:18:00.925066834Z" level=info msg="StopPodSandbox for \"c88290310c15a32700a586e467d533ae8961c89c695bedb7d27beea26ad88713\""
Jan 13 20:18:00.925290 containerd[1473]: time="2025-01-13T20:18:00.925103141Z" level=info msg="StopPodSandbox for \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\""
Jan 13 20:18:00.925465 containerd[1473]: time="2025-01-13T20:18:00.925431380Z" level=info msg="Ensure that sandbox c88290310c15a32700a586e467d533ae8961c89c695bedb7d27beea26ad88713 in task-service has been cleanup successfully"
Jan 13 20:18:00.925521 containerd[1473]: time="2025-01-13T20:18:00.925473085Z" level=info msg="TearDown network for sandbox \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\" successfully"
Jan 13 20:18:00.925521 containerd[1473]: time="2025-01-13T20:18:00.925493598Z" level=info msg="StopPodSandbox for \"235646e5fe8a1af5b8bafa615fc18d6c7d3275d9bd6dbfc058b6b909825d5142\" returns successfully"
Jan 13 20:18:00.925810 containerd[1473]: time="2025-01-13T20:18:00.925774255Z" level=info msg="TearDown network for sandbox \"c88290310c15a32700a586e467d533ae8961c89c695bedb7d27beea26ad88713\" successfully"
Jan 13 20:18:00.925810 containerd[1473]: time="2025-01-13T20:18:00.925796207Z" level=info msg="StopPodSandbox for \"c88290310c15a32700a586e467d533ae8961c89c695bedb7d27beea26ad88713\" returns successfully"
Jan 13 20:18:00.925930 containerd[1473]: time="2025-01-13T20:18:00.925785091Z" level=info msg="StopPodSandbox for \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\""
Jan 13 20:18:00.925953 containerd[1473]: time="2025-01-13T20:18:00.925942753Z" level=info msg="TearDown network for sandbox \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\" successfully"
Jan 13 20:18:00.925982 containerd[1473]: time="2025-01-13T20:18:00.925954469Z" level=info msg="StopPodSandbox for \"9a0e82252297f2e4724d2bb1b4394a5b44431c85d389c6b271474a03814b0fb9\" returns successfully"
Jan 13 20:18:00.927312 containerd[1473]: time="2025-01-13T20:18:00.926235006Z" level=info msg="StopPodSandbox for \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\""
Jan 13 20:18:00.927312 containerd[1473]: time="2025-01-13T20:18:00.926317096Z" level=info msg="TearDown network for sandbox \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\" successfully"
Jan 13 20:18:00.927312 containerd[1473]: time="2025-01-13T20:18:00.926368757Z" level=info msg="StopPodSandbox for \"0d5f1c49d1e7cfdc3f8ccc6710836b81c75a5199cf95bc016ea6c7f651e9c0bf\" returns successfully"
Jan 13 20:18:00.927312 containerd[1473]: time="2025-01-13T20:18:00.926335449Z" level=info msg="StopPodSandbox for \"707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9\""
Jan 13 20:18:00.927312 containerd[1473]: time="2025-01-13T20:18:00.926466801Z" level=info msg="TearDown network for sandbox \"707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9\" successfully"
Jan 13 20:18:00.927312 containerd[1473]: time="2025-01-13T20:18:00.926475398Z" level=info msg="StopPodSandbox for \"707b5d2db7b53768229d6fdfecc214ce4d24499ccfa730ac226cab2a18a09dd9\" returns successfully"
Jan 13 20:18:00.927984 kubelet[1780]: E0113 20:18:00.927796    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:00.928756 containerd[1473]: time="2025-01-13T20:18:00.928727013Z" level=info msg="StopPodSandbox for \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\""
Jan 13 20:18:00.928834 containerd[1473]: time="2025-01-13T20:18:00.928814821Z" level=info msg="TearDown network for sandbox \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\" successfully"
Jan 13 20:18:00.928880 containerd[1473]: time="2025-01-13T20:18:00.928730772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:6,}"
Jan 13 20:18:00.929020 containerd[1473]: time="2025-01-13T20:18:00.928830295Z" level=info msg="StopPodSandbox for \"ba2484e95e426c97aa8ef6d825258a77b2ed6879cb2d732e81e60b36b2f51d59\" returns successfully"
Jan 13 20:18:00.930125 containerd[1473]: time="2025-01-13T20:18:00.930088794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xdxbv,Uid:77a581ef-9403-4f14-b6fa-ada6a2a597bc,Namespace:default,Attempt:3,}"
Jan 13 20:18:00.931319 kubelet[1780]: I0113 20:18:00.931283    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e59a5cefc000ce28a78851efa869e15aac1215dd9bc2379bf1d0fba211dc609"
Jan 13 20:18:00.932627 containerd[1473]: time="2025-01-13T20:18:00.932596316Z" level=info msg="StopPodSandbox for \"4e59a5cefc000ce28a78851efa869e15aac1215dd9bc2379bf1d0fba211dc609\""
Jan 13 20:18:00.932765 containerd[1473]: time="2025-01-13T20:18:00.932742062Z" level=info msg="Ensure that sandbox 4e59a5cefc000ce28a78851efa869e15aac1215dd9bc2379bf1d0fba211dc609 in task-service has been cleanup successfully"
Jan 13 20:18:00.933087 containerd[1473]: time="2025-01-13T20:18:00.932986053Z" level=info msg="TearDown network for sandbox \"4e59a5cefc000ce28a78851efa869e15aac1215dd9bc2379bf1d0fba211dc609\" successfully"
Jan 13 20:18:00.933087 containerd[1473]: time="2025-01-13T20:18:00.933009325Z" level=info msg="StopPodSandbox for \"4e59a5cefc000ce28a78851efa869e15aac1215dd9bc2379bf1d0fba211dc609\" returns successfully"
Jan 13 20:18:00.933345 containerd[1473]: time="2025-01-13T20:18:00.933261832Z" level=info msg="StopPodSandbox for \"5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e\""
Jan 13 20:18:00.933696 containerd[1473]: time="2025-01-13T20:18:00.933540210Z" level=info msg="TearDown network for sandbox \"5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e\" successfully"
Jan 13 20:18:00.933696 containerd[1473]: time="2025-01-13T20:18:00.933693114Z" level=info msg="StopPodSandbox for \"5644f19d7014c590e8d0d119b3c92d1ca905c8474809184c7474e8c4347a0b3e\" returns successfully"
Jan 13 20:18:00.934170 containerd[1473]: time="2025-01-13T20:18:00.934146068Z" level=info msg="StopPodSandbox for \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\""
Jan 13 20:18:00.934240 containerd[1473]: time="2025-01-13T20:18:00.934222160Z" level=info msg="TearDown network for sandbox \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\" successfully"
Jan 13 20:18:00.934240 containerd[1473]: time="2025-01-13T20:18:00.934235875Z" level=info msg="StopPodSandbox for \"a6c668e08753487d99d4a9b9455394eb54935c9952b725ecc75977777a31c1c9\" returns successfully"
Jan 13 20:18:00.936547 containerd[1473]: time="2025-01-13T20:18:00.936518639Z" level=info msg="StopPodSandbox for \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\""
Jan 13 20:18:00.936783 kubelet[1780]: I0113 20:18:00.936738    1780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0e7bdb20c8fe5032869c464fd555bd16a6bee6beb39744a2585a3a3c89bfe21"
Jan 13 20:18:00.936830 containerd[1473]: time="2025-01-13T20:18:00.936800976Z" level=info msg="TearDown network for sandbox \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\" successfully"
Jan 13 20:18:00.936830 containerd[1473]: time="2025-01-13T20:18:00.936817370Z" level=info msg="StopPodSandbox for \"860fce1df426f89ad39ded3d6428e18add946e561e27d43191a5f65d15891ba4\" returns successfully"
Jan 13 20:18:00.939056 containerd[1473]: time="2025-01-13T20:18:00.939026800Z" level=info msg="StopPodSandbox for \"f0e7bdb20c8fe5032869c464fd555bd16a6bee6beb39744a2585a3a3c89bfe21\""
Jan 13 20:18:00.939377 containerd[1473]: time="2025-01-13T20:18:00.939302739Z" level=info msg="Ensure that sandbox f0e7bdb20c8fe5032869c464fd555bd16a6bee6beb39744a2585a3a3c89bfe21 in task-service has been cleanup successfully"
Jan 13 20:18:00.940601 containerd[1473]: time="2025-01-13T20:18:00.940551842Z" level=info msg="StopPodSandbox for \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\""
Jan 13 20:18:00.940852 containerd[1473]: time="2025-01-13T20:18:00.940832939Z" level=info msg="TearDown network for sandbox \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\" successfully"
Jan 13 20:18:00.940929 containerd[1473]: time="2025-01-13T20:18:00.940913909Z" level=info msg="StopPodSandbox for \"978f6d873f3a8f9ee3b27c3f2738cd59f30378dbc6ae28ee005c99077fc69bd7\" returns successfully"
Jan 13 20:18:00.941499 containerd[1473]: time="2025-01-13T20:18:00.941472505Z" level=info msg="TearDown network for sandbox \"f0e7bdb20c8fe5032869c464fd555bd16a6bee6beb39744a2585a3a3c89bfe21\" successfully"
Jan 13 20:18:00.941577 containerd[1473]: time="2025-01-13T20:18:00.941563551Z" level=info msg="StopPodSandbox for \"f0e7bdb20c8fe5032869c464fd555bd16a6bee6beb39744a2585a3a3c89bfe21\" returns successfully"
Jan 13 20:18:00.943621 containerd[1473]: time="2025-01-13T20:18:00.943590129Z" level=info msg="StopPodSandbox for \"78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c\""
Jan 13 20:18:00.943694 containerd[1473]: time="2025-01-13T20:18:00.943674578Z" level=info msg="TearDown network for sandbox \"78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c\" successfully"
Jan 13 20:18:00.943694 containerd[1473]: time="2025-01-13T20:18:00.943688093Z" level=info msg="StopPodSandbox for \"78162f4d3ef1435c707a0c505b8621e856a2454d84fbfc09026efe3e2ada5c3c\" returns successfully"
Jan 13 20:18:00.943780 containerd[1473]: time="2025-01-13T20:18:00.943758787Z" level=info msg="StopPodSandbox for \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\""
Jan 13 20:18:00.943885 containerd[1473]: time="2025-01-13T20:18:00.943829881Z" level=info msg="TearDown network for sandbox \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\" successfully"
Jan 13 20:18:00.943927 containerd[1473]: time="2025-01-13T20:18:00.943911451Z" level=info msg="StopPodSandbox for \"81468e41b64a8b83e066db030a37cd05fecb18f01d72127a4a2df80bc0476f74\" returns successfully"
Jan 13 20:18:00.944196 containerd[1473]: time="2025-01-13T20:18:00.944172836Z" level=info msg="StopPodSandbox for \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\""
Jan 13 20:18:00.944757 containerd[1473]: time="2025-01-13T20:18:00.944736749Z" level=info msg="TearDown network for sandbox \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\" successfully"
Jan 13 20:18:00.944855 containerd[1473]: time="2025-01-13T20:18:00.944838592Z" level=info msg="StopPodSandbox for \"dadf413ee657e482f4ec1c323d3d1a9afcd15a6d4ddbebedd12ff885ead61932\" returns successfully"
Jan 13 20:18:00.944963 containerd[1473]: time="2025-01-13T20:18:00.944809802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:6,}"
Jan 13 20:18:00.945319 containerd[1473]: time="2025-01-13T20:18:00.945290266Z" level=info msg="StopPodSandbox for \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\""
Jan 13 20:18:00.945454 containerd[1473]: time="2025-01-13T20:18:00.945431375Z" level=info msg="TearDown network for sandbox \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\" successfully"
Jan 13 20:18:00.945454 containerd[1473]: time="2025-01-13T20:18:00.945446289Z" level=info msg="StopPodSandbox for \"f4d6fe04662eedde6263db52f5c47c71305c1bb7e4a4791bd16a2fdc4aab9aec\" returns successfully"
Jan 13 20:18:00.945897 containerd[1473]: time="2025-01-13T20:18:00.945846582Z" level=info msg="StopPodSandbox for \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\""
Jan 13 20:18:00.946166 containerd[1473]: time="2025-01-13T20:18:00.946132518Z" level=info msg="TearDown network for sandbox \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\" successfully"
Jan 13 20:18:00.946166 containerd[1473]: time="2025-01-13T20:18:00.946155909Z" level=info msg="StopPodSandbox for \"dfb929fa34510b3c4a518554101027224be4c73f72ccf7de3f1d79dd0ea4d73e\" returns successfully"
Jan 13 20:18:00.946747 containerd[1473]: time="2025-01-13T20:18:00.946719863Z" level=info msg="StopPodSandbox for \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\""
Jan 13 20:18:00.946834 containerd[1473]: time="2025-01-13T20:18:00.946811989Z" level=info msg="TearDown network for sandbox \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\" successfully"
Jan 13 20:18:00.946834 containerd[1473]: time="2025-01-13T20:18:00.946827143Z" level=info msg="StopPodSandbox for \"9ec4f432acce09ae1a8ad882d9a86ecda461dc2f4d873a07b1ead1ada9ff7318\" returns successfully"
Jan 13 20:18:00.947336 containerd[1473]: time="2025-01-13T20:18:00.947296891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:6,}"
Jan 13 20:18:01.162649 systemd-networkd[1401]: caliec7f3c0d4c3: Link UP
Jan 13 20:18:01.163191 systemd-networkd[1401]: caliec7f3c0d4c3: Gained carrier
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:00.993 [INFO][3893] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.022 [INFO][3893] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-eth0 coredns-7db6d8ff4d- kube-system  6589cd90-9a49-4dc8-928c-d4bfb9fedb8d 764 0 2025-01-13 20:17:29 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  10.0.0.86  coredns-7db6d8ff4d-x5ccf eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] caliec7f3c0d4c3  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x5ccf" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-"
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.022 [INFO][3893] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x5ccf" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-eth0"
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.105 [INFO][3975] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" HandleID="k8s-pod-network.b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" Workload="10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-eth0"
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.121 [INFO][3975] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" HandleID="k8s-pod-network.b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" Workload="10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002939a0), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.86", "pod":"coredns-7db6d8ff4d-x5ccf", "timestamp":"2025-01-13 20:18:01.105853287 +0000 UTC"}, Hostname:"10.0.0.86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.122 [INFO][3975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.123 [INFO][3975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.124 [INFO][3975] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.86'
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.126 [INFO][3975] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" host="10.0.0.86"
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.135 [INFO][3975] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.86"
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.139 [INFO][3975] ipam/ipam.go 489: Trying affinity for 192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.141 [INFO][3975] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.143 [INFO][3975] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.143 [INFO][3975] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.192/26 handle="k8s-pod-network.b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" host="10.0.0.86"
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.144 [INFO][3975] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.147 [INFO][3975] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.192/26 handle="k8s-pod-network.b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" host="10.0.0.86"
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.152 [INFO][3975] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.193/26] block=192.168.21.192/26 handle="k8s-pod-network.b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" host="10.0.0.86"
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.152 [INFO][3975] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.193/26] handle="k8s-pod-network.b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" host="10.0.0.86"
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.152 [INFO][3975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:18:01.179477 containerd[1473]: 2025-01-13 20:18:01.152 [INFO][3975] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.193/26] IPv6=[] ContainerID="b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" HandleID="k8s-pod-network.b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" Workload="10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-eth0"
Jan 13 20:18:01.180009 containerd[1473]: 2025-01-13 20:18:01.154 [INFO][3893] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x5ccf" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6589cd90-9a49-4dc8-928c-d4bfb9fedb8d", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"", Pod:"coredns-7db6d8ff4d-x5ccf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec7f3c0d4c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.180009 containerd[1473]: 2025-01-13 20:18:01.154 [INFO][3893] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.193/32] ContainerID="b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x5ccf" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-eth0"
Jan 13 20:18:01.180009 containerd[1473]: 2025-01-13 20:18:01.154 [INFO][3893] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec7f3c0d4c3 ContainerID="b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x5ccf" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-eth0"
Jan 13 20:18:01.180009 containerd[1473]: 2025-01-13 20:18:01.163 [INFO][3893] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x5ccf" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-eth0"
Jan 13 20:18:01.180009 containerd[1473]: 2025-01-13 20:18:01.167 [INFO][3893] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x5ccf" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6589cd90-9a49-4dc8-928c-d4bfb9fedb8d", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9", Pod:"coredns-7db6d8ff4d-x5ccf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec7f3c0d4c3", MAC:"de:c7:4c:9c:c4:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.180009 containerd[1473]: 2025-01-13 20:18:01.178 [INFO][3893] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x5ccf" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--x5ccf-eth0"
Jan 13 20:18:01.186108 systemd-networkd[1401]: cali653e9a64f7b: Link UP
Jan 13 20:18:01.186290 systemd-networkd[1401]: cali653e9a64f7b: Gained carrier
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.019 [INFO][3927] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.037 [INFO][3927] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-eth0 nginx-deployment-85f456d6dd- default  77a581ef-9403-4f14-b6fa-ada6a2a597bc 853 0 2025-01-13 20:17:57 +0000 UTC <nil> <nil> map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  10.0.0.86  nginx-deployment-85f456d6dd-xdxbv eth0 default [] []   [kns.default ksa.default.default] cali653e9a64f7b  [] []}} ContainerID="aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" Namespace="default" Pod="nginx-deployment-85f456d6dd-xdxbv" WorkloadEndpoint="10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-"
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.037 [INFO][3927] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" Namespace="default" Pod="nginx-deployment-85f456d6dd-xdxbv" WorkloadEndpoint="10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-eth0"
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.105 [INFO][3986] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" HandleID="k8s-pod-network.aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" Workload="10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-eth0"
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.124 [INFO][3986] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" HandleID="k8s-pod-network.aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" Workload="10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031ee00), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.86", "pod":"nginx-deployment-85f456d6dd-xdxbv", "timestamp":"2025-01-13 20:18:01.105839172 +0000 UTC"}, Hostname:"10.0.0.86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.124 [INFO][3986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.152 [INFO][3986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.152 [INFO][3986] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.86'
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.155 [INFO][3986] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" host="10.0.0.86"
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.158 [INFO][3986] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.86"
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.165 [INFO][3986] ipam/ipam.go 489: Trying affinity for 192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.166 [INFO][3986] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.170 [INFO][3986] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.170 [INFO][3986] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.192/26 handle="k8s-pod-network.aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" host="10.0.0.86"
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.171 [INFO][3986] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.176 [INFO][3986] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.192/26 handle="k8s-pod-network.aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" host="10.0.0.86"
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.182 [INFO][3986] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.194/26] block=192.168.21.192/26 handle="k8s-pod-network.aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" host="10.0.0.86"
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.182 [INFO][3986] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.194/26] handle="k8s-pod-network.aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" host="10.0.0.86"
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.182 [INFO][3986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:18:01.196962 containerd[1473]: 2025-01-13 20:18:01.182 [INFO][3986] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.194/26] IPv6=[] ContainerID="aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" HandleID="k8s-pod-network.aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" Workload="10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-eth0"
Jan 13 20:18:01.197504 containerd[1473]: 2025-01-13 20:18:01.184 [INFO][3927] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" Namespace="default" Pod="nginx-deployment-85f456d6dd-xdxbv" WorkloadEndpoint="10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"77a581ef-9403-4f14-b6fa-ada6a2a597bc", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-xdxbv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.21.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali653e9a64f7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.197504 containerd[1473]: 2025-01-13 20:18:01.184 [INFO][3927] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.194/32] ContainerID="aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" Namespace="default" Pod="nginx-deployment-85f456d6dd-xdxbv" WorkloadEndpoint="10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-eth0"
Jan 13 20:18:01.197504 containerd[1473]: 2025-01-13 20:18:01.184 [INFO][3927] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali653e9a64f7b ContainerID="aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" Namespace="default" Pod="nginx-deployment-85f456d6dd-xdxbv" WorkloadEndpoint="10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-eth0"
Jan 13 20:18:01.197504 containerd[1473]: 2025-01-13 20:18:01.186 [INFO][3927] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" Namespace="default" Pod="nginx-deployment-85f456d6dd-xdxbv" WorkloadEndpoint="10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-eth0"
Jan 13 20:18:01.197504 containerd[1473]: 2025-01-13 20:18:01.187 [INFO][3927] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" Namespace="default" Pod="nginx-deployment-85f456d6dd-xdxbv" WorkloadEndpoint="10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"77a581ef-9403-4f14-b6fa-ada6a2a597bc", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624", Pod:"nginx-deployment-85f456d6dd-xdxbv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.21.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali653e9a64f7b", MAC:"be:0a:bc:9c:40:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.197504 containerd[1473]: 2025-01-13 20:18:01.195 [INFO][3927] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624" Namespace="default" Pod="nginx-deployment-85f456d6dd-xdxbv" WorkloadEndpoint="10.0.0.86-k8s-nginx--deployment--85f456d6dd--xdxbv-eth0"
Jan 13 20:18:01.199649 containerd[1473]: time="2025-01-13T20:18:01.199354780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:18:01.199649 containerd[1473]: time="2025-01-13T20:18:01.199411960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:18:01.199649 containerd[1473]: time="2025-01-13T20:18:01.199426955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.199649 containerd[1473]: time="2025-01-13T20:18:01.199500330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.217603 systemd[1]: Started cri-containerd-b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9.scope - libcontainer container b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9.
Jan 13 20:18:01.219118 systemd-networkd[1401]: cali12db6cfce3d: Link UP
Jan 13 20:18:01.220461 systemd-networkd[1401]: cali12db6cfce3d: Gained carrier
Jan 13 20:18:01.230406 containerd[1473]: time="2025-01-13T20:18:01.230024088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:18:01.230406 containerd[1473]: time="2025-01-13T20:18:01.230195869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:18:01.230406 containerd[1473]: time="2025-01-13T20:18:01.230213783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.230612 containerd[1473]: time="2025-01-13T20:18:01.230452621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:00.964 [INFO][3873] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.002 [INFO][3873] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-eth0 coredns-7db6d8ff4d- kube-system  b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7 771 0 2025-01-13 20:17:29 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  10.0.0.86  coredns-7db6d8ff4d-7h9bs eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali12db6cfce3d  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7h9bs" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-"
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.002 [INFO][3873] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7h9bs" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-eth0"
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.114 [INFO][3966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" HandleID="k8s-pod-network.22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" Workload="10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-eth0"
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.125 [INFO][3966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" HandleID="k8s-pod-network.22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" Workload="10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001f8730), Attrs:map[string]string{"namespace":"kube-system", "node":"10.0.0.86", "pod":"coredns-7db6d8ff4d-7h9bs", "timestamp":"2025-01-13 20:18:01.114163074 +0000 UTC"}, Hostname:"10.0.0.86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.125 [INFO][3966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.182 [INFO][3966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.182 [INFO][3966] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.86'
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.184 [INFO][3966] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" host="10.0.0.86"
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.190 [INFO][3966] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.86"
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.198 [INFO][3966] ipam/ipam.go 489: Trying affinity for 192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.200 [INFO][3966] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.202 [INFO][3966] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.202 [INFO][3966] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.192/26 handle="k8s-pod-network.22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" host="10.0.0.86"
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.203 [INFO][3966] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.208 [INFO][3966] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.192/26 handle="k8s-pod-network.22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" host="10.0.0.86"
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.213 [INFO][3966] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.195/26] block=192.168.21.192/26 handle="k8s-pod-network.22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" host="10.0.0.86"
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.213 [INFO][3966] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.195/26] handle="k8s-pod-network.22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" host="10.0.0.86"
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.214 [INFO][3966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:18:01.231227 containerd[1473]: 2025-01-13 20:18:01.214 [INFO][3966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.195/26] IPv6=[] ContainerID="22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" HandleID="k8s-pod-network.22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" Workload="10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-eth0"
Jan 13 20:18:01.231731 containerd[1473]: 2025-01-13 20:18:01.216 [INFO][3873] cni-plugin/k8s.go 386: Populated endpoint ContainerID="22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7h9bs" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"", Pod:"coredns-7db6d8ff4d-7h9bs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12db6cfce3d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.231731 containerd[1473]: 2025-01-13 20:18:01.216 [INFO][3873] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.195/32] ContainerID="22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7h9bs" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-eth0"
Jan 13 20:18:01.231731 containerd[1473]: 2025-01-13 20:18:01.216 [INFO][3873] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12db6cfce3d ContainerID="22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7h9bs" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-eth0"
Jan 13 20:18:01.231731 containerd[1473]: 2025-01-13 20:18:01.220 [INFO][3873] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7h9bs" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-eth0"
Jan 13 20:18:01.231731 containerd[1473]: 2025-01-13 20:18:01.220 [INFO][3873] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7h9bs" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 29, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f", Pod:"coredns-7db6d8ff4d-7h9bs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12db6cfce3d", MAC:"ee:e8:5d:70:67:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.231731 containerd[1473]: 2025-01-13 20:18:01.229 [INFO][3873] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7h9bs" WorkloadEndpoint="10.0.0.86-k8s-coredns--7db6d8ff4d--7h9bs-eth0"
Jan 13 20:18:01.233152 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:18:01.253234 systemd[1]: Started cri-containerd-aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624.scope - libcontainer container aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624.
Jan 13 20:18:01.257368 containerd[1473]: time="2025-01-13T20:18:01.256798414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:18:01.257368 containerd[1473]: time="2025-01-13T20:18:01.256859353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:18:01.257368 containerd[1473]: time="2025-01-13T20:18:01.256874188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.257368 containerd[1473]: time="2025-01-13T20:18:01.256944404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.257707 systemd-networkd[1401]: cali5bfb40fffe0: Link UP
Jan 13 20:18:01.257882 systemd-networkd[1401]: cali5bfb40fffe0: Gained carrier
Jan 13 20:18:01.263995 containerd[1473]: time="2025-01-13T20:18:01.263952918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x5ccf,Uid:6589cd90-9a49-4dc8-928c-d4bfb9fedb8d,Namespace:kube-system,Attempt:6,} returns sandbox id \"b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9\""
Jan 13 20:18:01.265486 kubelet[1780]: E0113 20:18:01.265462    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:01.267350 containerd[1473]: time="2025-01-13T20:18:01.267127827Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:18:01.273547 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.019 [INFO][3933] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.044 [INFO][3933] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-eth0 calico-apiserver-5d9cccdfc4- calico-apiserver  10648835-fde4-414e-8a1f-6abaa02ccacc 769 0 2025-01-13 20:17:48 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d9cccdfc4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  10.0.0.86  calico-apiserver-5d9cccdfc4-mpttr eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5bfb40fffe0  [] []}} ContainerID="b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-mpttr" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-"
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.044 [INFO][3933] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-mpttr" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-eth0"
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.107 [INFO][3992] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" HandleID="k8s-pod-network.b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" Workload="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-eth0"
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.126 [INFO][3992] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" HandleID="k8s-pod-network.b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" Workload="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027a3f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.86", "pod":"calico-apiserver-5d9cccdfc4-mpttr", "timestamp":"2025-01-13 20:18:01.106962626 +0000 UTC"}, Hostname:"10.0.0.86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.126 [INFO][3992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.214 [INFO][3992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.214 [INFO][3992] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.86'
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.216 [INFO][3992] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" host="10.0.0.86"
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.222 [INFO][3992] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.86"
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.231 [INFO][3992] ipam/ipam.go 489: Trying affinity for 192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.233 [INFO][3992] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.236 [INFO][3992] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.237 [INFO][3992] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.192/26 handle="k8s-pod-network.b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" host="10.0.0.86"
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.239 [INFO][3992] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.245 [INFO][3992] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.192/26 handle="k8s-pod-network.b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" host="10.0.0.86"
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.253 [INFO][3992] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.196/26] block=192.168.21.192/26 handle="k8s-pod-network.b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" host="10.0.0.86"
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.253 [INFO][3992] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.196/26] handle="k8s-pod-network.b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" host="10.0.0.86"
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.253 [INFO][3992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:18:01.283592 containerd[1473]: 2025-01-13 20:18:01.253 [INFO][3992] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.196/26] IPv6=[] ContainerID="b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" HandleID="k8s-pod-network.b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" Workload="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-eth0"
Jan 13 20:18:01.284125 containerd[1473]: 2025-01-13 20:18:01.255 [INFO][3933] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-mpttr" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-eth0", GenerateName:"calico-apiserver-5d9cccdfc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"10648835-fde4-414e-8a1f-6abaa02ccacc", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9cccdfc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"", Pod:"calico-apiserver-5d9cccdfc4-mpttr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bfb40fffe0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.284125 containerd[1473]: 2025-01-13 20:18:01.255 [INFO][3933] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.196/32] ContainerID="b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-mpttr" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-eth0"
Jan 13 20:18:01.284125 containerd[1473]: 2025-01-13 20:18:01.255 [INFO][3933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5bfb40fffe0 ContainerID="b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-mpttr" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-eth0"
Jan 13 20:18:01.284125 containerd[1473]: 2025-01-13 20:18:01.258 [INFO][3933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-mpttr" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-eth0"
Jan 13 20:18:01.284125 containerd[1473]: 2025-01-13 20:18:01.263 [INFO][3933] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-mpttr" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-eth0", GenerateName:"calico-apiserver-5d9cccdfc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"10648835-fde4-414e-8a1f-6abaa02ccacc", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9cccdfc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3", Pod:"calico-apiserver-5d9cccdfc4-mpttr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5bfb40fffe0", MAC:"8e:51:e2:9c:6b:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.284125 containerd[1473]: 2025-01-13 20:18:01.274 [INFO][3933] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-mpttr" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--mpttr-eth0"
Jan 13 20:18:01.286210 systemd[1]: Started cri-containerd-22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f.scope - libcontainer container 22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f.
Jan 13 20:18:01.300536 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:18:01.305753 systemd-networkd[1401]: calib1e5d75f85c: Link UP
Jan 13 20:18:01.306188 systemd-networkd[1401]: calib1e5d75f85c: Gained carrier
Jan 13 20:18:01.310748 containerd[1473]: time="2025-01-13T20:18:01.310519647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:18:01.310748 containerd[1473]: time="2025-01-13T20:18:01.310575068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:18:01.310748 containerd[1473]: time="2025-01-13T20:18:01.310591062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.310979 containerd[1473]: time="2025-01-13T20:18:01.310944341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xdxbv,Uid:77a581ef-9403-4f14-b6fa-ada6a2a597bc,Namespace:default,Attempt:3,} returns sandbox id \"aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624\""
Jan 13 20:18:01.312802 containerd[1473]: time="2025-01-13T20:18:01.312425872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:00.975 [INFO][3856] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.002 [INFO][3856] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-eth0 calico-kube-controllers-f94864fc5- calico-system  ee272026-1034-439f-966c-6692ba8e0711 770 0 2025-01-13 20:17:49 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f94864fc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  10.0.0.86  calico-kube-controllers-f94864fc5-tqbwc eth0 calico-kube-controllers [] []   [kns.calico-system ksa.calico-system.calico-kube-controllers] calib1e5d75f85c  [] []}} ContainerID="aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" Namespace="calico-system" Pod="calico-kube-controllers-f94864fc5-tqbwc" WorkloadEndpoint="10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-"
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.002 [INFO][3856] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" Namespace="calico-system" Pod="calico-kube-controllers-f94864fc5-tqbwc" WorkloadEndpoint="10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-eth0"
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.106 [INFO][3961] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" HandleID="k8s-pod-network.aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" Workload="10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-eth0"
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.126 [INFO][3961] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" HandleID="k8s-pod-network.aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" Workload="10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004f2db0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.86", "pod":"calico-kube-controllers-f94864fc5-tqbwc", "timestamp":"2025-01-13 20:18:01.10686522 +0000 UTC"}, Hostname:"10.0.0.86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.127 [INFO][3961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.255 [INFO][3961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.255 [INFO][3961] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.86'
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.257 [INFO][3961] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" host="10.0.0.86"
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.263 [INFO][3961] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.86"
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.269 [INFO][3961] ipam/ipam.go 489: Trying affinity for 192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.272 [INFO][3961] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.277 [INFO][3961] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.278 [INFO][3961] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.192/26 handle="k8s-pod-network.aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" host="10.0.0.86"
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.283 [INFO][3961] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.288 [INFO][3961] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.192/26 handle="k8s-pod-network.aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" host="10.0.0.86"
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.295 [INFO][3961] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.197/26] block=192.168.21.192/26 handle="k8s-pod-network.aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" host="10.0.0.86"
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.297 [INFO][3961] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.197/26] handle="k8s-pod-network.aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" host="10.0.0.86"
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.297 [INFO][3961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:18:01.320249 containerd[1473]: 2025-01-13 20:18:01.297 [INFO][3961] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.197/26] IPv6=[] ContainerID="aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" HandleID="k8s-pod-network.aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" Workload="10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-eth0"
Jan 13 20:18:01.320743 containerd[1473]: 2025-01-13 20:18:01.301 [INFO][3856] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" Namespace="calico-system" Pod="calico-kube-controllers-f94864fc5-tqbwc" WorkloadEndpoint="10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-eth0", GenerateName:"calico-kube-controllers-f94864fc5-", Namespace:"calico-system", SelfLink:"", UID:"ee272026-1034-439f-966c-6692ba8e0711", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f94864fc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"", Pod:"calico-kube-controllers-f94864fc5-tqbwc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib1e5d75f85c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.320743 containerd[1473]: 2025-01-13 20:18:01.302 [INFO][3856] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.197/32] ContainerID="aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" Namespace="calico-system" Pod="calico-kube-controllers-f94864fc5-tqbwc" WorkloadEndpoint="10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-eth0"
Jan 13 20:18:01.320743 containerd[1473]: 2025-01-13 20:18:01.302 [INFO][3856] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1e5d75f85c ContainerID="aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" Namespace="calico-system" Pod="calico-kube-controllers-f94864fc5-tqbwc" WorkloadEndpoint="10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-eth0"
Jan 13 20:18:01.320743 containerd[1473]: 2025-01-13 20:18:01.306 [INFO][3856] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" Namespace="calico-system" Pod="calico-kube-controllers-f94864fc5-tqbwc" WorkloadEndpoint="10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-eth0"
Jan 13 20:18:01.320743 containerd[1473]: 2025-01-13 20:18:01.306 [INFO][3856] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" Namespace="calico-system" Pod="calico-kube-controllers-f94864fc5-tqbwc" WorkloadEndpoint="10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-eth0", GenerateName:"calico-kube-controllers-f94864fc5-", Namespace:"calico-system", SelfLink:"", UID:"ee272026-1034-439f-966c-6692ba8e0711", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f94864fc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0", Pod:"calico-kube-controllers-f94864fc5-tqbwc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib1e5d75f85c", MAC:"8e:87:ea:84:6e:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.320743 containerd[1473]: 2025-01-13 20:18:01.318 [INFO][3856] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0" Namespace="calico-system" Pod="calico-kube-controllers-f94864fc5-tqbwc" WorkloadEndpoint="10.0.0.86-k8s-calico--kube--controllers--f94864fc5--tqbwc-eth0"
Jan 13 20:18:01.330491 systemd[1]: Started cri-containerd-b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3.scope - libcontainer container b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3.
Jan 13 20:18:01.331696 containerd[1473]: time="2025-01-13T20:18:01.331665186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7h9bs,Uid:b6ad32c1-bf16-4af4-b2ca-d0d48cfec6e7,Namespace:kube-system,Attempt:6,} returns sandbox id \"22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f\""
Jan 13 20:18:01.333154 kubelet[1780]: E0113 20:18:01.333071    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:01.338981 systemd-networkd[1401]: cali9bbe1dc006c: Link UP
Jan 13 20:18:01.342638 systemd-networkd[1401]: cali9bbe1dc006c: Gained carrier
Jan 13 20:18:01.351904 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:18:01.352076 containerd[1473]: time="2025-01-13T20:18:01.351690189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:18:01.352076 containerd[1473]: time="2025-01-13T20:18:01.351989567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:18:01.352076 containerd[1473]: time="2025-01-13T20:18:01.352013638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.352267 containerd[1473]: time="2025-01-13T20:18:01.352163827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:00.998 [INFO][3892] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.020 [INFO][3892] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.86-k8s-csi--node--driver--4sxhx-eth0 csi-node-driver- calico-system  9744204a-04ef-4999-88e2-3d074458261a 675 0 2025-01-13 20:17:49 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  10.0.0.86  csi-node-driver-4sxhx eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] cali9bbe1dc006c  [] []}} ContainerID="e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" Namespace="calico-system" Pod="csi-node-driver-4sxhx" WorkloadEndpoint="10.0.0.86-k8s-csi--node--driver--4sxhx-"
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.020 [INFO][3892] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" Namespace="calico-system" Pod="csi-node-driver-4sxhx" WorkloadEndpoint="10.0.0.86-k8s-csi--node--driver--4sxhx-eth0"
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.114 [INFO][3976] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" HandleID="k8s-pod-network.e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" Workload="10.0.0.86-k8s-csi--node--driver--4sxhx-eth0"
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.128 [INFO][3976] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" HandleID="k8s-pod-network.e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" Workload="10.0.0.86-k8s-csi--node--driver--4sxhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000174b30), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.86", "pod":"csi-node-driver-4sxhx", "timestamp":"2025-01-13 20:18:01.114683815 +0000 UTC"}, Hostname:"10.0.0.86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.128 [INFO][3976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.297 [INFO][3976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.297 [INFO][3976] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.86'
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.300 [INFO][3976] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" host="10.0.0.86"
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.309 [INFO][3976] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.86"
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.315 [INFO][3976] ipam/ipam.go 489: Trying affinity for 192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.317 [INFO][3976] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.320 [INFO][3976] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.320 [INFO][3976] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.192/26 handle="k8s-pod-network.e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" host="10.0.0.86"
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.323 [INFO][3976] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.327 [INFO][3976] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.192/26 handle="k8s-pod-network.e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" host="10.0.0.86"
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.334 [INFO][3976] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.198/26] block=192.168.21.192/26 handle="k8s-pod-network.e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" host="10.0.0.86"
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.335 [INFO][3976] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.198/26] handle="k8s-pod-network.e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" host="10.0.0.86"
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.335 [INFO][3976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:18:01.353018 containerd[1473]: 2025-01-13 20:18:01.335 [INFO][3976] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.198/26] IPv6=[] ContainerID="e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" HandleID="k8s-pod-network.e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" Workload="10.0.0.86-k8s-csi--node--driver--4sxhx-eth0"
Jan 13 20:18:01.353692 containerd[1473]: 2025-01-13 20:18:01.337 [INFO][3892] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" Namespace="calico-system" Pod="csi-node-driver-4sxhx" WorkloadEndpoint="10.0.0.86-k8s-csi--node--driver--4sxhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-csi--node--driver--4sxhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9744204a-04ef-4999-88e2-3d074458261a", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"", Pod:"csi-node-driver-4sxhx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9bbe1dc006c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.353692 containerd[1473]: 2025-01-13 20:18:01.337 [INFO][3892] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.198/32] ContainerID="e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" Namespace="calico-system" Pod="csi-node-driver-4sxhx" WorkloadEndpoint="10.0.0.86-k8s-csi--node--driver--4sxhx-eth0"
Jan 13 20:18:01.353692 containerd[1473]: 2025-01-13 20:18:01.337 [INFO][3892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9bbe1dc006c ContainerID="e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" Namespace="calico-system" Pod="csi-node-driver-4sxhx" WorkloadEndpoint="10.0.0.86-k8s-csi--node--driver--4sxhx-eth0"
Jan 13 20:18:01.353692 containerd[1473]: 2025-01-13 20:18:01.343 [INFO][3892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" Namespace="calico-system" Pod="csi-node-driver-4sxhx" WorkloadEndpoint="10.0.0.86-k8s-csi--node--driver--4sxhx-eth0"
Jan 13 20:18:01.353692 containerd[1473]: 2025-01-13 20:18:01.343 [INFO][3892] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" Namespace="calico-system" Pod="csi-node-driver-4sxhx" WorkloadEndpoint="10.0.0.86-k8s-csi--node--driver--4sxhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-csi--node--driver--4sxhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9744204a-04ef-4999-88e2-3d074458261a", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79", Pod:"csi-node-driver-4sxhx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9bbe1dc006c", MAC:"ca:43:44:2b:07:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.353692 containerd[1473]: 2025-01-13 20:18:01.351 [INFO][3892] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79" Namespace="calico-system" Pod="csi-node-driver-4sxhx" WorkloadEndpoint="10.0.0.86-k8s-csi--node--driver--4sxhx-eth0"
Jan 13 20:18:01.371302 systemd[1]: Started cri-containerd-aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0.scope - libcontainer container aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0.
Jan 13 20:18:01.376754 systemd-networkd[1401]: calie340b19899c: Link UP
Jan 13 20:18:01.376903 systemd-networkd[1401]: calie340b19899c: Gained carrier
Jan 13 20:18:01.382126 containerd[1473]: time="2025-01-13T20:18:01.382052483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-mpttr,Uid:10648835-fde4-414e-8a1f-6abaa02ccacc,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3\""
Jan 13 20:18:01.382412 containerd[1473]: time="2025-01-13T20:18:01.381862349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:18:01.385007 containerd[1473]: time="2025-01-13T20:18:01.382541196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:18:01.385007 containerd[1473]: time="2025-01-13T20:18:01.384863038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.385007 containerd[1473]: time="2025-01-13T20:18:01.384961684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.388676 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.033 [INFO][3944] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.060 [INFO][3944] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-eth0 calico-apiserver-5d9cccdfc4- calico-apiserver  cdcf8404-fe95-41b4-a33a-7e988931f7a3 767 0 2025-01-13 20:17:48 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d9cccdfc4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  10.0.0.86  calico-apiserver-5d9cccdfc4-gpjkp eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie340b19899c  [] []}} ContainerID="851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-gpjkp" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-"
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.060 [INFO][3944] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-gpjkp" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-eth0"
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.127 [INFO][4001] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" HandleID="k8s-pod-network.851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" Workload="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-eth0"
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.139 [INFO][4001] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" HandleID="k8s-pod-network.851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" Workload="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dc7e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.86", "pod":"calico-apiserver-5d9cccdfc4-gpjkp", "timestamp":"2025-01-13 20:18:01.127748689 +0000 UTC"}, Hostname:"10.0.0.86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.139 [INFO][4001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.335 [INFO][4001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.335 [INFO][4001] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.86'
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.339 [INFO][4001] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" host="10.0.0.86"
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.345 [INFO][4001] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.86"
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.351 [INFO][4001] ipam/ipam.go 489: Trying affinity for 192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.353 [INFO][4001] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.355 [INFO][4001] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.355 [INFO][4001] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.192/26 handle="k8s-pod-network.851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" host="10.0.0.86"
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.357 [INFO][4001] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.360 [INFO][4001] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.192/26 handle="k8s-pod-network.851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" host="10.0.0.86"
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.367 [INFO][4001] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.199/26] block=192.168.21.192/26 handle="k8s-pod-network.851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" host="10.0.0.86"
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.368 [INFO][4001] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.199/26] handle="k8s-pod-network.851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" host="10.0.0.86"
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.368 [INFO][4001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:18:01.389448 containerd[1473]: 2025-01-13 20:18:01.368 [INFO][4001] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.199/26] IPv6=[] ContainerID="851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" HandleID="k8s-pod-network.851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" Workload="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-eth0"
Jan 13 20:18:01.389978 containerd[1473]: 2025-01-13 20:18:01.372 [INFO][3944] cni-plugin/k8s.go 386: Populated endpoint ContainerID="851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-gpjkp" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-eth0", GenerateName:"calico-apiserver-5d9cccdfc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"cdcf8404-fe95-41b4-a33a-7e988931f7a3", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9cccdfc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"", Pod:"calico-apiserver-5d9cccdfc4-gpjkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie340b19899c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.389978 containerd[1473]: 2025-01-13 20:18:01.373 [INFO][3944] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.199/32] ContainerID="851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-gpjkp" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-eth0"
Jan 13 20:18:01.389978 containerd[1473]: 2025-01-13 20:18:01.373 [INFO][3944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie340b19899c ContainerID="851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-gpjkp" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-eth0"
Jan 13 20:18:01.389978 containerd[1473]: 2025-01-13 20:18:01.375 [INFO][3944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-gpjkp" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-eth0"
Jan 13 20:18:01.389978 containerd[1473]: 2025-01-13 20:18:01.375 [INFO][3944] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-gpjkp" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-eth0", GenerateName:"calico-apiserver-5d9cccdfc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"cdcf8404-fe95-41b4-a33a-7e988931f7a3", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d9cccdfc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc", Pod:"calico-apiserver-5d9cccdfc4-gpjkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie340b19899c", MAC:"66:1a:fa:ba:ae:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:01.389978 containerd[1473]: 2025-01-13 20:18:01.386 [INFO][3944] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc" Namespace="calico-apiserver" Pod="calico-apiserver-5d9cccdfc4-gpjkp" WorkloadEndpoint="10.0.0.86-k8s-calico--apiserver--5d9cccdfc4--gpjkp-eth0"
Jan 13 20:18:01.404532 systemd[1]: Started cri-containerd-e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79.scope - libcontainer container e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79.
Jan 13 20:18:01.416761 containerd[1473]: time="2025-01-13T20:18:01.416654441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f94864fc5-tqbwc,Uid:ee272026-1034-439f-966c-6692ba8e0711,Namespace:calico-system,Attempt:6,} returns sandbox id \"aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0\""
Jan 13 20:18:01.421539 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:18:01.421735 containerd[1473]: time="2025-01-13T20:18:01.421394934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:18:01.421735 containerd[1473]: time="2025-01-13T20:18:01.421456432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:18:01.421735 containerd[1473]: time="2025-01-13T20:18:01.421469308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.421735 containerd[1473]: time="2025-01-13T20:18:01.421546282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:01.433084 containerd[1473]: time="2025-01-13T20:18:01.433020062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4sxhx,Uid:9744204a-04ef-4999-88e2-3d074458261a,Namespace:calico-system,Attempt:4,} returns sandbox id \"e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79\""
Jan 13 20:18:01.457488 systemd[1]: Started cri-containerd-851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc.scope - libcontainer container 851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc.
Jan 13 20:18:01.468425 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:18:01.484669 containerd[1473]: time="2025-01-13T20:18:01.484633338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d9cccdfc4-gpjkp,Uid:cdcf8404-fe95-41b4-a33a-7e988931f7a3,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc\""
Jan 13 20:18:01.620106 kubelet[1780]: E0113 20:18:01.619951    1780 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:01.629553 systemd[1]: run-netns-cni\x2d18b90be1\x2d5704\x2de40f\x2d18b5\x2dd6318de29d74.mount: Deactivated successfully.
Jan 13 20:18:01.629644 systemd[1]: run-netns-cni\x2df7d93736\x2d3b3b\x2dc145\x2d3503\x2dbf703d2a7da7.mount: Deactivated successfully.
Jan 13 20:18:01.629701 systemd[1]: run-netns-cni\x2d981f6211\x2d5792\x2d7012\x2d757a\x2d1e0038f457ac.mount: Deactivated successfully.
Jan 13 20:18:01.629744 systemd[1]: run-netns-cni\x2dc77066d9\x2dc422\x2d3020\x2d7d87\x2d11004dbeb968.mount: Deactivated successfully.
Jan 13 20:18:01.637349 kubelet[1780]: E0113 20:18:01.637304    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:02.003833 kubelet[1780]: E0113 20:18:02.003797    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:02.289644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2004760119.mount: Deactivated successfully.
Jan 13 20:18:02.431473 systemd-networkd[1401]: caliec7f3c0d4c3: Gained IPv6LL
Jan 13 20:18:02.495475 systemd-networkd[1401]: cali9bbe1dc006c: Gained IPv6LL
Jan 13 20:18:02.638110 kubelet[1780]: E0113 20:18:02.637990    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:02.815532 systemd-networkd[1401]: calib1e5d75f85c: Gained IPv6LL
Jan 13 20:18:02.880026 systemd-networkd[1401]: cali653e9a64f7b: Gained IPv6LL
Jan 13 20:18:02.881412 systemd-networkd[1401]: cali5bfb40fffe0: Gained IPv6LL
Jan 13 20:18:03.005795 kubelet[1780]: E0113 20:18:03.005763    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:03.199510 systemd-networkd[1401]: calie340b19899c: Gained IPv6LL
Jan 13 20:18:03.199782 systemd-networkd[1401]: cali12db6cfce3d: Gained IPv6LL
Jan 13 20:18:03.382920 containerd[1473]: time="2025-01-13T20:18:03.382785608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:03.384877 containerd[1473]: time="2025-01-13T20:18:03.384815515Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Jan 13 20:18:03.386265 containerd[1473]: time="2025-01-13T20:18:03.386158710Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:03.401185 containerd[1473]: time="2025-01-13T20:18:03.401130352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:03.402306 containerd[1473]: time="2025-01-13T20:18:03.402256132Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.135086958s"
Jan 13 20:18:03.402306 containerd[1473]: time="2025-01-13T20:18:03.402298759Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 13 20:18:03.403772 containerd[1473]: time="2025-01-13T20:18:03.403680262Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 20:18:03.405037 containerd[1473]: time="2025-01-13T20:18:03.405008661Z" level=info msg="CreateContainer within sandbox \"b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:18:03.415161 containerd[1473]: time="2025-01-13T20:18:03.415030477Z" level=info msg="CreateContainer within sandbox \"b45c5085825b9c280925917983aff9b119c27557c0e188f561fd990c3ffcf6a9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae74588b7c3f06c7d5a794cdbe0fb2fab077a636b6b74a8d26de5d284b5da0ba\""
Jan 13 20:18:03.415479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2769253265.mount: Deactivated successfully.
Jan 13 20:18:03.416249 containerd[1473]: time="2025-01-13T20:18:03.416217838Z" level=info msg="StartContainer for \"ae74588b7c3f06c7d5a794cdbe0fb2fab077a636b6b74a8d26de5d284b5da0ba\""
Jan 13 20:18:03.529477 systemd[1]: Started cri-containerd-ae74588b7c3f06c7d5a794cdbe0fb2fab077a636b6b74a8d26de5d284b5da0ba.scope - libcontainer container ae74588b7c3f06c7d5a794cdbe0fb2fab077a636b6b74a8d26de5d284b5da0ba.
Jan 13 20:18:03.550810 containerd[1473]: time="2025-01-13T20:18:03.550710449Z" level=info msg="StartContainer for \"ae74588b7c3f06c7d5a794cdbe0fb2fab077a636b6b74a8d26de5d284b5da0ba\" returns successfully"
Jan 13 20:18:03.638915 kubelet[1780]: E0113 20:18:03.638801    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:04.011002 kubelet[1780]: E0113 20:18:04.010266    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:04.639833 kubelet[1780]: E0113 20:18:04.639797    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:05.013106 kubelet[1780]: E0113 20:18:05.013067    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:05.640765 kubelet[1780]: E0113 20:18:05.640723    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:05.753674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3928402907.mount: Deactivated successfully.
Jan 13 20:18:06.017052 kubelet[1780]: E0113 20:18:06.017027    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:06.576536 containerd[1473]: time="2025-01-13T20:18:06.576488167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:06.577063 containerd[1473]: time="2025-01-13T20:18:06.577025833Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67697045"
Jan 13 20:18:06.577713 containerd[1473]: time="2025-01-13T20:18:06.577659116Z" level=info msg="ImageCreate event name:\"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:06.580359 containerd[1473]: time="2025-01-13T20:18:06.580293301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:06.582405 containerd[1473]: time="2025-01-13T20:18:06.581881306Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 3.178168973s"
Jan 13 20:18:06.582405 containerd[1473]: time="2025-01-13T20:18:06.581916417Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\""
Jan 13 20:18:06.583592 containerd[1473]: time="2025-01-13T20:18:06.583561688Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:18:06.584492 containerd[1473]: time="2025-01-13T20:18:06.584408317Z" level=info msg="CreateContainer within sandbox \"aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 13 20:18:06.597191 containerd[1473]: time="2025-01-13T20:18:06.597142631Z" level=info msg="CreateContainer within sandbox \"aaa35e551613422cc274df21f059b440415bae9da97e39a4a1d8f0a031afc624\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3c27b1108474098a10d9a69bd1a509e1fe6d87a43bacff85dcf6d0d484594cff\""
Jan 13 20:18:06.597677 containerd[1473]: time="2025-01-13T20:18:06.597623311Z" level=info msg="StartContainer for \"3c27b1108474098a10d9a69bd1a509e1fe6d87a43bacff85dcf6d0d484594cff\""
Jan 13 20:18:06.635506 systemd[1]: Started cri-containerd-3c27b1108474098a10d9a69bd1a509e1fe6d87a43bacff85dcf6d0d484594cff.scope - libcontainer container 3c27b1108474098a10d9a69bd1a509e1fe6d87a43bacff85dcf6d0d484594cff.
Jan 13 20:18:06.641967 kubelet[1780]: E0113 20:18:06.641910    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:06.699840 containerd[1473]: time="2025-01-13T20:18:06.699786068Z" level=info msg="StartContainer for \"3c27b1108474098a10d9a69bd1a509e1fe6d87a43bacff85dcf6d0d484594cff\" returns successfully"
Jan 13 20:18:06.735364 containerd[1473]: time="2025-01-13T20:18:06.732346092Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:06.735364 containerd[1473]: time="2025-01-13T20:18:06.732938624Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=0"
Jan 13 20:18:06.737085 containerd[1473]: time="2025-01-13T20:18:06.737049962Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 153.306439ms"
Jan 13 20:18:06.737229 containerd[1473]: time="2025-01-13T20:18:06.737209362Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 13 20:18:06.738280 containerd[1473]: time="2025-01-13T20:18:06.738252343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Jan 13 20:18:06.740246 containerd[1473]: time="2025-01-13T20:18:06.740215255Z" level=info msg="CreateContainer within sandbox \"22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:18:06.751880 containerd[1473]: time="2025-01-13T20:18:06.751844203Z" level=info msg="CreateContainer within sandbox \"22a40a7e7a1784df4dd5a5d7bb19ae09a529abaf7f954aa1eaca33571f42ae2f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"496e7dbcabd673ca04efcc54018b257d3fef7dc2eec987fae32eaa1d09cecfca\""
Jan 13 20:18:06.752467 containerd[1473]: time="2025-01-13T20:18:06.752428898Z" level=info msg="StartContainer for \"496e7dbcabd673ca04efcc54018b257d3fef7dc2eec987fae32eaa1d09cecfca\""
Jan 13 20:18:06.779498 systemd[1]: Started cri-containerd-496e7dbcabd673ca04efcc54018b257d3fef7dc2eec987fae32eaa1d09cecfca.scope - libcontainer container 496e7dbcabd673ca04efcc54018b257d3fef7dc2eec987fae32eaa1d09cecfca.
Jan 13 20:18:06.808011 containerd[1473]: time="2025-01-13T20:18:06.807971167Z" level=info msg="StartContainer for \"496e7dbcabd673ca04efcc54018b257d3fef7dc2eec987fae32eaa1d09cecfca\" returns successfully"
Jan 13 20:18:07.021632 kubelet[1780]: E0113 20:18:07.021607    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:07.031022 kubelet[1780]: I0113 20:18:07.030916    1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-x5ccf" podStartSLOduration=35.894284405 podStartE2EDuration="38.030869821s" podCreationTimestamp="2025-01-13 20:17:29 +0000 UTC" firstStartedPulling="2025-01-13 20:18:01.26691486 +0000 UTC m=+20.938063239" lastFinishedPulling="2025-01-13 20:18:03.403500276 +0000 UTC m=+23.074648655" observedRunningTime="2025-01-13 20:18:04.023345226 +0000 UTC m=+23.694493605" watchObservedRunningTime="2025-01-13 20:18:07.030869821 +0000 UTC m=+26.702018200"
Jan 13 20:18:07.031177 kubelet[1780]: I0113 20:18:07.031050    1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7h9bs" podStartSLOduration=32.627254684 podStartE2EDuration="38.031030383s" podCreationTimestamp="2025-01-13 20:17:29 +0000 UTC" firstStartedPulling="2025-01-13 20:18:01.334230145 +0000 UTC m=+21.005378524" lastFinishedPulling="2025-01-13 20:18:06.738005844 +0000 UTC m=+26.409154223" observedRunningTime="2025-01-13 20:18:07.030855984 +0000 UTC m=+26.702004403" watchObservedRunningTime="2025-01-13 20:18:07.031030383 +0000 UTC m=+26.702178762"
Jan 13 20:18:07.642099 kubelet[1780]: E0113 20:18:07.642053    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:08.033110 kubelet[1780]: E0113 20:18:08.033079    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:08.324012 containerd[1473]: time="2025-01-13T20:18:08.323021171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:08.324012 containerd[1473]: time="2025-01-13T20:18:08.323709221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409"
Jan 13 20:18:08.324493 containerd[1473]: time="2025-01-13T20:18:08.324461816Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:08.326455 containerd[1473]: time="2025-01-13T20:18:08.326418469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:08.327846 containerd[1473]: time="2025-01-13T20:18:08.327814524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.589377067s"
Jan 13 20:18:08.327926 containerd[1473]: time="2025-01-13T20:18:08.327850196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\""
Jan 13 20:18:08.329346 containerd[1473]: time="2025-01-13T20:18:08.329213058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Jan 13 20:18:08.330089 containerd[1473]: time="2025-01-13T20:18:08.329904907Z" level=info msg="CreateContainer within sandbox \"b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 13 20:18:08.339691 containerd[1473]: time="2025-01-13T20:18:08.339580352Z" level=info msg="CreateContainer within sandbox \"b0af417634753c1e54e4f84be1d4c61d03d5e1b4be1748d819884922ec987bb3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1d2017153e64556440d1660c86039a989de445341fcd27ca6124ac9a24a1064c\""
Jan 13 20:18:08.340149 containerd[1473]: time="2025-01-13T20:18:08.340064327Z" level=info msg="StartContainer for \"1d2017153e64556440d1660c86039a989de445341fcd27ca6124ac9a24a1064c\""
Jan 13 20:18:08.394529 systemd[1]: Started cri-containerd-1d2017153e64556440d1660c86039a989de445341fcd27ca6124ac9a24a1064c.scope - libcontainer container 1d2017153e64556440d1660c86039a989de445341fcd27ca6124ac9a24a1064c.
Jan 13 20:18:08.497895 containerd[1473]: time="2025-01-13T20:18:08.497832448Z" level=info msg="StartContainer for \"1d2017153e64556440d1660c86039a989de445341fcd27ca6124ac9a24a1064c\" returns successfully"
Jan 13 20:18:08.643289 kubelet[1780]: E0113 20:18:08.643149    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:09.040786 kubelet[1780]: E0113 20:18:09.040746    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:09.052988 kubelet[1780]: I0113 20:18:09.052831    1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-mpttr" podStartSLOduration=14.110111004 podStartE2EDuration="21.052816305s" podCreationTimestamp="2025-01-13 20:17:48 +0000 UTC" firstStartedPulling="2025-01-13 20:18:01.385838503 +0000 UTC m=+21.056986882" lastFinishedPulling="2025-01-13 20:18:08.328543804 +0000 UTC m=+27.999692183" observedRunningTime="2025-01-13 20:18:09.05278935 +0000 UTC m=+28.723937769" watchObservedRunningTime="2025-01-13 20:18:09.052816305 +0000 UTC m=+28.723964684"
Jan 13 20:18:09.053179 kubelet[1780]: I0113 20:18:09.053145    1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-xdxbv" podStartSLOduration=6.782640291 podStartE2EDuration="12.053136519s" podCreationTimestamp="2025-01-13 20:17:57 +0000 UTC" firstStartedPulling="2025-01-13 20:18:01.312562505 +0000 UTC m=+20.983710884" lastFinishedPulling="2025-01-13 20:18:06.583058733 +0000 UTC m=+26.254207112" observedRunningTime="2025-01-13 20:18:07.039310413 +0000 UTC m=+26.710458792" watchObservedRunningTime="2025-01-13 20:18:09.053136519 +0000 UTC m=+28.724284898"
Jan 13 20:18:09.643753 kubelet[1780]: E0113 20:18:09.643711    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:09.926631 containerd[1473]: time="2025-01-13T20:18:09.926519182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:09.927774 containerd[1473]: time="2025-01-13T20:18:09.927728894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828"
Jan 13 20:18:09.928442 containerd[1473]: time="2025-01-13T20:18:09.928388559Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:09.931073 containerd[1473]: time="2025-01-13T20:18:09.931038096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:09.932032 containerd[1473]: time="2025-01-13T20:18:09.931998739Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.602760046s"
Jan 13 20:18:09.932075 containerd[1473]: time="2025-01-13T20:18:09.932031812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\""
Jan 13 20:18:09.933719 containerd[1473]: time="2025-01-13T20:18:09.933631564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 13 20:18:09.947728 containerd[1473]: time="2025-01-13T20:18:09.947537435Z" level=info msg="CreateContainer within sandbox \"aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jan 13 20:18:09.957942 containerd[1473]: time="2025-01-13T20:18:09.957903632Z" level=info msg="CreateContainer within sandbox \"aa9ef12153fe4a4ddb2d4305acbe0175480cd86b8ab630b9ef5ea89bddb247d0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e2e131ea41c3bd77db64948c8cab19c657693bbfa2e0d0ed432a7b8268fa9791\""
Jan 13 20:18:09.958555 containerd[1473]: time="2025-01-13T20:18:09.958537782Z" level=info msg="StartContainer for \"e2e131ea41c3bd77db64948c8cab19c657693bbfa2e0d0ed432a7b8268fa9791\""
Jan 13 20:18:09.987491 systemd[1]: Started cri-containerd-e2e131ea41c3bd77db64948c8cab19c657693bbfa2e0d0ed432a7b8268fa9791.scope - libcontainer container e2e131ea41c3bd77db64948c8cab19c657693bbfa2e0d0ed432a7b8268fa9791.
Jan 13 20:18:10.019854 containerd[1473]: time="2025-01-13T20:18:10.019814997Z" level=info msg="StartContainer for \"e2e131ea41c3bd77db64948c8cab19c657693bbfa2e0d0ed432a7b8268fa9791\" returns successfully"
Jan 13 20:18:10.644148 kubelet[1780]: E0113 20:18:10.644098    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:10.670210 kubelet[1780]: I0113 20:18:10.669204    1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f94864fc5-tqbwc" podStartSLOduration=13.15422011 podStartE2EDuration="21.669186431s" podCreationTimestamp="2025-01-13 20:17:49 +0000 UTC" firstStartedPulling="2025-01-13 20:18:01.4179975 +0000 UTC m=+21.089145879" lastFinishedPulling="2025-01-13 20:18:09.932963821 +0000 UTC m=+29.604112200" observedRunningTime="2025-01-13 20:18:10.067318313 +0000 UTC m=+29.738466692" watchObservedRunningTime="2025-01-13 20:18:10.669186431 +0000 UTC m=+30.340334810"
Jan 13 20:18:10.670210 kubelet[1780]: I0113 20:18:10.669624    1780 topology_manager.go:215] "Topology Admit Handler" podUID="472948d9-0538-4184-8205-5ea1cf5dd748" podNamespace="default" podName="nfs-server-provisioner-0"
Jan 13 20:18:10.676582 systemd[1]: Created slice kubepods-besteffort-pod472948d9_0538_4184_8205_5ea1cf5dd748.slice - libcontainer container kubepods-besteffort-pod472948d9_0538_4184_8205_5ea1cf5dd748.slice.
Jan 13 20:18:10.829228 kubelet[1780]: I0113 20:18:10.829146    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv6n4\" (UniqueName: \"kubernetes.io/projected/472948d9-0538-4184-8205-5ea1cf5dd748-kube-api-access-vv6n4\") pod \"nfs-server-provisioner-0\" (UID: \"472948d9-0538-4184-8205-5ea1cf5dd748\") " pod="default/nfs-server-provisioner-0"
Jan 13 20:18:10.829375 kubelet[1780]: I0113 20:18:10.829289    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/472948d9-0538-4184-8205-5ea1cf5dd748-data\") pod \"nfs-server-provisioner-0\" (UID: \"472948d9-0538-4184-8205-5ea1cf5dd748\") " pod="default/nfs-server-provisioner-0"
Jan 13 20:18:10.965922 containerd[1473]: time="2025-01-13T20:18:10.965789302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:10.967837 containerd[1473]: time="2025-01-13T20:18:10.967783439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730"
Jan 13 20:18:10.968795 containerd[1473]: time="2025-01-13T20:18:10.968759292Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:10.971730 containerd[1473]: time="2025-01-13T20:18:10.971681530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:10.973119 containerd[1473]: time="2025-01-13T20:18:10.973083661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.039418384s"
Jan 13 20:18:10.973161 containerd[1473]: time="2025-01-13T20:18:10.973131572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\""
Jan 13 20:18:10.974423 containerd[1473]: time="2025-01-13T20:18:10.974395489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Jan 13 20:18:10.975668 containerd[1473]: time="2025-01-13T20:18:10.975540429Z" level=info msg="CreateContainer within sandbox \"e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 13 20:18:10.981121 containerd[1473]: time="2025-01-13T20:18:10.981074606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:472948d9-0538-4184-8205-5ea1cf5dd748,Namespace:default,Attempt:0,}"
Jan 13 20:18:11.024864 containerd[1473]: time="2025-01-13T20:18:11.024815859Z" level=info msg="CreateContainer within sandbox \"e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3fbf413240df774f75c3134c78d82cd66092c7c05e3d8451bb409a5844f4a5a8\""
Jan 13 20:18:11.025570 containerd[1473]: time="2025-01-13T20:18:11.025541608Z" level=info msg="StartContainer for \"3fbf413240df774f75c3134c78d82cd66092c7c05e3d8451bb409a5844f4a5a8\""
Jan 13 20:18:11.061518 systemd[1]: Started cri-containerd-3fbf413240df774f75c3134c78d82cd66092c7c05e3d8451bb409a5844f4a5a8.scope - libcontainer container 3fbf413240df774f75c3134c78d82cd66092c7c05e3d8451bb409a5844f4a5a8.
Jan 13 20:18:11.180551 systemd-networkd[1401]: cali60e51b789ff: Link UP
Jan 13 20:18:11.180729 systemd-networkd[1401]: cali60e51b789ff: Gained carrier
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.043 [INFO][5070] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.060 [INFO][5070] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.86-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default  472948d9-0538-4184-8205-5ea1cf5dd748 1055 0 2025-01-13 20:18:10 +0000 UTC <nil> <nil> map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s  10.0.0.86  nfs-server-provisioner-0 eth0 nfs-server-provisioner [] []   [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff  [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.86-k8s-nfs--server--provisioner--0-"
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.060 [INFO][5070] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.86-k8s-nfs--server--provisioner--0-eth0"
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.096 [INFO][5118] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" HandleID="k8s-pod-network.937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" Workload="10.0.0.86-k8s-nfs--server--provisioner--0-eth0"
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.111 [INFO][5118] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" HandleID="k8s-pod-network.937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" Workload="10.0.0.86-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d710), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.86", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-13 20:18:11.096094984 +0000 UTC"}, Hostname:"10.0.0.86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.112 [INFO][5118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.112 [INFO][5118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.112 [INFO][5118] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.86'
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.114 [INFO][5118] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" host="10.0.0.86"
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.118 [INFO][5118] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.86"
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.123 [INFO][5118] ipam/ipam.go 489: Trying affinity for 192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.125 [INFO][5118] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.128 [INFO][5118] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.128 [INFO][5118] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.192/26 handle="k8s-pod-network.937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" host="10.0.0.86"
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.130 [INFO][5118] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.141 [INFO][5118] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.192/26 handle="k8s-pod-network.937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" host="10.0.0.86"
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.175 [INFO][5118] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.200/26] block=192.168.21.192/26 handle="k8s-pod-network.937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" host="10.0.0.86"
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.175 [INFO][5118] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.200/26] handle="k8s-pod-network.937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" host="10.0.0.86"
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.175 [INFO][5118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:18:11.193086 containerd[1473]: 2025-01-13 20:18:11.175 [INFO][5118] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.200/26] IPv6=[] ContainerID="937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" HandleID="k8s-pod-network.937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" Workload="10.0.0.86-k8s-nfs--server--provisioner--0-eth0"
Jan 13 20:18:11.193973 containerd[1473]: 2025-01-13 20:18:11.176 [INFO][5070] cni-plugin/k8s.go 386: Populated endpoint ContainerID="937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.86-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"472948d9-0538-4184-8205-5ea1cf5dd748", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 18, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.21.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:11.193973 containerd[1473]: 2025-01-13 20:18:11.177 [INFO][5070] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.200/32] ContainerID="937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.86-k8s-nfs--server--provisioner--0-eth0"
Jan 13 20:18:11.193973 containerd[1473]: 2025-01-13 20:18:11.177 [INFO][5070] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.86-k8s-nfs--server--provisioner--0-eth0"
Jan 13 20:18:11.193973 containerd[1473]: 2025-01-13 20:18:11.180 [INFO][5070] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.86-k8s-nfs--server--provisioner--0-eth0"
Jan 13 20:18:11.194202 containerd[1473]: 2025-01-13 20:18:11.181 [INFO][5070] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.86-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"472948d9-0538-4184-8205-5ea1cf5dd748", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 18, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.21.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"3e:9c:4a:97:8e:f1", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:11.194202 containerd[1473]: 2025-01-13 20:18:11.191 [INFO][5070] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.86-k8s-nfs--server--provisioner--0-eth0"
Jan 13 20:18:11.196971 containerd[1473]: time="2025-01-13T20:18:11.196917509Z" level=info msg="StartContainer for \"3fbf413240df774f75c3134c78d82cd66092c7c05e3d8451bb409a5844f4a5a8\" returns successfully"
Jan 13 20:18:11.219096 containerd[1473]: time="2025-01-13T20:18:11.218635759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:18:11.219096 containerd[1473]: time="2025-01-13T20:18:11.218993014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:18:11.219096 containerd[1473]: time="2025-01-13T20:18:11.219015970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:11.219304 containerd[1473]: time="2025-01-13T20:18:11.219098355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:11.242565 systemd[1]: Started cri-containerd-937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588.scope - libcontainer container 937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588.
Jan 13 20:18:11.253507 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:18:11.262804 containerd[1473]: time="2025-01-13T20:18:11.262763213Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:11.264864 containerd[1473]: time="2025-01-13T20:18:11.263641775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Jan 13 20:18:11.265779 containerd[1473]: time="2025-01-13T20:18:11.265751555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 291.219892ms"
Jan 13 20:18:11.266063 containerd[1473]: time="2025-01-13T20:18:11.265782429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\""
Jan 13 20:18:11.267483 containerd[1473]: time="2025-01-13T20:18:11.267450449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 13 20:18:11.268489 containerd[1473]: time="2025-01-13T20:18:11.268300456Z" level=info msg="CreateContainer within sandbox \"851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 13 20:18:11.271708 containerd[1473]: time="2025-01-13T20:18:11.271651212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:472948d9-0538-4184-8205-5ea1cf5dd748,Namespace:default,Attempt:0,} returns sandbox id \"937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588\""
Jan 13 20:18:11.282485 containerd[1473]: time="2025-01-13T20:18:11.282343647Z" level=info msg="CreateContainer within sandbox \"851c9aeb74b79e3af121ab2cb45f9670cead7f8de7266c6a5443d50cea8cabfc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b65fa8a78855eda898878cee84b00b43e2df89e64683c8ce10ca09431d7245fd\""
Jan 13 20:18:11.283431 containerd[1473]: time="2025-01-13T20:18:11.282783208Z" level=info msg="StartContainer for \"b65fa8a78855eda898878cee84b00b43e2df89e64683c8ce10ca09431d7245fd\""
Jan 13 20:18:11.307509 systemd[1]: Started cri-containerd-b65fa8a78855eda898878cee84b00b43e2df89e64683c8ce10ca09431d7245fd.scope - libcontainer container b65fa8a78855eda898878cee84b00b43e2df89e64683c8ce10ca09431d7245fd.
Jan 13 20:18:11.371209 containerd[1473]: time="2025-01-13T20:18:11.371151256Z" level=info msg="StartContainer for \"b65fa8a78855eda898878cee84b00b43e2df89e64683c8ce10ca09431d7245fd\" returns successfully"
Jan 13 20:18:11.644516 kubelet[1780]: E0113 20:18:11.644464    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:12.387031 containerd[1473]: time="2025-01-13T20:18:12.386965964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:12.390048 containerd[1473]: time="2025-01-13T20:18:12.389950221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368"
Jan 13 20:18:12.391059 containerd[1473]: time="2025-01-13T20:18:12.391024039Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:12.395548 containerd[1473]: time="2025-01-13T20:18:12.395419817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:12.396266 containerd[1473]: time="2025-01-13T20:18:12.396096743Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.12861026s"
Jan 13 20:18:12.396266 containerd[1473]: time="2025-01-13T20:18:12.396143415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\""
Jan 13 20:18:12.398354 containerd[1473]: time="2025-01-13T20:18:12.398318168Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 13 20:18:12.403692 containerd[1473]: time="2025-01-13T20:18:12.403657707Z" level=info msg="CreateContainer within sandbox \"e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 13 20:18:12.423739 containerd[1473]: time="2025-01-13T20:18:12.423691405Z" level=info msg="CreateContainer within sandbox \"e257807b969dd8ef0f8d56ba2af71ce5e2ffb0729ce0c8b2a847008229488f79\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9ace8fd6350c4db0b809fee0aa67c21c1e667f6b6aeb662b4ed8ba2f4777b17d\""
Jan 13 20:18:12.425025 containerd[1473]: time="2025-01-13T20:18:12.424855808Z" level=info msg="StartContainer for \"9ace8fd6350c4db0b809fee0aa67c21c1e667f6b6aeb662b4ed8ba2f4777b17d\""
Jan 13 20:18:12.456524 systemd[1]: Started cri-containerd-9ace8fd6350c4db0b809fee0aa67c21c1e667f6b6aeb662b4ed8ba2f4777b17d.scope - libcontainer container 9ace8fd6350c4db0b809fee0aa67c21c1e667f6b6aeb662b4ed8ba2f4777b17d.
Jan 13 20:18:12.480867 containerd[1473]: time="2025-01-13T20:18:12.480786727Z" level=info msg="StartContainer for \"9ace8fd6350c4db0b809fee0aa67c21c1e667f6b6aeb662b4ed8ba2f4777b17d\" returns successfully"
Jan 13 20:18:12.644791 kubelet[1780]: E0113 20:18:12.644685    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:12.814166 kubelet[1780]: I0113 20:18:12.814114    1780 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 13 20:18:12.816494 kubelet[1780]: I0113 20:18:12.816465    1780 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 13 20:18:13.055497 systemd-networkd[1401]: cali60e51b789ff: Gained IPv6LL
Jan 13 20:18:13.074354 kubelet[1780]: I0113 20:18:13.074282    1780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:18:13.085041 kubelet[1780]: I0113 20:18:13.084986    1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d9cccdfc4-gpjkp" podStartSLOduration=15.304300426 podStartE2EDuration="25.084970452s" podCreationTimestamp="2025-01-13 20:17:48 +0000 UTC" firstStartedPulling="2025-01-13 20:18:01.485913219 +0000 UTC m=+21.157061598" lastFinishedPulling="2025-01-13 20:18:11.266583245 +0000 UTC m=+30.937731624" observedRunningTime="2025-01-13 20:18:12.081276847 +0000 UTC m=+31.752425226" watchObservedRunningTime="2025-01-13 20:18:13.084970452 +0000 UTC m=+32.756118831"
Jan 13 20:18:13.645494 kubelet[1780]: E0113 20:18:13.645432    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:14.231589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2902503674.mount: Deactivated successfully.
Jan 13 20:18:14.645818 kubelet[1780]: E0113 20:18:14.645541    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:15.646301 kubelet[1780]: E0113 20:18:15.646254    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:15.769280 containerd[1473]: time="2025-01-13T20:18:15.769216183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:15.770228 containerd[1473]: time="2025-01-13T20:18:15.769952640Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625"
Jan 13 20:18:15.771036 containerd[1473]: time="2025-01-13T20:18:15.770971139Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:15.774014 containerd[1473]: time="2025-01-13T20:18:15.773945925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:15.775196 containerd[1473]: time="2025-01-13T20:18:15.775069529Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.376708648s"
Jan 13 20:18:15.775196 containerd[1473]: time="2025-01-13T20:18:15.775105644Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Jan 13 20:18:15.777559 containerd[1473]: time="2025-01-13T20:18:15.777522588Z" level=info msg="CreateContainer within sandbox \"937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 13 20:18:15.797046 containerd[1473]: time="2025-01-13T20:18:15.796896293Z" level=info msg="CreateContainer within sandbox \"937c2dc47c7d90f1aeac3e05c6311584bb1ea2e720e4641cf14a525692eb5588\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"1172f92b3522c050de19bdffcfe23de57dbc510fd02528f833c2cf87492cd9ef\""
Jan 13 20:18:15.797549 containerd[1473]: time="2025-01-13T20:18:15.797521526Z" level=info msg="StartContainer for \"1172f92b3522c050de19bdffcfe23de57dbc510fd02528f833c2cf87492cd9ef\""
Jan 13 20:18:15.831529 systemd[1]: Started cri-containerd-1172f92b3522c050de19bdffcfe23de57dbc510fd02528f833c2cf87492cd9ef.scope - libcontainer container 1172f92b3522c050de19bdffcfe23de57dbc510fd02528f833c2cf87492cd9ef.
Jan 13 20:18:15.860398 containerd[1473]: time="2025-01-13T20:18:15.860028992Z" level=info msg="StartContainer for \"1172f92b3522c050de19bdffcfe23de57dbc510fd02528f833c2cf87492cd9ef\" returns successfully"
Jan 13 20:18:16.093648 kubelet[1780]: I0113 20:18:16.093592    1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.59585165 podStartE2EDuration="6.093574996s" podCreationTimestamp="2025-01-13 20:18:10 +0000 UTC" firstStartedPulling="2025-01-13 20:18:11.278404396 +0000 UTC m=+30.949552775" lastFinishedPulling="2025-01-13 20:18:15.776127742 +0000 UTC m=+35.447276121" observedRunningTime="2025-01-13 20:18:16.093433254 +0000 UTC m=+35.764581633" watchObservedRunningTime="2025-01-13 20:18:16.093574996 +0000 UTC m=+35.764723375"
Jan 13 20:18:16.093832 kubelet[1780]: I0113 20:18:16.093717    1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4sxhx" podStartSLOduration=16.13043996 podStartE2EDuration="27.093712018s" podCreationTimestamp="2025-01-13 20:17:49 +0000 UTC" firstStartedPulling="2025-01-13 20:18:01.434222969 +0000 UTC m=+21.105371308" lastFinishedPulling="2025-01-13 20:18:12.397495027 +0000 UTC m=+32.068643366" observedRunningTime="2025-01-13 20:18:13.085292881 +0000 UTC m=+32.756441260" watchObservedRunningTime="2025-01-13 20:18:16.093712018 +0000 UTC m=+35.764860357"
Jan 13 20:18:16.646976 kubelet[1780]: E0113 20:18:16.646921    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:16.787606 kubelet[1780]: I0113 20:18:16.787559    1780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:18:16.788298 kubelet[1780]: E0113 20:18:16.788277    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:17.085527 kubelet[1780]: E0113 20:18:17.085500    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:17.153339 update_engine[1455]: I20250113 20:18:17.152364  1455 update_attempter.cc:509] Updating boot flags...
Jan 13 20:18:17.204226 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (5576)
Jan 13 20:18:17.235006 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (5576)
Jan 13 20:18:17.272356 kernel: bpftool[5591]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 13 20:18:17.285750 kubelet[1780]: I0113 20:18:17.285710    1780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:18:17.442976 systemd-networkd[1401]: vxlan.calico: Link UP
Jan 13 20:18:17.442984 systemd-networkd[1401]: vxlan.calico: Gained carrier
Jan 13 20:18:17.647852 kubelet[1780]: E0113 20:18:17.647783    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:18.559500 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL
Jan 13 20:18:18.648225 kubelet[1780]: E0113 20:18:18.648176    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:19.649138 kubelet[1780]: E0113 20:18:19.649084    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:20.649720 kubelet[1780]: E0113 20:18:20.649663    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:21.620099 kubelet[1780]: E0113 20:18:21.620049    1780 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:21.650418 kubelet[1780]: E0113 20:18:21.650388    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:22.650711 kubelet[1780]: E0113 20:18:22.650665    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:23.651625 kubelet[1780]: E0113 20:18:23.651559    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:24.652744 kubelet[1780]: E0113 20:18:24.652685    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:25.563900 kubelet[1780]: E0113 20:18:25.563701    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:25.564033 kubelet[1780]: E0113 20:18:25.563943    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:25.653442 kubelet[1780]: E0113 20:18:25.653382    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:26.654120 kubelet[1780]: E0113 20:18:26.654071    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:27.654701 kubelet[1780]: E0113 20:18:27.654655    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:28.655050 kubelet[1780]: E0113 20:18:28.655006    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:29.655386 kubelet[1780]: E0113 20:18:29.655319    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:30.041155 kubelet[1780]: I0113 20:18:30.041104    1780 topology_manager.go:215] "Topology Admit Handler" podUID="46dd06b9-81d2-4834-897c-c0c8c02835af" podNamespace="default" podName="test-pod-1"
Jan 13 20:18:30.048482 systemd[1]: Created slice kubepods-besteffort-pod46dd06b9_81d2_4834_897c_c0c8c02835af.slice - libcontainer container kubepods-besteffort-pod46dd06b9_81d2_4834_897c_c0c8c02835af.slice.
Jan 13 20:18:30.145545 kubelet[1780]: I0113 20:18:30.145442    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5btm\" (UniqueName: \"kubernetes.io/projected/46dd06b9-81d2-4834-897c-c0c8c02835af-kube-api-access-s5btm\") pod \"test-pod-1\" (UID: \"46dd06b9-81d2-4834-897c-c0c8c02835af\") " pod="default/test-pod-1"
Jan 13 20:18:30.145545 kubelet[1780]: I0113 20:18:30.145487    1780 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b733095f-151c-4da8-8672-98952e2906cf\" (UniqueName: \"kubernetes.io/nfs/46dd06b9-81d2-4834-897c-c0c8c02835af-pvc-b733095f-151c-4da8-8672-98952e2906cf\") pod \"test-pod-1\" (UID: \"46dd06b9-81d2-4834-897c-c0c8c02835af\") " pod="default/test-pod-1"
Jan 13 20:18:30.270357 kernel: FS-Cache: Loaded
Jan 13 20:18:30.294623 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 20:18:30.294769 kernel: RPC: Registered udp transport module.
Jan 13 20:18:30.294799 kernel: RPC: Registered tcp transport module.
Jan 13 20:18:30.294815 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 20:18:30.294831 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 20:18:30.478509 kernel: NFS: Registering the id_resolver key type
Jan 13 20:18:30.478703 kernel: Key type id_resolver registered
Jan 13 20:18:30.478721 kernel: Key type id_legacy registered
Jan 13 20:18:30.501018 nfsidmap[5754]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 13 20:18:30.504532 nfsidmap[5757]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 13 20:18:30.579936 kubelet[1780]: E0113 20:18:30.579897    1780 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:18:30.652837 containerd[1473]: time="2025-01-13T20:18:30.652781469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:46dd06b9-81d2-4834-897c-c0c8c02835af,Namespace:default,Attempt:0,}"
Jan 13 20:18:30.655951 kubelet[1780]: E0113 20:18:30.655925    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:30.760533 systemd-networkd[1401]: cali5ec59c6bf6e: Link UP
Jan 13 20:18:30.760825 systemd-networkd[1401]: cali5ec59c6bf6e: Gained carrier
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.693 [INFO][5784] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.86-k8s-test--pod--1-eth0  default  46dd06b9-81d2-4834-897c-c0c8c02835af 1181 0 2025-01-13 20:18:10 +0000 UTC <nil> <nil> map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  10.0.0.86  test-pod-1 eth0 default [] []   [kns.default ksa.default.default] cali5ec59c6bf6e  [] []}} ContainerID="845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.86-k8s-test--pod--1-"
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.693 [INFO][5784] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.86-k8s-test--pod--1-eth0"
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.717 [INFO][5798] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" HandleID="k8s-pod-network.845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" Workload="10.0.0.86-k8s-test--pod--1-eth0"
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.728 [INFO][5798] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" HandleID="k8s-pod-network.845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" Workload="10.0.0.86-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002940a0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.86", "pod":"test-pod-1", "timestamp":"2025-01-13 20:18:30.717684921 +0000 UTC"}, Hostname:"10.0.0.86", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.728 [INFO][5798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.728 [INFO][5798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.728 [INFO][5798] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.86'
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.730 [INFO][5798] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" host="10.0.0.86"
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.734 [INFO][5798] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.86"
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.738 [INFO][5798] ipam/ipam.go 489: Trying affinity for 192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.740 [INFO][5798] ipam/ipam.go 155: Attempting to load block cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.742 [INFO][5798] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.21.192/26 host="10.0.0.86"
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.743 [INFO][5798] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.21.192/26 handle="k8s-pod-network.845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" host="10.0.0.86"
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.744 [INFO][5798] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.750 [INFO][5798] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.21.192/26 handle="k8s-pod-network.845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" host="10.0.0.86"
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.755 [INFO][5798] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.21.201/26] block=192.168.21.192/26 handle="k8s-pod-network.845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" host="10.0.0.86"
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.756 [INFO][5798] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.21.201/26] handle="k8s-pod-network.845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" host="10.0.0.86"
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.756 [INFO][5798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.756 [INFO][5798] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.21.201/26] IPv6=[] ContainerID="845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" HandleID="k8s-pod-network.845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" Workload="10.0.0.86-k8s-test--pod--1-eth0"
Jan 13 20:18:30.775371 containerd[1473]: 2025-01-13 20:18:30.757 [INFO][5784] cni-plugin/k8s.go 386: Populated endpoint ContainerID="845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.86-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"46dd06b9-81d2-4834-897c-c0c8c02835af", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 18, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.21.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:30.776092 containerd[1473]: 2025-01-13 20:18:30.758 [INFO][5784] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.21.201/32] ContainerID="845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.86-k8s-test--pod--1-eth0"
Jan 13 20:18:30.776092 containerd[1473]: 2025-01-13 20:18:30.758 [INFO][5784] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.86-k8s-test--pod--1-eth0"
Jan 13 20:18:30.776092 containerd[1473]: 2025-01-13 20:18:30.760 [INFO][5784] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.86-k8s-test--pod--1-eth0"
Jan 13 20:18:30.776092 containerd[1473]: 2025-01-13 20:18:30.762 [INFO][5784] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.86-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.86-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"46dd06b9-81d2-4834-897c-c0c8c02835af", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 18, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.86", ContainerID:"845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.21.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"62:04:da:c1:16:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:18:30.776092 containerd[1473]: 2025-01-13 20:18:30.768 [INFO][5784] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.86-k8s-test--pod--1-eth0"
Jan 13 20:18:30.790944 containerd[1473]: time="2025-01-13T20:18:30.790846296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:18:30.790944 containerd[1473]: time="2025-01-13T20:18:30.790907933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:18:30.790944 containerd[1473]: time="2025-01-13T20:18:30.790922972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:30.791207 containerd[1473]: time="2025-01-13T20:18:30.791025527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:18:30.812489 systemd[1]: Started cri-containerd-845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930.scope - libcontainer container 845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930.
Jan 13 20:18:30.822366 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:18:30.839678 containerd[1473]: time="2025-01-13T20:18:30.839632039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:46dd06b9-81d2-4834-897c-c0c8c02835af,Namespace:default,Attempt:0,} returns sandbox id \"845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930\""
Jan 13 20:18:30.841086 containerd[1473]: time="2025-01-13T20:18:30.841054924Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 20:18:31.102759 containerd[1473]: time="2025-01-13T20:18:31.102644682Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:18:31.103365 containerd[1473]: time="2025-01-13T20:18:31.103209414Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 20:18:31.106412 containerd[1473]: time="2025-01-13T20:18:31.106382657Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 265.114744ms"
Jan 13 20:18:31.106412 containerd[1473]: time="2025-01-13T20:18:31.106407376Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\""
Jan 13 20:18:31.108180 containerd[1473]: time="2025-01-13T20:18:31.108148169Z" level=info msg="CreateContainer within sandbox \"845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 20:18:31.130975 containerd[1473]: time="2025-01-13T20:18:31.130930521Z" level=info msg="CreateContainer within sandbox \"845eabd13abbf61623b413fbbd5de9246036117063ca67c2d497e11d95dbe930\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e7efd1b55118e55caa701101ce53ec4d9c8baee2d82dce8c9710cdea467cca73\""
Jan 13 20:18:31.131474 containerd[1473]: time="2025-01-13T20:18:31.131401178Z" level=info msg="StartContainer for \"e7efd1b55118e55caa701101ce53ec4d9c8baee2d82dce8c9710cdea467cca73\""
Jan 13 20:18:31.152476 systemd[1]: Started cri-containerd-e7efd1b55118e55caa701101ce53ec4d9c8baee2d82dce8c9710cdea467cca73.scope - libcontainer container e7efd1b55118e55caa701101ce53ec4d9c8baee2d82dce8c9710cdea467cca73.
Jan 13 20:18:31.173534 containerd[1473]: time="2025-01-13T20:18:31.173498893Z" level=info msg="StartContainer for \"e7efd1b55118e55caa701101ce53ec4d9c8baee2d82dce8c9710cdea467cca73\" returns successfully"
Jan 13 20:18:31.656184 kubelet[1780]: E0113 20:18:31.656129    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:31.935508 systemd-networkd[1401]: cali5ec59c6bf6e: Gained IPv6LL
Jan 13 20:18:32.656659 kubelet[1780]: E0113 20:18:32.656617    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:33.656832 kubelet[1780]: E0113 20:18:33.656770    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:34.367257 kubelet[1780]: I0113 20:18:34.367116    1780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=24.100931601 podStartE2EDuration="24.367097492s" podCreationTimestamp="2025-01-13 20:18:10 +0000 UTC" firstStartedPulling="2025-01-13 20:18:30.840818856 +0000 UTC m=+50.511967235" lastFinishedPulling="2025-01-13 20:18:31.106984747 +0000 UTC m=+50.778133126" observedRunningTime="2025-01-13 20:18:32.128185968 +0000 UTC m=+51.799334307" watchObservedRunningTime="2025-01-13 20:18:34.367097492 +0000 UTC m=+54.038245871"
Jan 13 20:18:34.657430 kubelet[1780]: E0113 20:18:34.657281    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:35.658395 kubelet[1780]: E0113 20:18:35.658351    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:36.658765 kubelet[1780]: E0113 20:18:36.658702    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:37.660769 kubelet[1780]: E0113 20:18:37.660714    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:18:38.661020 kubelet[1780]: E0113 20:18:38.660849    1780 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"