Nov 12 17:42:42.880014 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 12 17:42:42.880036 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Nov 12 16:24:35 -00 2024
Nov 12 17:42:42.880045 kernel: KASLR enabled
Nov 12 17:42:42.880051 kernel: efi: EFI v2.7 by EDK II
Nov 12 17:42:42.880057 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 
Nov 12 17:42:42.880063 kernel: random: crng init done
Nov 12 17:42:42.880070 kernel: ACPI: Early table checksum verification disabled
Nov 12 17:42:42.880076 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Nov 12 17:42:42.880082 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS  BXPC     00000001      01000013)
Nov 12 17:42:42.880089 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 12 17:42:42.880095 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 12 17:42:42.880101 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 12 17:42:42.880107 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 12 17:42:42.880113 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 12 17:42:42.880120 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 12 17:42:42.880128 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 12 17:42:42.880134 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 12 17:42:42.880141 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 12 17:42:42.880147 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 12 17:42:42.880153 kernel: NUMA: Failed to initialise from firmware
Nov 12 17:42:42.880159 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 17:42:42.880166 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Nov 12 17:42:42.880172 kernel: Zone ranges:
Nov 12 17:42:42.880178 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 17:42:42.880185 kernel:   DMA32    empty
Nov 12 17:42:42.880192 kernel:   Normal   empty
Nov 12 17:42:42.880198 kernel: Movable zone start for each node
Nov 12 17:42:42.880205 kernel: Early memory node ranges
Nov 12 17:42:42.880211 kernel:   node   0: [mem 0x0000000040000000-0x00000000d976ffff]
Nov 12 17:42:42.880217 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Nov 12 17:42:42.880224 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Nov 12 17:42:42.880230 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Nov 12 17:42:42.880244 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Nov 12 17:42:42.880251 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Nov 12 17:42:42.880257 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 12 17:42:42.880263 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 12 17:42:42.880270 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 12 17:42:42.880277 kernel: psci: probing for conduit method from ACPI.
Nov 12 17:42:42.880284 kernel: psci: PSCIv1.1 detected in firmware.
Nov 12 17:42:42.880290 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 12 17:42:42.880299 kernel: psci: Trusted OS migration not required
Nov 12 17:42:42.880306 kernel: psci: SMC Calling Convention v1.1
Nov 12 17:42:42.880313 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 12 17:42:42.880321 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Nov 12 17:42:42.880328 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Nov 12 17:42:42.880335 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
Nov 12 17:42:42.880341 kernel: Detected PIPT I-cache on CPU0
Nov 12 17:42:42.880348 kernel: CPU features: detected: GIC system register CPU interface
Nov 12 17:42:42.880355 kernel: CPU features: detected: Hardware dirty bit management
Nov 12 17:42:42.880361 kernel: CPU features: detected: Spectre-v4
Nov 12 17:42:42.880368 kernel: CPU features: detected: Spectre-BHB
Nov 12 17:42:42.880375 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 12 17:42:42.880382 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 12 17:42:42.880390 kernel: CPU features: detected: ARM erratum 1418040
Nov 12 17:42:42.880396 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 12 17:42:42.880403 kernel: alternatives: applying boot alternatives
Nov 12 17:42:42.880411 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e
Nov 12 17:42:42.880418 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 17:42:42.880425 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 17:42:42.880432 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 17:42:42.880439 kernel: Fallback order for Node 0: 0 
Nov 12 17:42:42.880445 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Nov 12 17:42:42.880452 kernel: Policy zone: DMA
Nov 12 17:42:42.880459 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 17:42:42.880467 kernel: software IO TLB: area num 4.
Nov 12 17:42:42.880473 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Nov 12 17:42:42.880481 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Nov 12 17:42:42.880487 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 12 17:42:42.880494 kernel: trace event string verifier disabled
Nov 12 17:42:42.880501 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 17:42:42.880508 kernel: rcu:         RCU event tracing is enabled.
Nov 12 17:42:42.880515 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 12 17:42:42.880522 kernel:         Trampoline variant of Tasks RCU enabled.
Nov 12 17:42:42.880529 kernel:         Tracing variant of Tasks RCU enabled.
Nov 12 17:42:42.880536 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 17:42:42.880543 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 12 17:42:42.880551 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 12 17:42:42.880557 kernel: GICv3: 256 SPIs implemented
Nov 12 17:42:42.880564 kernel: GICv3: 0 Extended SPIs implemented
Nov 12 17:42:42.880571 kernel: Root IRQ handler: gic_handle_irq
Nov 12 17:42:42.880577 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 12 17:42:42.880584 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 12 17:42:42.880591 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 12 17:42:42.880598 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Nov 12 17:42:42.880605 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Nov 12 17:42:42.880612 kernel: GICv3: using LPI property table @0x00000000400f0000
Nov 12 17:42:42.880618 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Nov 12 17:42:42.880626 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 17:42:42.880633 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 17:42:42.880640 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 12 17:42:42.880647 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 12 17:42:42.880654 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 12 17:42:42.880661 kernel: arm-pv: using stolen time PV
Nov 12 17:42:42.880667 kernel: Console: colour dummy device 80x25
Nov 12 17:42:42.880674 kernel: ACPI: Core revision 20230628
Nov 12 17:42:42.880682 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 12 17:42:42.880688 kernel: pid_max: default: 32768 minimum: 301
Nov 12 17:42:42.880697 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 17:42:42.880704 kernel: landlock: Up and running.
Nov 12 17:42:42.880720 kernel: SELinux:  Initializing.
Nov 12 17:42:42.880729 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 17:42:42.880736 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 17:42:42.880743 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 17:42:42.880750 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 12 17:42:42.880757 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 17:42:42.880764 kernel: rcu:         Max phase no-delay instances is 400.
Nov 12 17:42:42.880773 kernel: Platform MSI: ITS@0x8080000 domain created
Nov 12 17:42:42.880780 kernel: PCI/MSI: ITS@0x8080000 domain created
Nov 12 17:42:42.880787 kernel: Remapping and enabling EFI services.
Nov 12 17:42:42.880793 kernel: smp: Bringing up secondary CPUs ...
Nov 12 17:42:42.880800 kernel: Detected PIPT I-cache on CPU1
Nov 12 17:42:42.880807 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 12 17:42:42.880814 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Nov 12 17:42:42.880821 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 17:42:42.880828 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 12 17:42:42.880835 kernel: Detected PIPT I-cache on CPU2
Nov 12 17:42:42.880843 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 12 17:42:42.880851 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Nov 12 17:42:42.880862 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 17:42:42.880871 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 12 17:42:42.880878 kernel: Detected PIPT I-cache on CPU3
Nov 12 17:42:42.880885 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 12 17:42:42.880892 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Nov 12 17:42:42.880900 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 12 17:42:42.880907 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 12 17:42:42.880916 kernel: smp: Brought up 1 node, 4 CPUs
Nov 12 17:42:42.880923 kernel: SMP: Total of 4 processors activated.
Nov 12 17:42:42.880930 kernel: CPU features: detected: 32-bit EL0 Support
Nov 12 17:42:42.880938 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 12 17:42:42.880945 kernel: CPU features: detected: Common not Private translations
Nov 12 17:42:42.880953 kernel: CPU features: detected: CRC32 instructions
Nov 12 17:42:42.880960 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 12 17:42:42.880967 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 12 17:42:42.880976 kernel: CPU features: detected: LSE atomic instructions
Nov 12 17:42:42.880983 kernel: CPU features: detected: Privileged Access Never
Nov 12 17:42:42.880991 kernel: CPU features: detected: RAS Extension Support
Nov 12 17:42:42.880998 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 12 17:42:42.881005 kernel: CPU: All CPU(s) started at EL1
Nov 12 17:42:42.881012 kernel: alternatives: applying system-wide alternatives
Nov 12 17:42:42.881019 kernel: devtmpfs: initialized
Nov 12 17:42:42.881027 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 17:42:42.881034 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 12 17:42:42.881043 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 17:42:42.881051 kernel: SMBIOS 3.0.0 present.
Nov 12 17:42:42.881058 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Nov 12 17:42:42.881065 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 17:42:42.881073 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 12 17:42:42.881080 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 12 17:42:42.881088 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 12 17:42:42.881095 kernel: audit: initializing netlink subsys (disabled)
Nov 12 17:42:42.881102 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Nov 12 17:42:42.881111 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 17:42:42.881118 kernel: cpuidle: using governor menu
Nov 12 17:42:42.881125 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 12 17:42:42.881133 kernel: ASID allocator initialised with 32768 entries
Nov 12 17:42:42.881140 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 17:42:42.881147 kernel: Serial: AMBA PL011 UART driver
Nov 12 17:42:42.881155 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 12 17:42:42.881162 kernel: Modules: 0 pages in range for non-PLT usage
Nov 12 17:42:42.881169 kernel: Modules: 509040 pages in range for PLT usage
Nov 12 17:42:42.881177 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 17:42:42.881185 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 17:42:42.881192 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 12 17:42:42.881199 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 12 17:42:42.881207 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 17:42:42.881214 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 17:42:42.881221 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 12 17:42:42.881228 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 12 17:42:42.881239 kernel: ACPI: Added _OSI(Module Device)
Nov 12 17:42:42.881249 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 17:42:42.881256 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 17:42:42.881263 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 17:42:42.881270 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 17:42:42.881278 kernel: ACPI: Interpreter enabled
Nov 12 17:42:42.881285 kernel: ACPI: Using GIC for interrupt routing
Nov 12 17:42:42.881292 kernel: ACPI: MCFG table detected, 1 entries
Nov 12 17:42:42.881299 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 12 17:42:42.881307 kernel: printk: console [ttyAMA0] enabled
Nov 12 17:42:42.881314 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 17:42:42.881445 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 17:42:42.881519 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 12 17:42:42.881600 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 12 17:42:42.881665 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 12 17:42:42.881744 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 12 17:42:42.881755 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Nov 12 17:42:42.881766 kernel: PCI host bridge to bus 0000:00
Nov 12 17:42:42.881840 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 12 17:42:42.881899 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Nov 12 17:42:42.881956 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 12 17:42:42.882012 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 17:42:42.882090 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Nov 12 17:42:42.882163 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 17:42:42.882232 kernel: pci 0000:00:01.0: reg 0x10: [io  0x0000-0x001f]
Nov 12 17:42:42.882310 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Nov 12 17:42:42.882378 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 12 17:42:42.882452 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 12 17:42:42.882517 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Nov 12 17:42:42.882583 kernel: pci 0000:00:01.0: BAR 0: assigned [io  0x1000-0x101f]
Nov 12 17:42:42.882641 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 12 17:42:42.882701 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Nov 12 17:42:42.882785 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 12 17:42:42.882796 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 12 17:42:42.882804 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 12 17:42:42.882811 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 12 17:42:42.882818 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 12 17:42:42.882825 kernel: iommu: Default domain type: Translated
Nov 12 17:42:42.882833 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 12 17:42:42.882843 kernel: efivars: Registered efivars operations
Nov 12 17:42:42.882850 kernel: vgaarb: loaded
Nov 12 17:42:42.882858 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 12 17:42:42.882865 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 17:42:42.882872 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 17:42:42.882879 kernel: pnp: PnP ACPI init
Nov 12 17:42:42.882970 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 12 17:42:42.882981 kernel: pnp: PnP ACPI: found 1 devices
Nov 12 17:42:42.882991 kernel: NET: Registered PF_INET protocol family
Nov 12 17:42:42.882998 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 17:42:42.883006 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 17:42:42.883013 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 17:42:42.883021 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 17:42:42.883028 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 17:42:42.883036 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 17:42:42.883043 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 17:42:42.883050 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 17:42:42.883059 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 17:42:42.883066 kernel: PCI: CLS 0 bytes, default 64
Nov 12 17:42:42.883073 kernel: kvm [1]: HYP mode not available
Nov 12 17:42:42.883080 kernel: Initialise system trusted keyrings
Nov 12 17:42:42.883088 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 17:42:42.883095 kernel: Key type asymmetric registered
Nov 12 17:42:42.883102 kernel: Asymmetric key parser 'x509' registered
Nov 12 17:42:42.883109 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 12 17:42:42.883117 kernel: io scheduler mq-deadline registered
Nov 12 17:42:42.883125 kernel: io scheduler kyber registered
Nov 12 17:42:42.883132 kernel: io scheduler bfq registered
Nov 12 17:42:42.883140 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 12 17:42:42.883147 kernel: ACPI: button: Power Button [PWRB]
Nov 12 17:42:42.883155 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 12 17:42:42.883221 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 12 17:42:42.883231 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 17:42:42.883244 kernel: thunder_xcv, ver 1.0
Nov 12 17:42:42.883252 kernel: thunder_bgx, ver 1.0
Nov 12 17:42:42.883259 kernel: nicpf, ver 1.0
Nov 12 17:42:42.883269 kernel: nicvf, ver 1.0
Nov 12 17:42:42.883353 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 12 17:42:42.883417 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-11-12T17:42:42 UTC (1731433362)
Nov 12 17:42:42.883427 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 12 17:42:42.883434 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Nov 12 17:42:42.883442 kernel: watchdog: Delayed init of the lockup detector failed: -19
Nov 12 17:42:42.883449 kernel: watchdog: Hard watchdog permanently disabled
Nov 12 17:42:42.883459 kernel: NET: Registered PF_INET6 protocol family
Nov 12 17:42:42.883466 kernel: Segment Routing with IPv6
Nov 12 17:42:42.883473 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 17:42:42.883480 kernel: NET: Registered PF_PACKET protocol family
Nov 12 17:42:42.883487 kernel: Key type dns_resolver registered
Nov 12 17:42:42.883494 kernel: registered taskstats version 1
Nov 12 17:42:42.883502 kernel: Loading compiled-in X.509 certificates
Nov 12 17:42:42.883509 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 277bea35d8d47c9841f307ab609d4271c3622dcb'
Nov 12 17:42:42.883516 kernel: Key type .fscrypt registered
Nov 12 17:42:42.883523 kernel: Key type fscrypt-provisioning registered
Nov 12 17:42:42.883532 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 17:42:42.883539 kernel: ima: Allocated hash algorithm: sha1
Nov 12 17:42:42.883547 kernel: ima: No architecture policies found
Nov 12 17:42:42.883554 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 12 17:42:42.883561 kernel: clk: Disabling unused clocks
Nov 12 17:42:42.883569 kernel: Freeing unused kernel memory: 39360K
Nov 12 17:42:42.883576 kernel: Run /init as init process
Nov 12 17:42:42.883583 kernel:   with arguments:
Nov 12 17:42:42.883591 kernel:     /init
Nov 12 17:42:42.883598 kernel:   with environment:
Nov 12 17:42:42.883605 kernel:     HOME=/
Nov 12 17:42:42.883612 kernel:     TERM=linux
Nov 12 17:42:42.883619 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 17:42:42.883628 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 17:42:42.883638 systemd[1]: Detected virtualization kvm.
Nov 12 17:42:42.883646 systemd[1]: Detected architecture arm64.
Nov 12 17:42:42.883655 systemd[1]: Running in initrd.
Nov 12 17:42:42.883662 systemd[1]: No hostname configured, using default hostname.
Nov 12 17:42:42.883670 systemd[1]: Hostname set to <localhost>.
Nov 12 17:42:42.883678 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 17:42:42.883686 systemd[1]: Queued start job for default target initrd.target.
Nov 12 17:42:42.883694 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 17:42:42.883701 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 17:42:42.883710 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 17:42:42.883835 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 17:42:42.883844 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 17:42:42.883852 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 17:42:42.883861 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 17:42:42.883870 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 17:42:42.883878 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 17:42:42.883885 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:42:42.883895 systemd[1]: Reached target paths.target - Path Units.
Nov 12 17:42:42.883903 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 17:42:42.883910 systemd[1]: Reached target swap.target - Swaps.
Nov 12 17:42:42.883918 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 17:42:42.883926 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 17:42:42.883934 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 17:42:42.883942 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 17:42:42.883950 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 17:42:42.883960 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 17:42:42.883968 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 17:42:42.883976 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 17:42:42.883984 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 17:42:42.883992 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 17:42:42.884000 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 17:42:42.884008 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 17:42:42.884015 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 17:42:42.884023 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 17:42:42.884032 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 17:42:42.884040 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:42:42.884048 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 17:42:42.884056 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:42:42.884064 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 17:42:42.884072 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 17:42:42.884082 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 17:42:42.884116 systemd-journald[237]: Collecting audit messages is disabled.
Nov 12 17:42:42.884138 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 17:42:42.884146 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:42:42.884155 systemd-journald[237]: Journal started
Nov 12 17:42:42.884173 systemd-journald[237]: Runtime Journal (/run/log/journal/9610c8aa2a564eb6b1eb9db2f29d735c) is 5.9M, max 47.3M, 41.4M free.
Nov 12 17:42:42.875263 systemd-modules-load[239]: Inserted module 'overlay'
Nov 12 17:42:42.889572 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 17:42:42.889613 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 17:42:42.890725 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 17:42:42.892904 systemd-modules-load[239]: Inserted module 'br_netfilter'
Nov 12 17:42:42.893735 kernel: Bridge firewalling registered
Nov 12 17:42:42.893894 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 17:42:42.898878 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:42:42.900855 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 17:42:42.902136 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 17:42:42.909658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 17:42:42.911219 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 17:42:42.918931 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 17:42:42.919872 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:42:42.922564 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 17:42:42.936107 dracut-cmdline[278]: dracut-dracut-053
Nov 12 17:42:42.938611 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e
Nov 12 17:42:42.951207 systemd-resolved[273]: Positive Trust Anchors:
Nov 12 17:42:42.951224 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 17:42:42.951264 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 17:42:42.956062 systemd-resolved[273]: Defaulting to hostname 'linux'.
Nov 12 17:42:42.957013 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 17:42:42.958089 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:42:43.005745 kernel: SCSI subsystem initialized
Nov 12 17:42:43.010735 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 17:42:43.017730 kernel: iscsi: registered transport (tcp)
Nov 12 17:42:43.032762 kernel: iscsi: registered transport (qla4xxx)
Nov 12 17:42:43.032805 kernel: QLogic iSCSI HBA Driver
Nov 12 17:42:43.073155 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 17:42:43.084882 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 17:42:43.101382 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 17:42:43.102362 kernel: device-mapper: uevent: version 1.0.3
Nov 12 17:42:43.102381 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 17:42:43.146734 kernel: raid6: neonx8   gen() 15783 MB/s
Nov 12 17:42:43.163726 kernel: raid6: neonx4   gen() 15651 MB/s
Nov 12 17:42:43.180724 kernel: raid6: neonx2   gen() 13243 MB/s
Nov 12 17:42:43.197724 kernel: raid6: neonx1   gen() 10485 MB/s
Nov 12 17:42:43.214723 kernel: raid6: int64x8  gen()  6966 MB/s
Nov 12 17:42:43.231724 kernel: raid6: int64x4  gen()  7356 MB/s
Nov 12 17:42:43.248726 kernel: raid6: int64x2  gen()  6131 MB/s
Nov 12 17:42:43.265724 kernel: raid6: int64x1  gen()  5056 MB/s
Nov 12 17:42:43.265750 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s
Nov 12 17:42:43.282749 kernel: raid6: .... xor() 11920 MB/s, rmw enabled
Nov 12 17:42:43.282783 kernel: raid6: using neon recovery algorithm
Nov 12 17:42:43.289788 kernel: xor: measuring software checksum speed
Nov 12 17:42:43.289807 kernel:    8regs           : 19702 MB/sec
Nov 12 17:42:43.289816 kernel:    32regs          : 19664 MB/sec
Nov 12 17:42:43.290728 kernel:    arm64_neon      : 26874 MB/sec
Nov 12 17:42:43.290745 kernel: xor: using function: arm64_neon (26874 MB/sec)
Nov 12 17:42:43.339743 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 17:42:43.350590 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 17:42:43.367861 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 17:42:43.382386 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Nov 12 17:42:43.385477 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 17:42:43.390869 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 17:42:43.402513 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Nov 12 17:42:43.429777 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 17:42:43.442859 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 17:42:43.481750 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 17:42:43.493197 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 17:42:43.503742 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 17:42:43.505251 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 17:42:43.507744 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:42:43.508537 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 17:42:43.519958 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 17:42:43.523445 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 12 17:42:43.533484 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 12 17:42:43.533587 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 17:42:43.533598 kernel: GPT:9289727 != 19775487
Nov 12 17:42:43.533608 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 17:42:43.533617 kernel: GPT:9289727 != 19775487
Nov 12 17:42:43.533629 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 17:42:43.533639 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 17:42:43.527868 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 17:42:43.537091 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 17:42:43.537204 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:42:43.540300 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:42:43.546014 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 17:42:43.546162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:42:43.548918 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:42:43.557741 kernel: BTRFS: device fsid 93a9d474-e751-47b7-a65f-e39ca9abd47a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (520)
Nov 12 17:42:43.560734 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (507)
Nov 12 17:42:43.563012 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:42:43.575784 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 12 17:42:43.576873 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:42:43.582276 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 12 17:42:43.589368 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 12 17:42:43.590324 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 12 17:42:43.595637 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 17:42:43.609882 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 17:42:43.611430 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:42:43.615692 disk-uuid[551]: Primary Header is updated.
Nov 12 17:42:43.615692 disk-uuid[551]: Secondary Entries is updated.
Nov 12 17:42:43.615692 disk-uuid[551]: Secondary Header is updated.
Nov 12 17:42:43.620605 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 17:42:43.636812 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:42:44.628772 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 12 17:42:44.629197 disk-uuid[552]: The operation has completed successfully.
Nov 12 17:42:44.648459 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 17:42:44.648583 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 17:42:44.669860 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 17:42:44.674140 sh[575]: Success
Nov 12 17:42:44.684748 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 12 17:42:44.714030 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 17:42:44.735063 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 17:42:44.736816 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 17:42:44.747242 kernel: BTRFS info (device dm-0): first mount of filesystem 93a9d474-e751-47b7-a65f-e39ca9abd47a
Nov 12 17:42:44.747275 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:42:44.747286 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 17:42:44.748010 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 17:42:44.749017 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 17:42:44.752129 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 17:42:44.753172 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 17:42:44.763883 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 17:42:44.765219 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 17:42:44.772988 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:42:44.773028 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:42:44.773040 kernel: BTRFS info (device vda6): using free space tree
Nov 12 17:42:44.774746 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 17:42:44.784061 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 17:42:44.785351 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:42:44.790326 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 17:42:44.796891 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 17:42:44.856295 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 17:42:44.866869 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 17:42:44.894773 ignition[668]: Ignition 2.19.0
Nov 12 17:42:44.894782 ignition[668]: Stage: fetch-offline
Nov 12 17:42:44.894822 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:42:44.894831 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:42:44.896688 systemd-networkd[767]: lo: Link UP
Nov 12 17:42:44.895058 ignition[668]: parsed url from cmdline: ""
Nov 12 17:42:44.896692 systemd-networkd[767]: lo: Gained carrier
Nov 12 17:42:44.895062 ignition[668]: no config URL provided
Nov 12 17:42:44.897476 systemd-networkd[767]: Enumeration completed
Nov 12 17:42:44.895066 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 17:42:44.898152 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 17:42:44.895073 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Nov 12 17:42:44.899490 systemd[1]: Reached target network.target - Network.
Nov 12 17:42:44.895095 ignition[668]: op(1): [started]  loading QEMU firmware config module
Nov 12 17:42:44.900022 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:42:44.895100 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 12 17:42:44.900026 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 17:42:44.910922 ignition[668]: op(1): [finished] loading QEMU firmware config module
Nov 12 17:42:44.900833 systemd-networkd[767]: eth0: Link UP
Nov 12 17:42:44.900837 systemd-networkd[767]: eth0: Gained carrier
Nov 12 17:42:44.900843 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:42:44.920760 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 12 17:42:44.956462 ignition[668]: parsing config with SHA512: 944ddce0658dbd806cd17c4176e9fa0eba819ae7d78ebc2d47b98cc10acf3d13f4b2ad89fff7a06c791dad21776e6e9c28ca2a4b6b8357681d8974d73fd51d52
Nov 12 17:42:44.960513 unknown[668]: fetched base config from "system"
Nov 12 17:42:44.960523 unknown[668]: fetched user config from "qemu"
Nov 12 17:42:44.960911 ignition[668]: fetch-offline: fetch-offline passed
Nov 12 17:42:44.960967 ignition[668]: Ignition finished successfully
Nov 12 17:42:44.963051 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 17:42:44.964740 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 12 17:42:44.970948 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 17:42:44.981209 ignition[773]: Ignition 2.19.0
Nov 12 17:42:44.981219 ignition[773]: Stage: kargs
Nov 12 17:42:44.981380 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:42:44.981389 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:42:44.982249 ignition[773]: kargs: kargs passed
Nov 12 17:42:44.982289 ignition[773]: Ignition finished successfully
Nov 12 17:42:44.986852 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 17:42:44.988383 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 17:42:45.000769 ignition[780]: Ignition 2.19.0
Nov 12 17:42:45.000780 ignition[780]: Stage: disks
Nov 12 17:42:45.000936 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:42:45.000945 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:42:45.003090 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 17:42:45.001775 ignition[780]: disks: disks passed
Nov 12 17:42:45.004937 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 17:42:45.001819 ignition[780]: Ignition finished successfully
Nov 12 17:42:45.006244 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 17:42:45.007915 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 17:42:45.009192 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 17:42:45.010760 systemd[1]: Reached target basic.target - Basic System.
Nov 12 17:42:45.023865 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 17:42:45.033821 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 17:42:45.037508 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 17:42:45.040675 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 17:42:45.084590 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 17:42:45.085738 kernel: EXT4-fs (vda9): mounted filesystem b3af0fd7-3c7c-4cdc-9b88-dae3d10ea922 r/w with ordered data mode. Quota mode: none.
Nov 12 17:42:45.085627 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 17:42:45.096829 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 17:42:45.098271 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 17:42:45.099440 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 17:42:45.099560 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 17:42:45.099591 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 17:42:45.106771 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799)
Nov 12 17:42:45.106793 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:42:45.106803 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:42:45.106813 kernel: BTRFS info (device vda6): using free space tree
Nov 12 17:42:45.105472 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 17:42:45.108622 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 17:42:45.110202 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 17:42:45.112463 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 17:42:45.153038 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 17:42:45.157824 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Nov 12 17:42:45.161192 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 17:42:45.166215 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 17:42:45.240639 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 17:42:45.250831 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 17:42:45.261853 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 17:42:45.264732 kernel: BTRFS info (device vda6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:42:45.284742 ignition[911]: INFO     : Ignition 2.19.0
Nov 12 17:42:45.284742 ignition[911]: INFO     : Stage: mount
Nov 12 17:42:45.284742 ignition[911]: INFO     : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:42:45.284742 ignition[911]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:42:45.288832 ignition[911]: INFO     : mount: mount passed
Nov 12 17:42:45.288832 ignition[911]: INFO     : Ignition finished successfully
Nov 12 17:42:45.287768 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 17:42:45.292000 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 17:42:45.303832 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 17:42:45.746033 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 17:42:45.761900 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 17:42:45.769450 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926)
Nov 12 17:42:45.769488 kernel: BTRFS info (device vda6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:42:45.769506 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:42:45.769517 kernel: BTRFS info (device vda6): using free space tree
Nov 12 17:42:45.771737 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 12 17:42:45.772513 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 17:42:45.792097 ignition[943]: INFO     : Ignition 2.19.0
Nov 12 17:42:45.792097 ignition[943]: INFO     : Stage: files
Nov 12 17:42:45.793560 ignition[943]: INFO     : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:42:45.793560 ignition[943]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:42:45.793560 ignition[943]: DEBUG    : files: compiled without relabeling support, skipping
Nov 12 17:42:45.796814 ignition[943]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Nov 12 17:42:45.796814 ignition[943]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 17:42:45.796814 ignition[943]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 17:42:45.800448 ignition[943]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Nov 12 17:42:45.800448 ignition[943]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 17:42:45.799081 unknown[943]: wrote ssh authorized keys file for user: core
Nov 12 17:42:45.803271 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 17:42:45.803271 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Nov 12 17:42:45.850344 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 17:42:46.010974 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 17:42:46.010974 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Nov 12 17:42:46.013787 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 17:42:46.013787 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Nov 12 17:42:46.013787 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 17:42:46.013787 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 17:42:46.013787 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 17:42:46.013787 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 17:42:46.013787 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 17:42:46.013787 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 17:42:46.013787 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 17:42:46.013787 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Nov 12 17:42:46.013787 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Nov 12 17:42:46.013787 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Nov 12 17:42:46.013787 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Nov 12 17:42:46.313953 systemd-networkd[767]: eth0: Gained IPv6LL
Nov 12 17:42:46.342861 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 12 17:42:46.609406 ignition[943]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Nov 12 17:42:46.609406 ignition[943]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Nov 12 17:42:46.612976 ignition[943]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 17:42:46.612976 ignition[943]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 17:42:46.612976 ignition[943]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 12 17:42:46.612976 ignition[943]: INFO     : files: op(d): [started]  processing unit "coreos-metadata.service"
Nov 12 17:42:46.612976 ignition[943]: INFO     : files: op(d): op(e): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 17:42:46.612976 ignition[943]: INFO     : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 12 17:42:46.612976 ignition[943]: INFO     : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 12 17:42:46.612976 ignition[943]: INFO     : files: op(f): [started]  setting preset to disabled for "coreos-metadata.service"
Nov 12 17:42:46.631767 ignition[943]: INFO     : files: op(f): op(10): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 17:42:46.635924 ignition[943]: INFO     : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 12 17:42:46.637415 ignition[943]: INFO     : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 12 17:42:46.637415 ignition[943]: INFO     : files: op(11): [started]  setting preset to enabled for "prepare-helm.service"
Nov 12 17:42:46.637415 ignition[943]: INFO     : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 17:42:46.637415 ignition[943]: INFO     : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Nov 12 17:42:46.637415 ignition[943]: INFO     : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 17:42:46.637415 ignition[943]: INFO     : files: files passed
Nov 12 17:42:46.637415 ignition[943]: INFO     : Ignition finished successfully
Nov 12 17:42:46.638015 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 17:42:46.646876 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 17:42:46.648885 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 17:42:46.651102 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 17:42:46.651186 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 17:42:46.656018 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 12 17:42:46.658274 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:42:46.658274 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:42:46.661197 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:42:46.660046 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 17:42:46.662518 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 17:42:46.670957 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 17:42:46.691353 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 17:42:46.691466 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 17:42:46.694981 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 17:42:46.696433 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 17:42:46.698091 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 17:42:46.698858 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 17:42:46.714185 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 17:42:46.735982 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 17:42:46.743767 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:42:46.744673 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:42:46.746423 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 17:42:46.747951 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 17:42:46.748065 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 17:42:46.750301 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 17:42:46.751995 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 17:42:46.753403 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 17:42:46.754805 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 17:42:46.756431 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 17:42:46.758136 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 17:42:46.759665 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 17:42:46.761318 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 17:42:46.762983 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 17:42:46.764463 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 17:42:46.765771 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 17:42:46.765888 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 17:42:46.768011 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:42:46.768903 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 17:42:46.770588 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 17:42:46.773774 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 17:42:46.774770 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 17:42:46.774894 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 17:42:46.777433 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 17:42:46.777544 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 17:42:46.779389 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 17:42:46.780748 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 17:42:46.784778 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 17:42:46.785819 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 17:42:46.787700 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 17:42:46.789079 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 17:42:46.789175 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 17:42:46.790532 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 17:42:46.790611 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 17:42:46.791890 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 17:42:46.791996 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 17:42:46.793480 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 17:42:46.793581 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 17:42:46.804882 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 17:42:46.805583 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 17:42:46.805733 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:42:46.810965 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 17:42:46.811669 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 17:42:46.811950 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 17:42:46.813404 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 17:42:46.819377 ignition[998]: INFO     : Ignition 2.19.0
Nov 12 17:42:46.819377 ignition[998]: INFO     : Stage: umount
Nov 12 17:42:46.819377 ignition[998]: INFO     : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:42:46.819377 ignition[998]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 12 17:42:46.819377 ignition[998]: INFO     : umount: umount passed
Nov 12 17:42:46.819377 ignition[998]: INFO     : Ignition finished successfully
Nov 12 17:42:46.813503 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 17:42:46.819975 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 17:42:46.820061 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 17:42:46.821438 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 17:42:46.821526 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 17:42:46.825436 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 17:42:46.825945 systemd[1]: Stopped target network.target - Network.
Nov 12 17:42:46.827170 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 17:42:46.827235 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 17:42:46.828815 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 17:42:46.828861 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 17:42:46.830415 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 17:42:46.830458 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 17:42:46.832265 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 17:42:46.832312 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 17:42:46.834116 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 17:42:46.835415 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 17:42:46.842750 systemd-networkd[767]: eth0: DHCPv6 lease lost
Nov 12 17:42:46.842954 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 17:42:46.843095 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 17:42:46.845084 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 17:42:46.845202 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 17:42:46.847412 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 17:42:46.847477 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 17:42:46.856835 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 17:42:46.857482 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 17:42:46.857535 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 17:42:46.859246 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 17:42:46.859287 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 17:42:46.861081 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 17:42:46.861121 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 17:42:46.862903 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 17:42:46.862957 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 17:42:46.864809 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 17:42:46.867947 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 17:42:46.868494 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 17:42:46.871218 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 17:42:46.871283 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 17:42:46.876420 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 17:42:46.876511 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 17:42:46.881539 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 17:42:46.881674 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 17:42:46.883028 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 17:42:46.883075 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 17:42:46.884349 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 17:42:46.884376 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 17:42:46.885875 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 17:42:46.885916 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 17:42:46.888417 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 17:42:46.888457 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 17:42:46.891091 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 17:42:46.891133 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:42:46.906900 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 17:42:46.907752 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 17:42:46.907812 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 17:42:46.909767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 17:42:46.909816 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:42:46.912078 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 17:42:46.913761 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 17:42:46.915296 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 17:42:46.917553 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 17:42:46.927876 systemd[1]: Switching root.
Nov 12 17:42:46.955562 systemd-journald[237]: Journal stopped
Nov 12 17:42:47.616008 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Nov 12 17:42:47.616088 kernel: SELinux:  policy capability network_peer_controls=1
Nov 12 17:42:47.616101 kernel: SELinux:  policy capability open_perms=1
Nov 12 17:42:47.616111 kernel: SELinux:  policy capability extended_socket_class=1
Nov 12 17:42:47.616121 kernel: SELinux:  policy capability always_check_network=0
Nov 12 17:42:47.616134 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 12 17:42:47.616145 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 12 17:42:47.616155 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Nov 12 17:42:47.616169 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Nov 12 17:42:47.616179 kernel: audit: type=1403 audit(1731433367.092:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 17:42:47.616190 systemd[1]: Successfully loaded SELinux policy in 31.651ms.
Nov 12 17:42:47.616207 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.305ms.
Nov 12 17:42:47.616219 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 17:42:47.616240 systemd[1]: Detected virtualization kvm.
Nov 12 17:42:47.616258 systemd[1]: Detected architecture arm64.
Nov 12 17:42:47.616268 systemd[1]: Detected first boot.
Nov 12 17:42:47.616279 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 17:42:47.616290 zram_generator::config[1043]: No configuration found.
Nov 12 17:42:47.616301 systemd[1]: Populated /etc with preset unit settings.
Nov 12 17:42:47.616311 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 17:42:47.616322 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 17:42:47.616333 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 17:42:47.616346 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 17:42:47.616357 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 17:42:47.616367 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 17:42:47.616378 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 17:42:47.616388 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 17:42:47.616399 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 17:42:47.616410 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 17:42:47.616425 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 17:42:47.616437 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 17:42:47.616448 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 17:42:47.616459 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 17:42:47.616469 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 17:42:47.616480 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 17:42:47.616492 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 17:42:47.616503 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 12 17:42:47.616514 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 17:42:47.616528 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 17:42:47.616541 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 17:42:47.616552 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 17:42:47.616563 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 17:42:47.616574 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:42:47.616584 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 17:42:47.616595 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 17:42:47.616605 systemd[1]: Reached target swap.target - Swaps.
Nov 12 17:42:47.616616 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 17:42:47.616629 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 17:42:47.616639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 17:42:47.616650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 17:42:47.616661 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 17:42:47.616671 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 17:42:47.616682 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 17:42:47.616692 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 17:42:47.616703 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 17:42:47.616723 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 17:42:47.616745 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 17:42:47.616757 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 17:42:47.616768 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 17:42:47.616779 systemd[1]: Reached target machines.target - Containers.
Nov 12 17:42:47.616789 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 17:42:47.616801 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:42:47.616812 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 17:42:47.616822 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 17:42:47.616833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 17:42:47.616846 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 17:42:47.616856 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 17:42:47.616867 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 17:42:47.616878 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 17:42:47.616889 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 17:42:47.616899 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 17:42:47.616910 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 17:42:47.616921 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 17:42:47.616934 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 17:42:47.616944 kernel: fuse: init (API version 7.39)
Nov 12 17:42:47.616954 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 17:42:47.616965 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 17:42:47.616977 kernel: ACPI: bus type drm_connector registered
Nov 12 17:42:47.616988 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 17:42:47.616999 kernel: loop: module loaded
Nov 12 17:42:47.617008 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 17:42:47.617019 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 17:42:47.617054 systemd-journald[1114]: Collecting audit messages is disabled.
Nov 12 17:42:47.617077 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 17:42:47.617088 systemd[1]: Stopped verity-setup.service.
Nov 12 17:42:47.617099 systemd-journald[1114]: Journal started
Nov 12 17:42:47.617120 systemd-journald[1114]: Runtime Journal (/run/log/journal/9610c8aa2a564eb6b1eb9db2f29d735c) is 5.9M, max 47.3M, 41.4M free.
Nov 12 17:42:47.439627 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 17:42:47.458783 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 12 17:42:47.459120 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 17:42:47.619121 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 17:42:47.619745 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 17:42:47.620615 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 17:42:47.621662 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 17:42:47.622526 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 17:42:47.623462 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 17:42:47.624443 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 17:42:47.626744 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 17:42:47.627846 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:42:47.629006 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 17:42:47.629145 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 17:42:47.630374 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 17:42:47.630511 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 17:42:47.631647 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 17:42:47.631796 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 17:42:47.632868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 17:42:47.633004 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 17:42:47.634218 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 17:42:47.634358 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 17:42:47.635426 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 17:42:47.635555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 17:42:47.636646 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 17:42:47.637808 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 17:42:47.639196 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 17:42:47.651383 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 17:42:47.660824 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 17:42:47.662769 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 17:42:47.663586 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 17:42:47.663619 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 17:42:47.665460 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 17:42:47.667518 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 17:42:47.669443 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 17:42:47.670407 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:42:47.671951 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 17:42:47.673875 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 17:42:47.677867 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 17:42:47.678863 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 17:42:47.680507 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 17:42:47.688524 systemd-journald[1114]: Time spent on flushing to /var/log/journal/9610c8aa2a564eb6b1eb9db2f29d735c is 29.656ms for 852 entries.
Nov 12 17:42:47.688524 systemd-journald[1114]: System Journal (/var/log/journal/9610c8aa2a564eb6b1eb9db2f29d735c) is 8.0M, max 195.6M, 187.6M free.
Nov 12 17:42:47.737766 systemd-journald[1114]: Received client request to flush runtime journal.
Nov 12 17:42:47.737917 kernel: loop0: detected capacity change from 0 to 114328
Nov 12 17:42:47.737950 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 17:42:47.688990 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 17:42:47.693916 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 17:42:47.697286 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 17:42:47.700700 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 17:42:47.701963 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 17:42:47.703324 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 17:42:47.704566 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 17:42:47.708097 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 17:42:47.713070 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 17:42:47.723100 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 17:42:47.725432 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 17:42:47.740364 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 17:42:47.741839 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
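
[Note] The flush above moves the runtime journal (/run/log/journal) into persistent storage under /var/log/journal. The equivalent manual operations, using journalctl flags present in this systemd release:

    journalctl --flush        # ask journald to flush the runtime journal to /var/log/journal
    journalctl --disk-usage   # report on-disk journal size (cf. the 8.0M / max 195.6M figures above)
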
Nov 12 17:42:47.744743 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 17:42:47.746432 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 17:42:47.755667 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 17:42:47.762053 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 17:42:47.763740 kernel: loop1: detected capacity change from 0 to 114432
Nov 12 17:42:47.764888 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 12 17:42:47.783807 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Nov 12 17:42:47.783825 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Nov 12 17:42:47.788030 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 17:42:47.798024 kernel: loop2: detected capacity change from 0 to 189592
Nov 12 17:42:47.827734 kernel: loop3: detected capacity change from 0 to 114328
Nov 12 17:42:47.835101 kernel: loop4: detected capacity change from 0 to 114432
Nov 12 17:42:47.841745 kernel: loop5: detected capacity change from 0 to 189592
Nov 12 17:42:47.845424 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 12 17:42:47.845876 (sd-merge)[1180]: Merged extensions into '/usr'.
Nov 12 17:42:47.849257 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 17:42:47.849433 systemd[1]: Reloading...
Nov 12 17:42:47.901191 zram_generator::config[1202]: No configuration found.
Nov 12 17:42:47.966900 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 17:42:48.000237 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 17:42:48.035531 systemd[1]: Reloading finished in 185 ms.
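
[Note] The reload warning above points at docker.socket still listening on the legacy /var/run path. One way to act on the message's own hint, sketched as a drop-in (the drop-in filename is illustrative):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat > /etc/systemd/system/docker.socket.d/10-run-path.conf <<'EOF'
    [Socket]
    # An empty assignment clears the inherited listen list, then re-declare under /run
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload
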
Nov 12 17:42:48.065318 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 17:42:48.066581 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
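
[Note] The merge above is what folded the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' sysext images (seen at the (sd-merge) lines earlier) into /usr. The merged state can be inspected or redone by hand:

    systemd-sysext status     # list merged extension images and the hierarchies they affect
    systemd-sysext refresh    # unmerge and re-merge after changing images under /etc/extensions or /var/lib/extensions
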
Nov 12 17:42:48.078935 systemd[1]: Starting ensure-sysext.service...
Nov 12 17:42:48.080682 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 17:42:48.092043 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)...
Nov 12 17:42:48.092059 systemd[1]: Reloading...
Nov 12 17:42:48.108542 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 17:42:48.109176 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 17:42:48.109864 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 17:42:48.110071 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Nov 12 17:42:48.110119 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Nov 12 17:42:48.113053 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 17:42:48.113065 systemd-tmpfiles[1241]: Skipping /boot
Nov 12 17:42:48.123049 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 17:42:48.123064 systemd-tmpfiles[1241]: Skipping /boot
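
[Note] The "Duplicate line for path" warnings above mean the same path is declared by more than one tmpfiles.d fragment and the duplicate is ignored; /boot is skipped because it is an autofs automount point at this stage. The merged configuration, duplicates included, can be reviewed with:

    systemd-tmpfiles --cat-config   # print the effective tmpfiles.d configuration in evaluation order
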
Nov 12 17:42:48.134760 zram_generator::config[1265]: No configuration found.
Nov 12 17:42:48.217961 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 17:42:48.252992 systemd[1]: Reloading finished in 160 ms.
Nov 12 17:42:48.268513 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 17:42:48.281171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 17:42:48.286912 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 17:42:48.288924 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 17:42:48.290730 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 17:42:48.295029 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 17:42:48.306168 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 17:42:48.307967 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 17:42:48.315270 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 17:42:48.321460 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:42:48.322991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 17:42:48.327787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 17:42:48.330971 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 17:42:48.331886 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:42:48.333765 systemd-udevd[1315]: Using default interface naming scheme 'v255'.
Nov 12 17:42:48.334782 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 17:42:48.340600 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 17:42:48.344300 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 17:42:48.345764 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 17:42:48.345893 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 17:42:48.347323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 17:42:48.347440 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 17:42:48.348967 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 17:42:48.350372 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 17:42:48.350490 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 17:42:48.351880 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 17:42:48.357763 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 17:42:48.360317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:42:48.371890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 17:42:48.373998 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 17:42:48.375806 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 17:42:48.376863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:42:48.378913 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 17:42:48.379794 augenrules[1349]: No rules
Nov 12 17:42:48.379946 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 17:42:48.381785 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 17:42:48.387865 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 17:42:48.388003 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 17:42:48.398029 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1353)
Nov 12 17:42:48.397973 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 17:42:48.404266 systemd[1]: Finished ensure-sysext.service.
Nov 12 17:42:48.406008 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1353)
Nov 12 17:42:48.405874 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 17:42:48.406018 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 17:42:48.407278 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 17:42:48.407408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 17:42:48.413957 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Nov 12 17:42:48.415024 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:42:48.416799 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1340)
Nov 12 17:42:48.426997 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 17:42:48.427832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:42:48.427873 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 17:42:48.427915 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 17:42:48.432679 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 12 17:42:48.433672 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 17:42:48.434155 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 17:42:48.436400 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 17:42:48.447297 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 12 17:42:48.452024 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 17:42:48.502762 systemd-networkd[1362]: lo: Link UP
Nov 12 17:42:48.502769 systemd-networkd[1362]: lo: Gained carrier
Nov 12 17:42:48.503462 systemd-networkd[1362]: Enumeration completed
Nov 12 17:42:48.503573 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 17:42:48.506820 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:42:48.506831 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 17:42:48.508983 systemd-networkd[1362]: eth0: Link UP
Nov 12 17:42:48.508994 systemd-networkd[1362]: eth0: Gained carrier
Nov 12 17:42:48.509008 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:42:48.510682 systemd-resolved[1308]: Positive Trust Anchors:
Nov 12 17:42:48.510698 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 17:42:48.510768 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 17:42:48.516935 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 17:42:48.518544 systemd-resolved[1308]: Defaulting to hostname 'linux'.
Nov 12 17:42:48.520238 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 17:42:48.521540 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 17:42:48.522590 systemd[1]: Reached target network.target - Network.
Nov 12 17:42:48.523336 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:42:48.527780 systemd-networkd[1362]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1
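
[Note] The DHCPv4 lease above was obtained through the catch-all zz-default.network shipped with the OS, hence the earlier warning about matching on a potentially unpredictable interface name. On a live system the same information is available via:

    networkctl status eth0   # address, gateway, DNS, and the matching .network file for the interface
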
Nov 12 17:42:48.531079 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 12 17:42:48.532293 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 17:42:48.533643 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 12 17:42:48.534147 systemd-timesyncd[1380]: Initial clock synchronization to Tue 2024-11-12 17:42:48.928844 UTC.
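
[Note] systemd-timesyncd used the NTP server handed out with the DHCP lease (10.0.0.1). Its state can be queried with:

    timedatectl timesync-status   # current server, stratum, and poll interval
    timedatectl show-timesync     # machine-readable timesync properties
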
Nov 12 17:42:48.556996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:42:48.575162 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 17:42:48.592145 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 17:42:48.600271 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:42:48.603251 lvm[1396]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 17:42:48.632109 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 17:42:48.633250 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:42:48.634891 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 17:42:48.635710 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 17:42:48.636584 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 17:42:48.637650 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 17:42:48.638570 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 17:42:48.639517 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 17:42:48.640416 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 17:42:48.640447 systemd[1]: Reached target paths.target - Path Units.
Nov 12 17:42:48.641082 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 17:42:48.642509 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 17:42:48.644427 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 17:42:48.655659 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 17:42:48.657558 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 17:42:48.658831 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 17:42:48.659699 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 17:42:48.660393 systemd[1]: Reached target basic.target - Basic System.
Nov 12 17:42:48.661096 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 17:42:48.661125 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 17:42:48.661968 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 17:42:48.663614 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 17:42:48.666904 lvm[1403]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 17:42:48.667766 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 17:42:48.674352 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 17:42:48.674849 jq[1406]: false
Nov 12 17:42:48.675095 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 17:42:48.676064 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 17:42:48.680831 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 17:42:48.684929 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 17:42:48.686611 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 17:42:48.689908 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 17:42:48.693927 extend-filesystems[1407]: Found loop3
Nov 12 17:42:48.693927 extend-filesystems[1407]: Found loop4
Nov 12 17:42:48.693927 extend-filesystems[1407]: Found loop5
Nov 12 17:42:48.693927 extend-filesystems[1407]: Found vda
Nov 12 17:42:48.700428 extend-filesystems[1407]: Found vda1
Nov 12 17:42:48.700428 extend-filesystems[1407]: Found vda2
Nov 12 17:42:48.700428 extend-filesystems[1407]: Found vda3
Nov 12 17:42:48.700428 extend-filesystems[1407]: Found usr
Nov 12 17:42:48.700428 extend-filesystems[1407]: Found vda4
Nov 12 17:42:48.700428 extend-filesystems[1407]: Found vda6
Nov 12 17:42:48.700428 extend-filesystems[1407]: Found vda7
Nov 12 17:42:48.700428 extend-filesystems[1407]: Found vda9
Nov 12 17:42:48.700428 extend-filesystems[1407]: Checking size of /dev/vda9
Nov 12 17:42:48.695960 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 17:42:48.702144 dbus-daemon[1405]: [system] SELinux support is enabled
Nov 12 17:42:48.715691 extend-filesystems[1407]: Resized partition /dev/vda9
Nov 12 17:42:48.696329 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 17:42:48.697429 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 17:42:48.700834 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 17:42:48.718195 jq[1423]: true
Nov 12 17:42:48.702344 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 17:42:48.706527 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 17:42:48.710094 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 17:42:48.710248 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 17:42:48.710482 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 17:42:48.710606 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 17:42:48.714020 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 17:42:48.715657 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 17:42:48.723026 extend-filesystems[1429]: resize2fs 1.47.1 (20-May-2024)
Nov 12 17:42:48.725398 (ntainerd)[1432]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 17:42:48.731062 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 17:42:48.731115 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 17:42:48.734176 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 12 17:42:48.732903 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 17:42:48.732936 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 17:42:48.735135 tar[1430]: linux-arm64/helm
Nov 12 17:42:48.735617 jq[1431]: true
Nov 12 17:42:48.749158 update_engine[1421]: I20241112 17:42:48.748828  1421 main.cc:92] Flatcar Update Engine starting
Nov 12 17:42:48.760102 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 17:42:48.761735 update_engine[1421]: I20241112 17:42:48.761628  1421 update_check_scheduler.cc:74] Next update check in 5m34s
Nov 12 17:42:48.764231 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 17:42:48.771771 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1351)
Nov 12 17:42:48.802581 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 12 17:42:48.803296 extend-filesystems[1429]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 12 17:42:48.803296 extend-filesystems[1429]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 12 17:42:48.803296 extend-filesystems[1429]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 12 17:42:48.804853 systemd-logind[1415]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 12 17:42:48.806305 systemd-logind[1415]: New seat seat0.
Nov 12 17:42:48.806985 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 17:42:48.809533 extend-filesystems[1407]: Resized filesystem in /dev/vda9
Nov 12 17:42:48.811255 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 17:42:48.811431 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
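
[Note] extend-filesystems grew the ext4 root online, as the EXT4-fs and resize2fs lines above show (553472 → 1864699 blocks). The equivalent manual steps, with device names mirroring this log and growpart assumed available from cloud-utils:

    growpart /dev/vda 9   # grow partition 9 to fill the disk, if the partition itself needs extending
    resize2fs /dev/vda9   # online-resize the mounted ext4 filesystem to fill the partition
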
Nov 12 17:42:48.825211 bash[1458]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 17:42:48.828025 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 17:42:48.830028 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 12 17:42:48.843407 locksmithd[1451]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
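
[Note] locksmithd coordinates reboots for update-engine; the "reboot" strategy printed above is conventionally set through Flatcar's update configuration (path and key per Flatcar documentation; values here are illustrative):

    cat > /etc/flatcar/update.conf <<'EOF'
    REBOOT_STRATEGY=reboot
    EOF
    systemctl restart locksmithd
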
Nov 12 17:42:48.897268 sshd_keygen[1424]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 12 17:42:48.917526 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 12 17:42:48.930937 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 12 17:42:48.933954 systemd[1]: issuegen.service: Deactivated successfully.
Nov 12 17:42:48.934124 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 12 17:42:48.936982 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 12 17:42:48.952406 containerd[1432]: time="2024-11-12T17:42:48.952325480Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 17:42:48.957176 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 12 17:42:48.967105 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 12 17:42:48.969571 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Nov 12 17:42:48.971081 systemd[1]: Reached target getty.target - Login Prompts.
Nov 12 17:42:48.979894 containerd[1432]: time="2024-11-12T17:42:48.979847120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:48.981082 containerd[1432]: time="2024-11-12T17:42:48.981048760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:42:48.981082 containerd[1432]: time="2024-11-12T17:42:48.981078240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 17:42:48.981138 containerd[1432]: time="2024-11-12T17:42:48.981093200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 17:42:48.981288 containerd[1432]: time="2024-11-12T17:42:48.981262000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 17:42:48.981313 containerd[1432]: time="2024-11-12T17:42:48.981286040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:48.981355 containerd[1432]: time="2024-11-12T17:42:48.981338520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:42:48.981378 containerd[1432]: time="2024-11-12T17:42:48.981354080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:48.981526 containerd[1432]: time="2024-11-12T17:42:48.981506160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:42:48.981551 containerd[1432]: time="2024-11-12T17:42:48.981526000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:48.981551 containerd[1432]: time="2024-11-12T17:42:48.981538760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:42:48.981551 containerd[1432]: time="2024-11-12T17:42:48.981547880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:48.981628 containerd[1432]: time="2024-11-12T17:42:48.981613240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:48.981837 containerd[1432]: time="2024-11-12T17:42:48.981819240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:42:48.981937 containerd[1432]: time="2024-11-12T17:42:48.981919440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:42:48.981962 containerd[1432]: time="2024-11-12T17:42:48.981936360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 17:42:48.982025 containerd[1432]: time="2024-11-12T17:42:48.982010680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 17:42:48.982075 containerd[1432]: time="2024-11-12T17:42:48.982063080Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 17:42:48.985212 containerd[1432]: time="2024-11-12T17:42:48.985184680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 17:42:48.985268 containerd[1432]: time="2024-11-12T17:42:48.985240680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 17:42:48.985268 containerd[1432]: time="2024-11-12T17:42:48.985258000Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 17:42:48.985310 containerd[1432]: time="2024-11-12T17:42:48.985277920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 17:42:48.985310 containerd[1432]: time="2024-11-12T17:42:48.985291480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 17:42:48.985438 containerd[1432]: time="2024-11-12T17:42:48.985407680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985647160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985802880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985820920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985834040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985849120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985862480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985875240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985889320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985903120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985915000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985926360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985938080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985957280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987625 containerd[1432]: time="2024-11-12T17:42:48.985971120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.985988360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986000040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986012320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986025880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986037760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986050160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986061800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986075480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986087280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986104560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986116360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986130880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986150040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986162120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.987923 containerd[1432]: time="2024-11-12T17:42:48.986172000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 17:42:48.988162 containerd[1432]: time="2024-11-12T17:42:48.986301400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 12 17:42:48.988162 containerd[1432]: time="2024-11-12T17:42:48.986320760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 12 17:42:48.988162 containerd[1432]: time="2024-11-12T17:42:48.986333160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 12 17:42:48.988162 containerd[1432]: time="2024-11-12T17:42:48.986344560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 12 17:42:48.988162 containerd[1432]: time="2024-11-12T17:42:48.986353920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.988162 containerd[1432]: time="2024-11-12T17:42:48.986366720Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 12 17:42:48.988162 containerd[1432]: time="2024-11-12T17:42:48.986376440Z" level=info msg="NRI interface is disabled by configuration."
Nov 12 17:42:48.988162 containerd[1432]: time="2024-11-12T17:42:48.986386560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 12 17:42:48.988322 containerd[1432]: time="2024-11-12T17:42:48.986644120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 12 17:42:48.988322 containerd[1432]: time="2024-11-12T17:42:48.986698920Z" level=info msg="Connect containerd service"
Nov 12 17:42:48.988322 containerd[1432]: time="2024-11-12T17:42:48.986744680Z" level=info msg="using legacy CRI server"
Nov 12 17:42:48.988322 containerd[1432]: time="2024-11-12T17:42:48.986753200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 12 17:42:48.988322 containerd[1432]: time="2024-11-12T17:42:48.986831720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 12 17:42:48.988322 containerd[1432]: time="2024-11-12T17:42:48.988284440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 17:42:48.988695 containerd[1432]: time="2024-11-12T17:42:48.988654360Z" level=info msg="Start subscribing containerd event"
Nov 12 17:42:48.988793 containerd[1432]: time="2024-11-12T17:42:48.988686680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 12 17:42:48.988825 containerd[1432]: time="2024-11-12T17:42:48.988818680Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 12 17:42:48.988845 containerd[1432]: time="2024-11-12T17:42:48.988776280Z" level=info msg="Start recovering state"
Nov 12 17:42:48.988905 containerd[1432]: time="2024-11-12T17:42:48.988891120Z" level=info msg="Start event monitor"
Nov 12 17:42:48.988941 containerd[1432]: time="2024-11-12T17:42:48.988906440Z" level=info msg="Start snapshots syncer"
Nov 12 17:42:48.988941 containerd[1432]: time="2024-11-12T17:42:48.988915040Z" level=info msg="Start cni network conf syncer for default"
Nov 12 17:42:48.988941 containerd[1432]: time="2024-11-12T17:42:48.988921600Z" level=info msg="Start streaming server"
Nov 12 17:42:48.989093 containerd[1432]: time="2024-11-12T17:42:48.989036200Z" level=info msg="containerd successfully booted in 0.040472s"
Nov 12 17:42:48.989106 systemd[1]: Started containerd.service - containerd container runtime.
Nov 12 17:42:49.138306 tar[1430]: linux-arm64/LICENSE
Nov 12 17:42:49.138419 tar[1430]: linux-arm64/README.md
Nov 12 17:42:49.151125 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 12 17:42:49.723810 systemd-networkd[1362]: eth0: Gained IPv6LL
Nov 12 17:42:49.726273 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 17:42:49.728463 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 17:42:49.741140 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Nov 12 17:42:49.743595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:42:49.745716 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 17:42:49.759915 systemd[1]: coreos-metadata.service: Deactivated successfully.
Nov 12 17:42:49.761827 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Nov 12 17:42:49.763925 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 17:42:49.765847 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 12 17:42:50.246587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:42:50.248193 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 12 17:42:50.250107 (kubelet)[1516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 17:42:50.255530 systemd[1]: Startup finished in 546ms (kernel) + 4.388s (initrd) + 3.197s (userspace) = 8.132s.
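
The summary adds the kernel, initrd, and userspace phases; the parts are rounded independently, which is why 546ms + 4.388s + 3.197s reads a millisecond short of the 8.132s total. The per-unit breakdown behind the userspace figure can be reproduced after boot with standard systemd tooling:

    systemd-analyze          # re-prints the phase summary above
    systemd-analyze blame    # per-unit startup cost, slowest first
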
Nov 12 17:42:50.694466 kubelet[1516]: E1112 17:42:50.694344    1516 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 17:42:50.697144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 17:42:50.697312 systemd[1]: kubelet.service: Failed with result 'exit-code'.
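
This crash is the expected pre-bootstrap state, and it recurs on every scheduled restart below (17:43:01, 17:43:11): /var/lib/kubelet/config.yaml does not exist until kubeadm writes it. A hedged sketch of the step that normally ends the loop (flags illustrative; the version matches the images pulled later in this log):

    # On a control-plane node, kubeadm writes /var/lib/kubelet/config.yaml
    # and the kubelet's next restart picks it up:
    sudo kubeadm init --kubernetes-version v1.31.2
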
Nov 12 17:42:55.994424 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 12 17:42:55.995550 systemd[1]: Started sshd@0-10.0.0.44:22-10.0.0.1:52888.service - OpenSSH per-connection server daemon (10.0.0.1:52888).
Nov 12 17:42:56.058085 sshd[1529]: Accepted publickey for core from 10.0.0.1 port 52888 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:42:56.059938 sshd[1529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:42:56.078263 systemd-logind[1415]: New session 1 of user core.
Nov 12 17:42:56.079287 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 12 17:42:56.089011 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 12 17:42:56.098900 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 12 17:42:56.101292 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 12 17:42:56.108394 (systemd)[1533]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 12 17:42:56.194578 systemd[1533]: Queued start job for default target default.target.
Nov 12 17:42:56.204672 systemd[1533]: Created slice app.slice - User Application Slice.
Nov 12 17:42:56.204716 systemd[1533]: Reached target paths.target - Paths.
Nov 12 17:42:56.204728 systemd[1533]: Reached target timers.target - Timers.
Nov 12 17:42:56.206000 systemd[1533]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 12 17:42:56.216356 systemd[1533]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 12 17:42:56.216425 systemd[1533]: Reached target sockets.target - Sockets.
Nov 12 17:42:56.216438 systemd[1533]: Reached target basic.target - Basic System.
Nov 12 17:42:56.216474 systemd[1533]: Reached target default.target - Main User Target.
Nov 12 17:42:56.216501 systemd[1533]: Startup finished in 102ms.
Nov 12 17:42:56.216805 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 12 17:42:56.218191 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 12 17:42:56.284607 systemd[1]: Started sshd@1-10.0.0.44:22-10.0.0.1:52890.service - OpenSSH per-connection server daemon (10.0.0.1:52890).
Nov 12 17:42:56.345346 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 52890 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:42:56.347116 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:42:56.351757 systemd-logind[1415]: New session 2 of user core.
Nov 12 17:42:56.360894 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 12 17:42:56.414124 sshd[1544]: pam_unix(sshd:session): session closed for user core
Nov 12 17:42:56.422071 systemd[1]: sshd@1-10.0.0.44:22-10.0.0.1:52890.service: Deactivated successfully.
Nov 12 17:42:56.425186 systemd[1]: session-2.scope: Deactivated successfully.
Nov 12 17:42:56.427522 systemd-logind[1415]: Session 2 logged out. Waiting for processes to exit.
Nov 12 17:42:56.428708 systemd[1]: Started sshd@2-10.0.0.44:22-10.0.0.1:52906.service - OpenSSH per-connection server daemon (10.0.0.1:52906).
Nov 12 17:42:56.429498 systemd-logind[1415]: Removed session 2.
Nov 12 17:42:56.467436 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 52906 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:42:56.468790 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:42:56.473089 systemd-logind[1415]: New session 3 of user core.
Nov 12 17:42:56.483904 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 12 17:42:56.533081 sshd[1551]: pam_unix(sshd:session): session closed for user core
Nov 12 17:42:56.544310 systemd[1]: sshd@2-10.0.0.44:22-10.0.0.1:52906.service: Deactivated successfully.
Nov 12 17:42:56.547964 systemd[1]: session-3.scope: Deactivated successfully.
Nov 12 17:42:56.549253 systemd-logind[1415]: Session 3 logged out. Waiting for processes to exit.
Nov 12 17:42:56.556044 systemd[1]: Started sshd@3-10.0.0.44:22-10.0.0.1:52912.service - OpenSSH per-connection server daemon (10.0.0.1:52912).
Nov 12 17:42:56.557105 systemd-logind[1415]: Removed session 3.
Nov 12 17:42:56.589925 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 52912 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:42:56.591250 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:42:56.595485 systemd-logind[1415]: New session 4 of user core.
Nov 12 17:42:56.605880 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 12 17:42:56.659083 sshd[1558]: pam_unix(sshd:session): session closed for user core
Nov 12 17:42:56.668019 systemd[1]: sshd@3-10.0.0.44:22-10.0.0.1:52912.service: Deactivated successfully.
Nov 12 17:42:56.669379 systemd[1]: session-4.scope: Deactivated successfully.
Nov 12 17:42:56.670593 systemd-logind[1415]: Session 4 logged out. Waiting for processes to exit.
Nov 12 17:42:56.671710 systemd[1]: Started sshd@4-10.0.0.44:22-10.0.0.1:52920.service - OpenSSH per-connection server daemon (10.0.0.1:52920).
Nov 12 17:42:56.672590 systemd-logind[1415]: Removed session 4.
Nov 12 17:42:56.707646 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 52920 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:42:56.708914 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:42:56.713047 systemd-logind[1415]: New session 5 of user core.
Nov 12 17:42:56.724907 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 12 17:42:56.784277 sudo[1568]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 12 17:42:56.784559 sudo[1568]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 17:42:56.797536 sudo[1568]: pam_unix(sudo:session): session closed for user root
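
The session's single command switches SELinux into enforcing mode. Standard SELinux userland confirms the switch; both of these are read-only checks:

    getenforce    # prints "Enforcing" after setenforce 1
    sestatus      # mode, loaded policy, and selinuxfs mount point
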
Nov 12 17:42:56.799443 sshd[1565]: pam_unix(sshd:session): session closed for user core
Nov 12 17:42:56.811312 systemd[1]: sshd@4-10.0.0.44:22-10.0.0.1:52920.service: Deactivated successfully.
Nov 12 17:42:56.814123 systemd[1]: session-5.scope: Deactivated successfully.
Nov 12 17:42:56.815460 systemd-logind[1415]: Session 5 logged out. Waiting for processes to exit.
Nov 12 17:42:56.818057 systemd[1]: Started sshd@5-10.0.0.44:22-10.0.0.1:52922.service - OpenSSH per-connection server daemon (10.0.0.1:52922).
Nov 12 17:42:56.818829 systemd-logind[1415]: Removed session 5.
Nov 12 17:42:56.853853 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 52922 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:42:56.855193 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:42:56.859373 systemd-logind[1415]: New session 6 of user core.
Nov 12 17:42:56.869912 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 12 17:42:56.922255 sudo[1577]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 12 17:42:56.922543 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 17:42:56.926204 sudo[1577]: pam_unix(sudo:session): session closed for user root
Nov 12 17:42:56.931197 sudo[1576]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 12 17:42:56.931489 sudo[1576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 17:42:56.945992 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 12 17:42:56.947167 auditctl[1580]: No rules
Nov 12 17:42:56.948014 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 12 17:42:56.949776 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 12 17:42:56.951500 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 17:42:56.975297 augenrules[1598]: No rules
Nov 12 17:42:56.976499 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
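
The sequence above is the stock augenrules reload path: the two drop-in files are deleted from /etc/audit/rules.d, audit-rules.service is restarted, auditctl flushes the kernel rule list ("No rules"), and augenrules recompiles whatever remains in rules.d -- also "No rules", since the deleted pair was all there was. The equivalent manual inspection with standard auditd tooling:

    sudo auditctl -l          # rules currently loaded in the kernel
    sudo augenrules --check   # does rules.d match what is loaded?
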
Nov 12 17:42:56.977940 sudo[1576]: pam_unix(sudo:session): session closed for user root
Nov 12 17:42:56.979454 sshd[1573]: pam_unix(sshd:session): session closed for user core
Nov 12 17:42:56.992220 systemd[1]: sshd@5-10.0.0.44:22-10.0.0.1:52922.service: Deactivated successfully.
Nov 12 17:42:56.993634 systemd[1]: session-6.scope: Deactivated successfully.
Nov 12 17:42:56.994900 systemd-logind[1415]: Session 6 logged out. Waiting for processes to exit.
Nov 12 17:42:56.996036 systemd[1]: Started sshd@6-10.0.0.44:22-10.0.0.1:52938.service - OpenSSH per-connection server daemon (10.0.0.1:52938).
Nov 12 17:42:56.997102 systemd-logind[1415]: Removed session 6.
Nov 12 17:42:57.032372 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 52938 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:42:57.033589 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:42:57.037994 systemd-logind[1415]: New session 7 of user core.
Nov 12 17:42:57.047895 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 12 17:42:57.100002 sudo[1609]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 12 17:42:57.100264 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 12 17:42:57.420998 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 12 17:42:57.421114 (dockerd)[1628]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 12 17:42:57.677275 dockerd[1628]: time="2024-11-12T17:42:57.677131893Z" level=info msg="Starting up"
Nov 12 17:42:57.776649 systemd[1]: var-lib-docker-metacopy\x2dcheck1638174683-merged.mount: Deactivated successfully.
Nov 12 17:42:57.786091 dockerd[1628]: time="2024-11-12T17:42:57.786045658Z" level=info msg="Loading containers: start."
Nov 12 17:42:57.874763 kernel: Initializing XFRM netlink socket
Nov 12 17:42:57.937555 systemd-networkd[1362]: docker0: Link UP
Nov 12 17:42:57.964074 dockerd[1628]: time="2024-11-12T17:42:57.964032281Z" level=info msg="Loading containers: done."
Nov 12 17:42:57.980092 dockerd[1628]: time="2024-11-12T17:42:57.980037734Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 12 17:42:57.980242 dockerd[1628]: time="2024-11-12T17:42:57.980143459Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 12 17:42:57.980300 dockerd[1628]: time="2024-11-12T17:42:57.980241943Z" level=info msg="Daemon has completed initialization"
Nov 12 17:42:58.017627 dockerd[1628]: time="2024-11-12T17:42:58.017491960Z" level=info msg="API listen on /run/docker.sock"
Nov 12 17:42:58.018435 systemd[1]: Started docker.service - Docker Application Container Engine.
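
dockerd settles on overlay2; the warning above only means the kernel's CONFIG_OVERLAY_FS_REDIRECT_DIR forces the slower naive-diff path when building images, not that anything is broken. Driver and daemon version are queryable through the client with standard format strings:

    docker info --format '{{.Driver}} {{.ServerVersion}}'   # overlay2 26.1.0
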
Nov 12 17:42:58.430908 containerd[1432]: time="2024-11-12T17:42:58.430694369Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\""
Nov 12 17:42:59.130719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount879292227.mount: Deactivated successfully.
Nov 12 17:43:00.250036 containerd[1432]: time="2024-11-12T17:43:00.249972360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:00.250479 containerd[1432]: time="2024-11-12T17:43:00.250436298Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.2: active requests=0, bytes read=25616007"
Nov 12 17:43:00.251487 containerd[1432]: time="2024-11-12T17:43:00.251424815Z" level=info msg="ImageCreate event name:\"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:00.254389 containerd[1432]: time="2024-11-12T17:43:00.254353391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:00.255764 containerd[1432]: time="2024-11-12T17:43:00.255562833Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.2\" with image id \"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\", size \"25612805\" in 1.824811421s"
Nov 12 17:43:00.255764 containerd[1432]: time="2024-11-12T17:43:00.255602115Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\" returns image reference \"sha256:f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270\""
Nov 12 17:43:00.256702 containerd[1432]: time="2024-11-12T17:43:00.256672837Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\""
Nov 12 17:43:00.947918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 12 17:43:00.956999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:43:01.049982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:43:01.053899 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 17:43:01.088893 kubelet[1838]: E1112 17:43:01.088839    1838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 17:43:01.091773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 17:43:01.091916 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 17:43:02.020275 containerd[1432]: time="2024-11-12T17:43:02.020217951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:02.020786 containerd[1432]: time="2024-11-12T17:43:02.020744860Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.2: active requests=0, bytes read=22469649"
Nov 12 17:43:02.021662 containerd[1432]: time="2024-11-12T17:43:02.021609646Z" level=info msg="ImageCreate event name:\"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:02.024569 containerd[1432]: time="2024-11-12T17:43:02.024536555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:02.025939 containerd[1432]: time="2024-11-12T17:43:02.025906341Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.2\" with image id \"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\", size \"23872272\" in 1.769197931s"
Nov 12 17:43:02.026013 containerd[1432]: time="2024-11-12T17:43:02.025943502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\" returns image reference \"sha256:9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba\""
Nov 12 17:43:02.026539 containerd[1432]: time="2024-11-12T17:43:02.026520079Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\""
Nov 12 17:43:03.416406 containerd[1432]: time="2024-11-12T17:43:03.416331118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:03.417061 containerd[1432]: time="2024-11-12T17:43:03.417024562Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.2: active requests=0, bytes read=17027038"
Nov 12 17:43:03.417818 containerd[1432]: time="2024-11-12T17:43:03.417778262Z" level=info msg="ImageCreate event name:\"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:03.420746 containerd[1432]: time="2024-11-12T17:43:03.420694032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:03.422049 containerd[1432]: time="2024-11-12T17:43:03.422015868Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.2\" with image id \"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\", size \"18429679\" in 1.395381047s"
Nov 12 17:43:03.422100 containerd[1432]: time="2024-11-12T17:43:03.422054601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\" returns image reference \"sha256:d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a\""
Nov 12 17:43:03.422471 containerd[1432]: time="2024-11-12T17:43:03.422435280Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\""
Nov 12 17:43:04.545205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2028952481.mount: Deactivated successfully.
Nov 12 17:43:04.759139 containerd[1432]: time="2024-11-12T17:43:04.759087469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:04.759606 containerd[1432]: time="2024-11-12T17:43:04.759571958Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.2: active requests=0, bytes read=26769666"
Nov 12 17:43:04.760365 containerd[1432]: time="2024-11-12T17:43:04.760318301Z" level=info msg="ImageCreate event name:\"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:04.762799 containerd[1432]: time="2024-11-12T17:43:04.762746907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:04.763490 containerd[1432]: time="2024-11-12T17:43:04.763302506Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.2\" with image id \"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba\", repo tag \"registry.k8s.io/kube-proxy:v1.31.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\", size \"26768683\" in 1.340723589s"
Nov 12 17:43:04.763490 containerd[1432]: time="2024-11-12T17:43:04.763339189Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\" returns image reference \"sha256:021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba\""
Nov 12 17:43:04.763846 containerd[1432]: time="2024-11-12T17:43:04.763760218Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Nov 12 17:43:05.403174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1847437203.mount: Deactivated successfully.
Nov 12 17:43:06.223627 containerd[1432]: time="2024-11-12T17:43:06.223558537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:06.224971 containerd[1432]: time="2024-11-12T17:43:06.224918404Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Nov 12 17:43:06.225827 containerd[1432]: time="2024-11-12T17:43:06.225764622Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:06.233194 containerd[1432]: time="2024-11-12T17:43:06.233135416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:06.234341 containerd[1432]: time="2024-11-12T17:43:06.234291769Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.470494846s"
Nov 12 17:43:06.234341 containerd[1432]: time="2024-11-12T17:43:06.234340497Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Nov 12 17:43:06.235015 containerd[1432]: time="2024-11-12T17:43:06.234966235Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 12 17:43:06.716042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187690636.mount: Deactivated successfully.
Nov 12 17:43:06.720515 containerd[1432]: time="2024-11-12T17:43:06.720468206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:06.721237 containerd[1432]: time="2024-11-12T17:43:06.721208446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Nov 12 17:43:06.721975 containerd[1432]: time="2024-11-12T17:43:06.721909928Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:06.724591 containerd[1432]: time="2024-11-12T17:43:06.724550258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:06.726316 containerd[1432]: time="2024-11-12T17:43:06.726181463Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 491.175788ms"
Nov 12 17:43:06.726316 containerd[1432]: time="2024-11-12T17:43:06.726210570Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Nov 12 17:43:06.726630 containerd[1432]: time="2024-11-12T17:43:06.726604531Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Nov 12 17:43:07.211857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount544569824.mount: Deactivated successfully.
Nov 12 17:43:09.731090 containerd[1432]: time="2024-11-12T17:43:09.731041333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:09.731710 containerd[1432]: time="2024-11-12T17:43:09.731660202Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406104"
Nov 12 17:43:09.732311 containerd[1432]: time="2024-11-12T17:43:09.732286457Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:09.736525 containerd[1432]: time="2024-11-12T17:43:09.736466824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:09.737745 containerd[1432]: time="2024-11-12T17:43:09.737605344Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.010966655s"
Nov 12 17:43:09.737745 containerd[1432]: time="2024-11-12T17:43:09.737642992Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Nov 12 17:43:11.342306 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 12 17:43:11.352003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:43:11.479061 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:43:11.483035 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 17:43:11.515651 kubelet[1995]: E1112 17:43:11.515561    1995 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 17:43:11.518127 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 17:43:11.518267 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 17:43:13.592228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:43:13.600933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:43:13.620991 systemd[1]: Reloading requested from client PID 2010 ('systemctl') (unit session-7.scope)...
Nov 12 17:43:13.621009 systemd[1]: Reloading...
Nov 12 17:43:13.689815 zram_generator::config[2052]: No configuration found.
Nov 12 17:43:13.817553 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 17:43:13.869567 systemd[1]: Reloading finished in 248 ms.
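
The daemon reload also surfaces a one-line lint: docker.socket still listens on the legacy /var/run/docker.sock path, which systemd rewrites on the fly to /run/docker.sock. The durable fix is a drop-in rather than editing the vendor unit -- a sketch using the standard override workflow:

    sudo systemctl edit docker.socket
    # in the override, clear and re-set the stream:
    #   [Socket]
    #   ListenStream=
    #   ListenStream=/run/docker.sock
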
Nov 12 17:43:13.915617 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:43:13.917624 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 17:43:13.917805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:43:13.919427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:43:14.008729 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:43:14.012302 (kubelet)[2096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 17:43:14.045166 kubelet[2096]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 17:43:14.045166 kubelet[2096]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 17:43:14.045166 kubelet[2096]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
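
Two of the three warnings point at the same remedy -- move the setting into the kubelet config file -- while --pod-infra-container-image is going away outright because the sandbox image now comes from the CRI. A hedged sketch of the config-file equivalents (kubelet.config.k8s.io/v1beta1 field names from the upstream reference; values mirror this host's flags, cgroup driver, and the flexvolume path logged below):

    # Illustrative fragment only -- written to /tmp so as not to clobber
    # the kubeadm-managed /var/lib/kubelet/config.yaml:
    cat <<'EOF' > /tmp/kubelet-config-sketch.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF
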
Nov 12 17:43:14.045453 kubelet[2096]: I1112 17:43:14.045342    2096 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 17:43:14.542365 kubelet[2096]: I1112 17:43:14.542314    2096 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Nov 12 17:43:14.542365 kubelet[2096]: I1112 17:43:14.542351    2096 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 17:43:14.542625 kubelet[2096]: I1112 17:43:14.542603    2096 server.go:929] "Client rotation is on, will bootstrap in background"
Nov 12 17:43:14.593416 kubelet[2096]: E1112 17:43:14.593360    2096 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError"
Nov 12 17:43:14.594310 kubelet[2096]: I1112 17:43:14.594091    2096 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 17:43:14.601540 kubelet[2096]: E1112 17:43:14.601500    2096 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 12 17:43:14.601540 kubelet[2096]: I1112 17:43:14.601532    2096 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 12 17:43:14.604811 kubelet[2096]: I1112 17:43:14.604785    2096 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Nov 12 17:43:14.605532 kubelet[2096]: I1112 17:43:14.605506    2096 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 12 17:43:14.605687 kubelet[2096]: I1112 17:43:14.605652    2096 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 17:43:14.605862 kubelet[2096]: I1112 17:43:14.605681    2096 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 12 17:43:14.605993 kubelet[2096]: I1112 17:43:14.605983    2096 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 17:43:14.606028 kubelet[2096]: I1112 17:43:14.605994    2096 container_manager_linux.go:300] "Creating device plugin manager"
Nov 12 17:43:14.606172 kubelet[2096]: I1112 17:43:14.606160    2096 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 17:43:14.607816 kubelet[2096]: I1112 17:43:14.607775    2096 kubelet.go:408] "Attempting to sync node with API server"
Nov 12 17:43:14.607816 kubelet[2096]: I1112 17:43:14.607802    2096 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 17:43:14.607891 kubelet[2096]: I1112 17:43:14.607831    2096 kubelet.go:314] "Adding apiserver pod source"
Nov 12 17:43:14.607891 kubelet[2096]: I1112 17:43:14.607842    2096 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 17:43:14.610591 kubelet[2096]: I1112 17:43:14.610385    2096 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 12 17:43:14.611209 kubelet[2096]: W1112 17:43:14.611162    2096 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused
Nov 12 17:43:14.611255 kubelet[2096]: E1112 17:43:14.611220    2096 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError"
Nov 12 17:43:14.611310 kubelet[2096]: W1112 17:43:14.611285    2096 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused
Nov 12 17:43:14.611339 kubelet[2096]: E1112 17:43:14.611314    2096 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError"
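
Every "connection refused" against 10.0.0.44:6443 in this stretch is the usual kubeadm chicken-and-egg: the kubelet's informers are dialing an apiserver that the kubelet itself will launch as a static pod, so the reflectors simply retry until the port opens (note the lease controller's retry interval below doubling from 200ms to 400ms). Two read-only probes of that state from the node:

    curl -k https://10.0.0.44:6443/healthz   # refused until the apiserver pod runs
    sudo crictl ps                           # shows static-pod containers once created
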
Nov 12 17:43:14.614669 kubelet[2096]: I1112 17:43:14.614574    2096 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 17:43:14.615514 kubelet[2096]: W1112 17:43:14.615404    2096 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 12 17:43:14.616808 kubelet[2096]: I1112 17:43:14.616790    2096 server.go:1269] "Started kubelet"
Nov 12 17:43:14.618292 kubelet[2096]: I1112 17:43:14.617905    2096 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 17:43:14.618292 kubelet[2096]: I1112 17:43:14.617962    2096 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 17:43:14.618292 kubelet[2096]: I1112 17:43:14.618150    2096 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 17:43:14.620311 kubelet[2096]: I1112 17:43:14.618452    2096 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 17:43:14.620311 kubelet[2096]: I1112 17:43:14.618586    2096 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 12 17:43:14.620311 kubelet[2096]: I1112 17:43:14.619349    2096 server.go:460] "Adding debug handlers to kubelet server"
Nov 12 17:43:14.620311 kubelet[2096]: E1112 17:43:14.619972    2096 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 17:43:14.620311 kubelet[2096]: I1112 17:43:14.620067    2096 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 12 17:43:14.620311 kubelet[2096]: I1112 17:43:14.620289    2096 reconciler.go:26] "Reconciler: start to sync state"
Nov 12 17:43:14.620311 kubelet[2096]: I1112 17:43:14.620317    2096 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 12 17:43:14.621032 kubelet[2096]: W1112 17:43:14.620625    2096 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused
Nov 12 17:43:14.621032 kubelet[2096]: E1112 17:43:14.620682    2096 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError"
Nov 12 17:43:14.621146 kubelet[2096]: I1112 17:43:14.621116    2096 factory.go:221] Registration of the systemd container factory successfully
Nov 12 17:43:14.621216 kubelet[2096]: I1112 17:43:14.621197    2096 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 17:43:14.621243 kubelet[2096]: E1112 17:43:14.621221    2096 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="200ms"
Nov 12 17:43:14.626062 kubelet[2096]: I1112 17:43:14.623236    2096 factory.go:221] Registration of the containerd container factory successfully
Nov 12 17:43:14.627302 kubelet[2096]: E1112 17:43:14.626349    2096 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1807497fb6618632  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-11-12 17:43:14.616763954 +0000 UTC m=+0.601375666,LastTimestamp:2024-11-12 17:43:14.616763954 +0000 UTC m=+0.601375666,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 12 17:43:14.629160 kubelet[2096]: E1112 17:43:14.629137    2096 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 17:43:14.633815 kubelet[2096]: I1112 17:43:14.633683    2096 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 17:43:14.634903 kubelet[2096]: I1112 17:43:14.634605    2096 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 17:43:14.634903 kubelet[2096]: I1112 17:43:14.634626    2096 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 17:43:14.634903 kubelet[2096]: I1112 17:43:14.634646    2096 kubelet.go:2321] "Starting kubelet main sync loop"
Nov 12 17:43:14.634903 kubelet[2096]: E1112 17:43:14.634684    2096 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 17:43:14.638937 kubelet[2096]: W1112 17:43:14.638894    2096 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused
Nov 12 17:43:14.639059 kubelet[2096]: E1112 17:43:14.639041    2096 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError"
Nov 12 17:43:14.639186 kubelet[2096]: I1112 17:43:14.639173    2096 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 17:43:14.639238 kubelet[2096]: I1112 17:43:14.639229    2096 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 17:43:14.639342 kubelet[2096]: I1112 17:43:14.639331    2096 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 17:43:14.702857 kubelet[2096]: I1112 17:43:14.702825    2096 policy_none.go:49] "None policy: Start"
Nov 12 17:43:14.703882 kubelet[2096]: I1112 17:43:14.703856    2096 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 17:43:14.703956 kubelet[2096]: I1112 17:43:14.703889    2096 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 17:43:14.709961 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 12 17:43:14.720869 kubelet[2096]: E1112 17:43:14.720837    2096 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 17:43:14.725449 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 12 17:43:14.728014 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
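
These three slices are the kubelet's QoS hierarchy expressed through the systemd cgroup driver: kubepods.slice at the top, with burstable and besteffort children; per-pod slices are then nested under the matching QoS slice, as happens at 17:43:14.94 below for the three burstable control-plane pods. The tree is inspectable like any other cgroup subtree:

    systemctl status kubepods.slice
    systemd-cgls --unit kubepods.slice    # live cgroup tree for the slice
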
Nov 12 17:43:14.735791 kubelet[2096]: E1112 17:43:14.735752    2096 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 17:43:14.739448 kubelet[2096]: I1112 17:43:14.739397    2096 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 17:43:14.739767 kubelet[2096]: I1112 17:43:14.739576    2096 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 12 17:43:14.739767 kubelet[2096]: I1112 17:43:14.739593    2096 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 12 17:43:14.739866 kubelet[2096]: I1112 17:43:14.739811    2096 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 17:43:14.741156 kubelet[2096]: E1112 17:43:14.741056    2096 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 12 17:43:14.822147 kubelet[2096]: E1112 17:43:14.822045    2096 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="400ms"
Nov 12 17:43:14.841215 kubelet[2096]: I1112 17:43:14.841172    2096 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 17:43:14.843339 kubelet[2096]: E1112 17:43:14.843307    2096 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost"
Nov 12 17:43:14.947783 systemd[1]: Created slice kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice - libcontainer container kubepods-burstable-pod33673bc39d15d92b38b41cdd12700fe3.slice.
Nov 12 17:43:14.967640 systemd[1]: Created slice kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice - libcontainer container kubepods-burstable-pod2bd0c21dd05cc63bc1db25732dedb07c.slice.
Nov 12 17:43:14.982582 systemd[1]: Created slice kubepods-burstable-pod2e559769d664910718a4cc241cc8c76c.slice - libcontainer container kubepods-burstable-pod2e559769d664910718a4cc241cc8c76c.slice.
Nov 12 17:43:15.022769 kubelet[2096]: I1112 17:43:15.022689    2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e559769d664910718a4cc241cc8c76c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e559769d664910718a4cc241cc8c76c\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 17:43:15.022769 kubelet[2096]: I1112 17:43:15.022811    2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e559769d664910718a4cc241cc8c76c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2e559769d664910718a4cc241cc8c76c\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 17:43:15.022769 kubelet[2096]: I1112 17:43:15.022840    2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:43:15.022769 kubelet[2096]: I1112 17:43:15.022856    2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:43:15.022769 kubelet[2096]: I1112 17:43:15.022872    2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:43:15.023132 kubelet[2096]: I1112 17:43:15.022887    2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e559769d664910718a4cc241cc8c76c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e559769d664910718a4cc241cc8c76c\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 17:43:15.023132 kubelet[2096]: I1112 17:43:15.022901    2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:43:15.023132 kubelet[2096]: I1112 17:43:15.022917    2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:43:15.023132 kubelet[2096]: I1112 17:43:15.022956    2096 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost"
Nov 12 17:43:15.044427 kubelet[2096]: I1112 17:43:15.044381    2096 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 17:43:15.044711 kubelet[2096]: E1112 17:43:15.044685    2096 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost"
Nov 12 17:43:15.223559 kubelet[2096]: E1112 17:43:15.223432    2096 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="800ms"
Nov 12 17:43:15.266972 kubelet[2096]: E1112 17:43:15.266928    2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:15.267666 containerd[1432]: time="2024-11-12T17:43:15.267560079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,}"
Nov 12 17:43:15.281939 kubelet[2096]: E1112 17:43:15.281896    2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:15.282404 containerd[1432]: time="2024-11-12T17:43:15.282360479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,}"
Nov 12 17:43:15.284766 kubelet[2096]: E1112 17:43:15.284674    2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:15.285095 containerd[1432]: time="2024-11-12T17:43:15.285053646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2e559769d664910718a4cc241cc8c76c,Namespace:kube-system,Attempt:0,}"
Nov 12 17:43:15.446006 kubelet[2096]: W1112 17:43:15.445905    2096 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused
Nov 12 17:43:15.446006 kubelet[2096]: E1112 17:43:15.445999    2096 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError"
Nov 12 17:43:15.446252 kubelet[2096]: I1112 17:43:15.446224    2096 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 17:43:15.446565 kubelet[2096]: E1112 17:43:15.446534    2096 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost"
Nov 12 17:43:15.529662 kubelet[2096]: W1112 17:43:15.529515    2096 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused
Nov 12 17:43:15.529662 kubelet[2096]: E1112 17:43:15.529585    2096 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError"
Nov 12 17:43:15.595728 kubelet[2096]: W1112 17:43:15.595649    2096 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused
Nov 12 17:43:15.595855 kubelet[2096]: E1112 17:43:15.595739    2096 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError"
Nov 12 17:43:15.684823 kubelet[2096]: W1112 17:43:15.684750    2096 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused
Nov 12 17:43:15.684958 kubelet[2096]: E1112 17:43:15.684823    2096 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.44:6443: connect: connection refused" logger="UnhandledError"
Nov 12 17:43:15.908548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount756570602.mount: Deactivated successfully.
Nov 12 17:43:15.913647 containerd[1432]: time="2024-11-12T17:43:15.913561548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:43:15.915309 containerd[1432]: time="2024-11-12T17:43:15.915277257Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Nov 12 17:43:15.915959 containerd[1432]: time="2024-11-12T17:43:15.915922766Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:43:15.916914 containerd[1432]: time="2024-11-12T17:43:15.916884560Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:43:15.917218 containerd[1432]: time="2024-11-12T17:43:15.917145961Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 17:43:15.917835 containerd[1432]: time="2024-11-12T17:43:15.917808416Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:43:15.918173 containerd[1432]: time="2024-11-12T17:43:15.918143289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 17:43:15.920049 containerd[1432]: time="2024-11-12T17:43:15.920019725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:43:15.923071 containerd[1432]: time="2024-11-12T17:43:15.923041876Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 655.401755ms"
Nov 12 17:43:15.924642 containerd[1432]: time="2024-11-12T17:43:15.924500631Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 642.052899ms"
Nov 12 17:43:15.926509 containerd[1432]: time="2024-11-12T17:43:15.926469008Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 641.348699ms"
Nov 12 17:43:16.024516 kubelet[2096]: E1112 17:43:16.024452    2096 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="1.6s"
Nov 12 17:43:16.122828 containerd[1432]: time="2024-11-12T17:43:16.122690425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:43:16.122828 containerd[1432]: time="2024-11-12T17:43:16.122803857Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:43:16.122993 containerd[1432]: time="2024-11-12T17:43:16.122851360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:16.124130 containerd[1432]: time="2024-11-12T17:43:16.124040635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:16.125534 containerd[1432]: time="2024-11-12T17:43:16.125209964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:43:16.125534 containerd[1432]: time="2024-11-12T17:43:16.125261953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:43:16.125534 containerd[1432]: time="2024-11-12T17:43:16.125273449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:16.125534 containerd[1432]: time="2024-11-12T17:43:16.125337415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:16.126022 containerd[1432]: time="2024-11-12T17:43:16.125665575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:43:16.126022 containerd[1432]: time="2024-11-12T17:43:16.125709754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:43:16.126022 containerd[1432]: time="2024-11-12T17:43:16.125736149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:16.126022 containerd[1432]: time="2024-11-12T17:43:16.125806604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:16.146869 systemd[1]: Started cri-containerd-fda2c636be0c89521bf6f53fa3b09a96439db23e9020a81019be1308b8fd6c84.scope - libcontainer container fda2c636be0c89521bf6f53fa3b09a96439db23e9020a81019be1308b8fd6c84.
Nov 12 17:43:16.151178 systemd[1]: Started cri-containerd-6da0bef82783d9f4ad4aa9a7f7a8b8a4aba8c226f5f18410096e3f0f5ce0b06d.scope - libcontainer container 6da0bef82783d9f4ad4aa9a7f7a8b8a4aba8c226f5f18410096e3f0f5ce0b06d.
Nov 12 17:43:16.153079 systemd[1]: Started cri-containerd-bd209ea23092b8f5fbbcadd58f0e7867c2d7c77ef8269d45a4a944310e54e3bb.scope - libcontainer container bd209ea23092b8f5fbbcadd58f0e7867c2d7c77ef8269d45a4a944310e54e3bb.
Nov 12 17:43:16.183433 containerd[1432]: time="2024-11-12T17:43:16.182577421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:2bd0c21dd05cc63bc1db25732dedb07c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fda2c636be0c89521bf6f53fa3b09a96439db23e9020a81019be1308b8fd6c84\""
Nov 12 17:43:16.185256 kubelet[2096]: E1112 17:43:16.185060    2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:16.187264 containerd[1432]: time="2024-11-12T17:43:16.187223572Z" level=info msg="CreateContainer within sandbox \"fda2c636be0c89521bf6f53fa3b09a96439db23e9020a81019be1308b8fd6c84\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 12 17:43:16.190999 containerd[1432]: time="2024-11-12T17:43:16.190955898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33673bc39d15d92b38b41cdd12700fe3,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd209ea23092b8f5fbbcadd58f0e7867c2d7c77ef8269d45a4a944310e54e3bb\""
Nov 12 17:43:16.191392 containerd[1432]: time="2024-11-12T17:43:16.191366168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2e559769d664910718a4cc241cc8c76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6da0bef82783d9f4ad4aa9a7f7a8b8a4aba8c226f5f18410096e3f0f5ce0b06d\""
Nov 12 17:43:16.191631 kubelet[2096]: E1112 17:43:16.191611    2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:16.191895 kubelet[2096]: E1112 17:43:16.191866    2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:16.193407 containerd[1432]: time="2024-11-12T17:43:16.193231870Z" level=info msg="CreateContainer within sandbox \"bd209ea23092b8f5fbbcadd58f0e7867c2d7c77ef8269d45a4a944310e54e3bb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 12 17:43:16.194184 containerd[1432]: time="2024-11-12T17:43:16.194156710Z" level=info msg="CreateContainer within sandbox \"6da0bef82783d9f4ad4aa9a7f7a8b8a4aba8c226f5f18410096e3f0f5ce0b06d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 12 17:43:16.205368 containerd[1432]: time="2024-11-12T17:43:16.205332579Z" level=info msg="CreateContainer within sandbox \"fda2c636be0c89521bf6f53fa3b09a96439db23e9020a81019be1308b8fd6c84\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fe54b6b8afe0acc63b2dbeed7e62243ad4066269392e0c22b071f01197af988f\""
Nov 12 17:43:16.206283 containerd[1432]: time="2024-11-12T17:43:16.206247005Z" level=info msg="StartContainer for \"fe54b6b8afe0acc63b2dbeed7e62243ad4066269392e0c22b071f01197af988f\""
Nov 12 17:43:16.208566 containerd[1432]: time="2024-11-12T17:43:16.208482763Z" level=info msg="CreateContainer within sandbox \"6da0bef82783d9f4ad4aa9a7f7a8b8a4aba8c226f5f18410096e3f0f5ce0b06d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a9a1f533114f7979c65ab2d5debf9233a89c6d80f23d870d82f678e8ef8ec47a\""
Nov 12 17:43:16.208931 containerd[1432]: time="2024-11-12T17:43:16.208909255Z" level=info msg="StartContainer for \"a9a1f533114f7979c65ab2d5debf9233a89c6d80f23d870d82f678e8ef8ec47a\""
Nov 12 17:43:16.212957 containerd[1432]: time="2024-11-12T17:43:16.212913345Z" level=info msg="CreateContainer within sandbox \"bd209ea23092b8f5fbbcadd58f0e7867c2d7c77ef8269d45a4a944310e54e3bb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"55f1cb69648d69a16f77a3e0ba918de733f9cba5ef1ee3e997ec894e8deb328c\""
Nov 12 17:43:16.214734 containerd[1432]: time="2024-11-12T17:43:16.214260993Z" level=info msg="StartContainer for \"55f1cb69648d69a16f77a3e0ba918de733f9cba5ef1ee3e997ec894e8deb328c\""
Nov 12 17:43:16.231874 systemd[1]: Started cri-containerd-fe54b6b8afe0acc63b2dbeed7e62243ad4066269392e0c22b071f01197af988f.scope - libcontainer container fe54b6b8afe0acc63b2dbeed7e62243ad4066269392e0c22b071f01197af988f.
Nov 12 17:43:16.235765 systemd[1]: Started cri-containerd-55f1cb69648d69a16f77a3e0ba918de733f9cba5ef1ee3e997ec894e8deb328c.scope - libcontainer container 55f1cb69648d69a16f77a3e0ba918de733f9cba5ef1ee3e997ec894e8deb328c.
Nov 12 17:43:16.238918 systemd[1]: Started cri-containerd-a9a1f533114f7979c65ab2d5debf9233a89c6d80f23d870d82f678e8ef8ec47a.scope - libcontainer container a9a1f533114f7979c65ab2d5debf9233a89c6d80f23d870d82f678e8ef8ec47a.
Nov 12 17:43:16.247685 kubelet[2096]: I1112 17:43:16.247655    2096 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 17:43:16.248235 kubelet[2096]: E1112 17:43:16.247979    2096 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost"
Nov 12 17:43:16.266808 containerd[1432]: time="2024-11-12T17:43:16.266770495Z" level=info msg="StartContainer for \"fe54b6b8afe0acc63b2dbeed7e62243ad4066269392e0c22b071f01197af988f\" returns successfully"
Nov 12 17:43:16.292048 containerd[1432]: time="2024-11-12T17:43:16.289387467Z" level=info msg="StartContainer for \"55f1cb69648d69a16f77a3e0ba918de733f9cba5ef1ee3e997ec894e8deb328c\" returns successfully"
Nov 12 17:43:16.292048 containerd[1432]: time="2024-11-12T17:43:16.289472181Z" level=info msg="StartContainer for \"a9a1f533114f7979c65ab2d5debf9233a89c6d80f23d870d82f678e8ef8ec47a\" returns successfully"
Nov 12 17:43:16.644381 kubelet[2096]: E1112 17:43:16.644315    2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:16.648508 kubelet[2096]: E1112 17:43:16.648329    2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:16.649067 kubelet[2096]: E1112 17:43:16.649025    2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:17.652257 kubelet[2096]: E1112 17:43:17.652189    2096 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:17.698220 kubelet[2096]: E1112 17:43:17.698178    2096 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Nov 12 17:43:17.850561 kubelet[2096]: I1112 17:43:17.850150    2096 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 17:43:17.863192 kubelet[2096]: I1112 17:43:17.863157    2096 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Nov 12 17:43:18.611000 kubelet[2096]: I1112 17:43:18.610956    2096 apiserver.go:52] "Watching apiserver"
Nov 12 17:43:18.620497 kubelet[2096]: I1112 17:43:18.620449    2096 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 12 17:43:19.967655 systemd[1]: Reloading requested from client PID 2372 ('systemctl') (unit session-7.scope)...
Nov 12 17:43:19.967669 systemd[1]: Reloading...
Nov 12 17:43:20.037750 zram_generator::config[2416]: No configuration found.
Nov 12 17:43:20.120659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 17:43:20.185503 systemd[1]: Reloading finished in 217 ms.
Nov 12 17:43:20.219786 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:43:20.224015 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 17:43:20.224189 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:43:20.224231 systemd[1]: kubelet.service: Consumed 1.015s CPU time, 117.7M memory peak, 0B memory swap peak.
Nov 12 17:43:20.234082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:43:20.324262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:43:20.329022 (kubelet)[2454]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 17:43:20.391344 kubelet[2454]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 17:43:20.391344 kubelet[2454]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 17:43:20.391344 kubelet[2454]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 17:43:20.391757 kubelet[2454]: I1112 17:43:20.391403    2454 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 17:43:20.396657 kubelet[2454]: I1112 17:43:20.396608    2454 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Nov 12 17:43:20.396657 kubelet[2454]: I1112 17:43:20.396641    2454 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 17:43:20.396904 kubelet[2454]: I1112 17:43:20.396878    2454 server.go:929] "Client rotation is on, will bootstrap in background"
Nov 12 17:43:20.398195 kubelet[2454]: I1112 17:43:20.398172    2454 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 12 17:43:20.402717 kubelet[2454]: I1112 17:43:20.402681    2454 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 17:43:20.408881 kubelet[2454]: E1112 17:43:20.408839    2454 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 12 17:43:20.408881 kubelet[2454]: I1112 17:43:20.408880    2454 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 12 17:43:20.411914 kubelet[2454]: I1112 17:43:20.411884    2454 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Nov 12 17:43:20.412049 kubelet[2454]: I1112 17:43:20.412016    2454 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 12 17:43:20.412179 kubelet[2454]: I1112 17:43:20.412146    2454 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 17:43:20.412334 kubelet[2454]: I1112 17:43:20.412174    2454 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 12 17:43:20.412403 kubelet[2454]: I1112 17:43:20.412342    2454 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 17:43:20.412403 kubelet[2454]: I1112 17:43:20.412351    2454 container_manager_linux.go:300] "Creating device plugin manager"
Nov 12 17:43:20.412403 kubelet[2454]: I1112 17:43:20.412380    2454 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 17:43:20.412498 kubelet[2454]: I1112 17:43:20.412486    2454 kubelet.go:408] "Attempting to sync node with API server"
Nov 12 17:43:20.412530 kubelet[2454]: I1112 17:43:20.412502    2454 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 17:43:20.412552 kubelet[2454]: I1112 17:43:20.412536    2454 kubelet.go:314] "Adding apiserver pod source"
Nov 12 17:43:20.412577 kubelet[2454]: I1112 17:43:20.412554    2454 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 17:43:20.416035 kubelet[2454]: I1112 17:43:20.415999    2454 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 12 17:43:20.416486 kubelet[2454]: I1112 17:43:20.416461    2454 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 17:43:20.416894 kubelet[2454]: I1112 17:43:20.416869    2454 server.go:1269] "Started kubelet"
Nov 12 17:43:20.418800 kubelet[2454]: I1112 17:43:20.418758    2454 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 17:43:20.422246 kubelet[2454]: I1112 17:43:20.421140    2454 server.go:460] "Adding debug handlers to kubelet server"
Nov 12 17:43:20.423117 kubelet[2454]: I1112 17:43:20.423095    2454 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 17:43:20.424050 kubelet[2454]: I1112 17:43:20.423627    2454 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 12 17:43:20.424050 kubelet[2454]: I1112 17:43:20.418756    2454 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 17:43:20.424141 kubelet[2454]: I1112 17:43:20.424107    2454 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 17:43:20.424254 kubelet[2454]: I1112 17:43:20.424233    2454 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 12 17:43:20.424429 kubelet[2454]: E1112 17:43:20.424411    2454 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 12 17:43:20.425063 kubelet[2454]: I1112 17:43:20.425037    2454 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 12 17:43:20.425270 kubelet[2454]: I1112 17:43:20.425194    2454 reconciler.go:26] "Reconciler: start to sync state"
Nov 12 17:43:20.427145 kubelet[2454]: I1112 17:43:20.427099    2454 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 17:43:20.438330 kubelet[2454]: I1112 17:43:20.437627    2454 factory.go:221] Registration of the containerd container factory successfully
Nov 12 17:43:20.438330 kubelet[2454]: I1112 17:43:20.437649    2454 factory.go:221] Registration of the systemd container factory successfully
Nov 12 17:43:20.439143 kubelet[2454]: E1112 17:43:20.439120    2454 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 17:43:20.441949 kubelet[2454]: I1112 17:43:20.441907    2454 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 17:43:20.442832 kubelet[2454]: I1112 17:43:20.442809    2454 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 17:43:20.442832 kubelet[2454]: I1112 17:43:20.442833    2454 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 17:43:20.442908 kubelet[2454]: I1112 17:43:20.442849    2454 kubelet.go:2321] "Starting kubelet main sync loop"
Nov 12 17:43:20.442908 kubelet[2454]: E1112 17:43:20.442886    2454 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 17:43:20.473803 kubelet[2454]: I1112 17:43:20.473776    2454 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 17:43:20.473925 kubelet[2454]: I1112 17:43:20.473913    2454 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 17:43:20.474000 kubelet[2454]: I1112 17:43:20.473991    2454 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 17:43:20.474250 kubelet[2454]: I1112 17:43:20.474183    2454 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 12 17:43:20.474339 kubelet[2454]: I1112 17:43:20.474313    2454 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 12 17:43:20.474388 kubelet[2454]: I1112 17:43:20.474380    2454 policy_none.go:49] "None policy: Start"
Nov 12 17:43:20.475974 kubelet[2454]: I1112 17:43:20.475944    2454 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 17:43:20.475974 kubelet[2454]: I1112 17:43:20.475969    2454 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 17:43:20.476162 kubelet[2454]: I1112 17:43:20.476139    2454 state_mem.go:75] "Updated machine memory state"
Nov 12 17:43:20.480022 kubelet[2454]: I1112 17:43:20.479994    2454 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 17:43:20.480173 kubelet[2454]: I1112 17:43:20.480147    2454 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 12 17:43:20.480204 kubelet[2454]: I1112 17:43:20.480166    2454 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 12 17:43:20.480757 kubelet[2454]: I1112 17:43:20.480734    2454 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 17:43:20.585487 kubelet[2454]: I1112 17:43:20.585451    2454 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Nov 12 17:43:20.591533 kubelet[2454]: I1112 17:43:20.591497    2454 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Nov 12 17:43:20.591623 kubelet[2454]: I1112 17:43:20.591573    2454 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Nov 12 17:43:20.625727 kubelet[2454]: I1112 17:43:20.625648    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:43:20.625852 kubelet[2454]: I1112 17:43:20.625749    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:43:20.625852 kubelet[2454]: I1112 17:43:20.625775    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33673bc39d15d92b38b41cdd12700fe3-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33673bc39d15d92b38b41cdd12700fe3\") " pod="kube-system/kube-scheduler-localhost"
Nov 12 17:43:20.625852 kubelet[2454]: I1112 17:43:20.625792    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e559769d664910718a4cc241cc8c76c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e559769d664910718a4cc241cc8c76c\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 17:43:20.625852 kubelet[2454]: I1112 17:43:20.625811    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e559769d664910718a4cc241cc8c76c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2e559769d664910718a4cc241cc8c76c\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 17:43:20.625852 kubelet[2454]: I1112 17:43:20.625829    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e559769d664910718a4cc241cc8c76c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2e559769d664910718a4cc241cc8c76c\") " pod="kube-system/kube-apiserver-localhost"
Nov 12 17:43:20.625982 kubelet[2454]: I1112 17:43:20.625848    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:43:20.625982 kubelet[2454]: I1112 17:43:20.625864    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:43:20.625982 kubelet[2454]: I1112 17:43:20.625892    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bd0c21dd05cc63bc1db25732dedb07c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"2bd0c21dd05cc63bc1db25732dedb07c\") " pod="kube-system/kube-controller-manager-localhost"
Nov 12 17:43:20.852393 kubelet[2454]: E1112 17:43:20.852349    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:20.852512 kubelet[2454]: E1112 17:43:20.852349    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:20.852512 kubelet[2454]: E1112 17:43:20.852432    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:21.413376 kubelet[2454]: I1112 17:43:21.413332    2454 apiserver.go:52] "Watching apiserver"
Nov 12 17:43:21.425776 kubelet[2454]: I1112 17:43:21.425743    2454 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 12 17:43:21.457475 kubelet[2454]: E1112 17:43:21.457440    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:21.457520 kubelet[2454]: E1112 17:43:21.457475    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:21.457676 kubelet[2454]: E1112 17:43:21.457647    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:21.474063 kubelet[2454]: I1112 17:43:21.474003    2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.473990755 podStartE2EDuration="1.473990755s" podCreationTimestamp="2024-11-12 17:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:43:21.473628152 +0000 UTC m=+1.141376735" watchObservedRunningTime="2024-11-12 17:43:21.473990755 +0000 UTC m=+1.141739338"
Nov 12 17:43:21.480547 kubelet[2454]: I1112 17:43:21.480484    2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.48047075 podStartE2EDuration="1.48047075s" podCreationTimestamp="2024-11-12 17:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:43:21.480336041 +0000 UTC m=+1.148084624" watchObservedRunningTime="2024-11-12 17:43:21.48047075 +0000 UTC m=+1.148219333"
Nov 12 17:43:21.487523 kubelet[2454]: I1112 17:43:21.487472    2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.487459831 podStartE2EDuration="1.487459831s" podCreationTimestamp="2024-11-12 17:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:43:21.487133989 +0000 UTC m=+1.154882572" watchObservedRunningTime="2024-11-12 17:43:21.487459831 +0000 UTC m=+1.155208414"
Nov 12 17:43:22.458947 kubelet[2454]: E1112 17:43:22.458917    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:25.155487 sudo[1609]: pam_unix(sudo:session): session closed for user root
Nov 12 17:43:25.157174 sshd[1606]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:25.160810 systemd[1]: sshd@6-10.0.0.44:22-10.0.0.1:52938.service: Deactivated successfully.
Nov 12 17:43:25.162561 systemd[1]: session-7.scope: Deactivated successfully.
Nov 12 17:43:25.163764 systemd[1]: session-7.scope: Consumed 5.979s CPU time, 153.6M memory peak, 0B memory swap peak.
Nov 12 17:43:25.164227 systemd-logind[1415]: Session 7 logged out. Waiting for processes to exit.
Nov 12 17:43:25.165190 systemd-logind[1415]: Removed session 7.
Nov 12 17:43:26.450600 kubelet[2454]: E1112 17:43:26.450566    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:26.465944 kubelet[2454]: E1112 17:43:26.465637    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:27.197919 kubelet[2454]: E1112 17:43:27.197887    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:27.468181 kubelet[2454]: E1112 17:43:27.468059    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:27.664849 kubelet[2454]: I1112 17:43:27.664819    2454 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 12 17:43:27.665118 containerd[1432]: time="2024-11-12T17:43:27.665086959Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 12 17:43:27.666021 kubelet[2454]: I1112 17:43:27.665662    2454 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 12 17:43:28.476225 kubelet[2454]: E1112 17:43:28.476181    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:28.481107 systemd[1]: Created slice kubepods-besteffort-podcbd1611d_6a16_4313_a432_1ae34990b19f.slice - libcontainer container kubepods-besteffort-podcbd1611d_6a16_4313_a432_1ae34990b19f.slice.
Nov 12 17:43:28.579024 kubelet[2454]: I1112 17:43:28.578979    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cbd1611d-6a16-4313-a432-1ae34990b19f-kube-proxy\") pod \"kube-proxy-krjjv\" (UID: \"cbd1611d-6a16-4313-a432-1ae34990b19f\") " pod="kube-system/kube-proxy-krjjv"
Nov 12 17:43:28.579024 kubelet[2454]: I1112 17:43:28.579025    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cbd1611d-6a16-4313-a432-1ae34990b19f-xtables-lock\") pod \"kube-proxy-krjjv\" (UID: \"cbd1611d-6a16-4313-a432-1ae34990b19f\") " pod="kube-system/kube-proxy-krjjv"
Nov 12 17:43:28.579194 kubelet[2454]: I1112 17:43:28.579053    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbd1611d-6a16-4313-a432-1ae34990b19f-lib-modules\") pod \"kube-proxy-krjjv\" (UID: \"cbd1611d-6a16-4313-a432-1ae34990b19f\") " pod="kube-system/kube-proxy-krjjv"
Nov 12 17:43:28.579194 kubelet[2454]: I1112 17:43:28.579075    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9mcx\" (UniqueName: \"kubernetes.io/projected/cbd1611d-6a16-4313-a432-1ae34990b19f-kube-api-access-t9mcx\") pod \"kube-proxy-krjjv\" (UID: \"cbd1611d-6a16-4313-a432-1ae34990b19f\") " pod="kube-system/kube-proxy-krjjv"
Nov 12 17:43:28.777790 systemd[1]: Created slice kubepods-besteffort-podb0ad598f_90e7_42cd_b8ae_71c3afd1f2c7.slice - libcontainer container kubepods-besteffort-podb0ad598f_90e7_42cd_b8ae_71c3afd1f2c7.slice.
Nov 12 17:43:28.790624 kubelet[2454]: E1112 17:43:28.790592    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:28.791371 containerd[1432]: time="2024-11-12T17:43:28.791312041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-krjjv,Uid:cbd1611d-6a16-4313-a432-1ae34990b19f,Namespace:kube-system,Attempt:0,}"
Nov 12 17:43:28.810501 containerd[1432]: time="2024-11-12T17:43:28.810349555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:43:28.810729 containerd[1432]: time="2024-11-12T17:43:28.810482334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:43:28.810729 containerd[1432]: time="2024-11-12T17:43:28.810658827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:28.811452 containerd[1432]: time="2024-11-12T17:43:28.811018578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:28.829962 systemd[1]: Started cri-containerd-4a797f8a99a34a54a9cfb3b7c0c3d112c47a447bd54b0a0fe9afed61f9038b43.scope - libcontainer container 4a797f8a99a34a54a9cfb3b7c0c3d112c47a447bd54b0a0fe9afed61f9038b43.
Nov 12 17:43:28.850808 containerd[1432]: time="2024-11-12T17:43:28.850657421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-krjjv,Uid:cbd1611d-6a16-4313-a432-1ae34990b19f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a797f8a99a34a54a9cfb3b7c0c3d112c47a447bd54b0a0fe9afed61f9038b43\""
Nov 12 17:43:28.851399 kubelet[2454]: E1112 17:43:28.851365    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:28.853194 containerd[1432]: time="2024-11-12T17:43:28.853164386Z" level=info msg="CreateContainer within sandbox \"4a797f8a99a34a54a9cfb3b7c0c3d112c47a447bd54b0a0fe9afed61f9038b43\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 12 17:43:28.866379 containerd[1432]: time="2024-11-12T17:43:28.866335289Z" level=info msg="CreateContainer within sandbox \"4a797f8a99a34a54a9cfb3b7c0c3d112c47a447bd54b0a0fe9afed61f9038b43\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0ad7264578fd4bd66caddaaab44f95dc5b64ceb5982f7d70ea1f987bddcfedb5\""
Nov 12 17:43:28.867156 containerd[1432]: time="2024-11-12T17:43:28.867131608Z" level=info msg="StartContainer for \"0ad7264578fd4bd66caddaaab44f95dc5b64ceb5982f7d70ea1f987bddcfedb5\""
Nov 12 17:43:28.880131 kubelet[2454]: I1112 17:43:28.880083    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czx7h\" (UniqueName: \"kubernetes.io/projected/b0ad598f-90e7-42cd-b8ae-71c3afd1f2c7-kube-api-access-czx7h\") pod \"tigera-operator-f8bc97d4c-sfkv6\" (UID: \"b0ad598f-90e7-42cd-b8ae-71c3afd1f2c7\") " pod="tigera-operator/tigera-operator-f8bc97d4c-sfkv6"
Nov 12 17:43:28.880325 kubelet[2454]: I1112 17:43:28.880239    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b0ad598f-90e7-42cd-b8ae-71c3afd1f2c7-var-lib-calico\") pod \"tigera-operator-f8bc97d4c-sfkv6\" (UID: \"b0ad598f-90e7-42cd-b8ae-71c3afd1f2c7\") " pod="tigera-operator/tigera-operator-f8bc97d4c-sfkv6"
Nov 12 17:43:28.897880 systemd[1]: Started cri-containerd-0ad7264578fd4bd66caddaaab44f95dc5b64ceb5982f7d70ea1f987bddcfedb5.scope - libcontainer container 0ad7264578fd4bd66caddaaab44f95dc5b64ceb5982f7d70ea1f987bddcfedb5.
Nov 12 17:43:28.922809 containerd[1432]: time="2024-11-12T17:43:28.922760194Z" level=info msg="StartContainer for \"0ad7264578fd4bd66caddaaab44f95dc5b64ceb5982f7d70ea1f987bddcfedb5\" returns successfully"
Nov 12 17:43:29.081491 containerd[1432]: time="2024-11-12T17:43:29.081361958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-f8bc97d4c-sfkv6,Uid:b0ad598f-90e7-42cd-b8ae-71c3afd1f2c7,Namespace:tigera-operator,Attempt:0,}"
Nov 12 17:43:29.169527 containerd[1432]: time="2024-11-12T17:43:29.168999932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:43:29.169527 containerd[1432]: time="2024-11-12T17:43:29.169050448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:43:29.169527 containerd[1432]: time="2024-11-12T17:43:29.169071863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:29.169527 containerd[1432]: time="2024-11-12T17:43:29.169152481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:29.183867 systemd[1]: Started cri-containerd-1c35e88ee90bd75854cf0a55a6e92014ca556ccd9a5d2bd461f23dd0b45812d6.scope - libcontainer container 1c35e88ee90bd75854cf0a55a6e92014ca556ccd9a5d2bd461f23dd0b45812d6.
Nov 12 17:43:29.210835 containerd[1432]: time="2024-11-12T17:43:29.210793115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-f8bc97d4c-sfkv6,Uid:b0ad598f-90e7-42cd-b8ae-71c3afd1f2c7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1c35e88ee90bd75854cf0a55a6e92014ca556ccd9a5d2bd461f23dd0b45812d6\""
Nov 12 17:43:29.212621 containerd[1432]: time="2024-11-12T17:43:29.212583631Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\""
Nov 12 17:43:29.479952 kubelet[2454]: E1112 17:43:29.479906    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:30.353542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2329082688.mount: Deactivated successfully.
Nov 12 17:43:30.744293 kubelet[2454]: E1112 17:43:30.744046    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:30.764661 kubelet[2454]: I1112 17:43:30.764460    2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-krjjv" podStartSLOduration=2.764444706 podStartE2EDuration="2.764444706s" podCreationTimestamp="2024-11-12 17:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:43:29.491605871 +0000 UTC m=+9.159354454" watchObservedRunningTime="2024-11-12 17:43:30.764444706 +0000 UTC m=+10.432193289"
Nov 12 17:43:31.393919 containerd[1432]: time="2024-11-12T17:43:31.393865323Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:31.395684 containerd[1432]: time="2024-11-12T17:43:31.395647666Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=19123649"
Nov 12 17:43:31.396635 containerd[1432]: time="2024-11-12T17:43:31.396587549Z" level=info msg="ImageCreate event name:\"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:31.399134 containerd[1432]: time="2024-11-12T17:43:31.399086551Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:31.399867 containerd[1432]: time="2024-11-12T17:43:31.399833510Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"19117824\" in 2.187213054s"
Nov 12 17:43:31.399926 containerd[1432]: time="2024-11-12T17:43:31.399867212Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\""
Nov 12 17:43:31.403193 containerd[1432]: time="2024-11-12T17:43:31.403156401Z" level=info msg="CreateContainer within sandbox \"1c35e88ee90bd75854cf0a55a6e92014ca556ccd9a5d2bd461f23dd0b45812d6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 12 17:43:31.416135 containerd[1432]: time="2024-11-12T17:43:31.416050231Z" level=info msg="CreateContainer within sandbox \"1c35e88ee90bd75854cf0a55a6e92014ca556ccd9a5d2bd461f23dd0b45812d6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8ab6e0d11b17abfa1c632426263e4985f343ee5ac357b0308fbb09e4c5059b9d\""
Nov 12 17:43:31.416813 containerd[1432]: time="2024-11-12T17:43:31.416769252Z" level=info msg="StartContainer for \"8ab6e0d11b17abfa1c632426263e4985f343ee5ac357b0308fbb09e4c5059b9d\""
Nov 12 17:43:31.456884 systemd[1]: Started cri-containerd-8ab6e0d11b17abfa1c632426263e4985f343ee5ac357b0308fbb09e4c5059b9d.scope - libcontainer container 8ab6e0d11b17abfa1c632426263e4985f343ee5ac357b0308fbb09e4c5059b9d.
Nov 12 17:43:31.488374 containerd[1432]: time="2024-11-12T17:43:31.488325583Z" level=info msg="StartContainer for \"8ab6e0d11b17abfa1c632426263e4985f343ee5ac357b0308fbb09e4c5059b9d\" returns successfully"
Nov 12 17:43:31.491569 kubelet[2454]: E1112 17:43:31.491004    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:34.125842 update_engine[1421]: I20241112 17:43:34.125766  1421 update_attempter.cc:509] Updating boot flags...
Nov 12 17:43:34.175057 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2853)
Nov 12 17:43:34.225815 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2857)
Nov 12 17:43:35.421578 kubelet[2454]: I1112 17:43:35.421360    2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-f8bc97d4c-sfkv6" podStartSLOduration=5.231390433 podStartE2EDuration="7.42134356s" podCreationTimestamp="2024-11-12 17:43:28 +0000 UTC" firstStartedPulling="2024-11-12 17:43:29.212106611 +0000 UTC m=+8.879855194" lastFinishedPulling="2024-11-12 17:43:31.402059778 +0000 UTC m=+11.069808321" observedRunningTime="2024-11-12 17:43:31.507121997 +0000 UTC m=+11.174870580" watchObservedRunningTime="2024-11-12 17:43:35.42134356 +0000 UTC m=+15.089092143"
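[Editor's note: unlike the kube-proxy entry earlier, this tracker line carries real pull timestamps, and podStartSLOduration is the end-to-end duration with image-pull time subtracted: 7.421 s − (17:43:31.402 − 17:43:29.212 ≈ 2.190 s) ≈ 5.231 s, consistent with the logged value (small residual differences come from the monotonic-clock readings). The same arithmetic in Go, with values from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, _ := time.Parse(time.RFC3339Nano, s)
            return t
        }
        e2e := 7421343560 * time.Nanosecond // podStartE2EDuration from the log
        pull := parse("2024-11-12T17:43:31.402059778Z").
            Sub(parse("2024-11-12T17:43:29.212106611Z"))
        fmt.Println(e2e - pull) // ~5.23139s: SLO duration excludes pull time
    }
]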
Nov 12 17:43:35.432946 systemd[1]: Created slice kubepods-besteffort-pod7bacf3aa_ff12_44a2_9619_b30e7336887e.slice - libcontainer container kubepods-besteffort-pod7bacf3aa_ff12_44a2_9619_b30e7336887e.slice.
Nov 12 17:43:35.474971 systemd[1]: Created slice kubepods-besteffort-pod8c3704ca_6adb_4f76_8dca_a6a1799ff2db.slice - libcontainer container kubepods-besteffort-pod8c3704ca_6adb_4f76_8dca_a6a1799ff2db.slice.
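[Editor's note: with the systemd cgroup driver, kubelet places every pod in a transient slice named after its QoS class and UID, with the UID's dashes swapped for underscores — compare the slice names above with the pod UIDs in the volume lines below. Individual containers then land in cri-containerd-<id>.scope units, as seen elsewhere in this log. A sketch that mirrors the naming pattern (a hypothetical helper, not kubelet's code):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName mirrors the kubepods-<qos>-pod<uid>.slice pattern in the log.
    func sliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice",
            qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("besteffort", "7bacf3aa-ff12-44a2-9619-b30e7336887e"))
        // kubepods-besteffort-pod7bacf3aa_ff12_44a2_9619_b30e7336887e.slice
    }
]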
Nov 12 17:43:35.526277 kubelet[2454]: I1112 17:43:35.526235    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7bacf3aa-ff12-44a2-9619-b30e7336887e-typha-certs\") pod \"calico-typha-7b6b65f998-lgnpk\" (UID: \"7bacf3aa-ff12-44a2-9619-b30e7336887e\") " pod="calico-system/calico-typha-7b6b65f998-lgnpk"
Nov 12 17:43:35.526277 kubelet[2454]: I1112 17:43:35.526280    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zwbq\" (UniqueName: \"kubernetes.io/projected/7bacf3aa-ff12-44a2-9619-b30e7336887e-kube-api-access-7zwbq\") pod \"calico-typha-7b6b65f998-lgnpk\" (UID: \"7bacf3aa-ff12-44a2-9619-b30e7336887e\") " pod="calico-system/calico-typha-7b6b65f998-lgnpk"
Nov 12 17:43:35.526465 kubelet[2454]: I1112 17:43:35.526302    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bacf3aa-ff12-44a2-9619-b30e7336887e-tigera-ca-bundle\") pod \"calico-typha-7b6b65f998-lgnpk\" (UID: \"7bacf3aa-ff12-44a2-9619-b30e7336887e\") " pod="calico-system/calico-typha-7b6b65f998-lgnpk"
Nov 12 17:43:35.591887 kubelet[2454]: E1112 17:43:35.591823    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2dwk" podUID="b411254a-fa39-4c2a-ae0e-e271a38a0ca1"
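[Editor's note: csi-node-driver-g2dwk cannot be synced yet because the runtime reports NetworkReady=false until calico-node installs a CNI config; the cni-bin-dir and cni-net-dir host-path volumes mounted below are where that lands, and this error repeats until then. The condition kubelet checks is exposed by the CRI Status RPC, roughly as in this sketch (same assumed cri-api client setup as earlier):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        resp, err := runtimeapi.NewRuntimeServiceClient(conn).
            Status(context.Background(), &runtimeapi.StatusRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Status.Conditions {
            // Until a CNI config exists, NetworkReady is false with reason
            // NetworkPluginNotReady -- exactly what kubelet reports above.
            fmt.Printf("%s=%v reason=%s\n", c.Type, c.Status, c.Reason)
        }
    }
]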
Nov 12 17:43:35.626537 kubelet[2454]: I1112 17:43:35.626482    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8c3704ca-6adb-4f76-8dca-a6a1799ff2db-var-run-calico\") pod \"calico-node-pbzgc\" (UID: \"8c3704ca-6adb-4f76-8dca-a6a1799ff2db\") " pod="calico-system/calico-node-pbzgc"
Nov 12 17:43:35.626537 kubelet[2454]: I1112 17:43:35.626547    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c3704ca-6adb-4f76-8dca-a6a1799ff2db-xtables-lock\") pod \"calico-node-pbzgc\" (UID: \"8c3704ca-6adb-4f76-8dca-a6a1799ff2db\") " pod="calico-system/calico-node-pbzgc"
Nov 12 17:43:35.626745 kubelet[2454]: I1112 17:43:35.626575    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c3704ca-6adb-4f76-8dca-a6a1799ff2db-lib-modules\") pod \"calico-node-pbzgc\" (UID: \"8c3704ca-6adb-4f76-8dca-a6a1799ff2db\") " pod="calico-system/calico-node-pbzgc"
Nov 12 17:43:35.626745 kubelet[2454]: I1112 17:43:35.626591    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c3704ca-6adb-4f76-8dca-a6a1799ff2db-tigera-ca-bundle\") pod \"calico-node-pbzgc\" (UID: \"8c3704ca-6adb-4f76-8dca-a6a1799ff2db\") " pod="calico-system/calico-node-pbzgc"
Nov 12 17:43:35.626745 kubelet[2454]: I1112 17:43:35.626608    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c3704ca-6adb-4f76-8dca-a6a1799ff2db-var-lib-calico\") pod \"calico-node-pbzgc\" (UID: \"8c3704ca-6adb-4f76-8dca-a6a1799ff2db\") " pod="calico-system/calico-node-pbzgc"
Nov 12 17:43:35.626745 kubelet[2454]: I1112 17:43:35.626624    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8c3704ca-6adb-4f76-8dca-a6a1799ff2db-cni-net-dir\") pod \"calico-node-pbzgc\" (UID: \"8c3704ca-6adb-4f76-8dca-a6a1799ff2db\") " pod="calico-system/calico-node-pbzgc"
Nov 12 17:43:35.626745 kubelet[2454]: I1112 17:43:35.626640    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8c3704ca-6adb-4f76-8dca-a6a1799ff2db-cni-log-dir\") pod \"calico-node-pbzgc\" (UID: \"8c3704ca-6adb-4f76-8dca-a6a1799ff2db\") " pod="calico-system/calico-node-pbzgc"
Nov 12 17:43:35.626866 kubelet[2454]: I1112 17:43:35.626654    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8c3704ca-6adb-4f76-8dca-a6a1799ff2db-flexvol-driver-host\") pod \"calico-node-pbzgc\" (UID: \"8c3704ca-6adb-4f76-8dca-a6a1799ff2db\") " pod="calico-system/calico-node-pbzgc"
Nov 12 17:43:35.626866 kubelet[2454]: I1112 17:43:35.626688    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8c3704ca-6adb-4f76-8dca-a6a1799ff2db-policysync\") pod \"calico-node-pbzgc\" (UID: \"8c3704ca-6adb-4f76-8dca-a6a1799ff2db\") " pod="calico-system/calico-node-pbzgc"
Nov 12 17:43:35.626866 kubelet[2454]: I1112 17:43:35.626703    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8c3704ca-6adb-4f76-8dca-a6a1799ff2db-node-certs\") pod \"calico-node-pbzgc\" (UID: \"8c3704ca-6adb-4f76-8dca-a6a1799ff2db\") " pod="calico-system/calico-node-pbzgc"
Nov 12 17:43:35.626866 kubelet[2454]: I1112 17:43:35.626734    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47gqz\" (UniqueName: \"kubernetes.io/projected/8c3704ca-6adb-4f76-8dca-a6a1799ff2db-kube-api-access-47gqz\") pod \"calico-node-pbzgc\" (UID: \"8c3704ca-6adb-4f76-8dca-a6a1799ff2db\") " pod="calico-system/calico-node-pbzgc"
Nov 12 17:43:35.626866 kubelet[2454]: I1112 17:43:35.626752    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8c3704ca-6adb-4f76-8dca-a6a1799ff2db-cni-bin-dir\") pod \"calico-node-pbzgc\" (UID: \"8c3704ca-6adb-4f76-8dca-a6a1799ff2db\") " pod="calico-system/calico-node-pbzgc"
Nov 12 17:43:35.727384 kubelet[2454]: I1112 17:43:35.727195    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b411254a-fa39-4c2a-ae0e-e271a38a0ca1-socket-dir\") pod \"csi-node-driver-g2dwk\" (UID: \"b411254a-fa39-4c2a-ae0e-e271a38a0ca1\") " pod="calico-system/csi-node-driver-g2dwk"
Nov 12 17:43:35.727384 kubelet[2454]: I1112 17:43:35.727255    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b411254a-fa39-4c2a-ae0e-e271a38a0ca1-kubelet-dir\") pod \"csi-node-driver-g2dwk\" (UID: \"b411254a-fa39-4c2a-ae0e-e271a38a0ca1\") " pod="calico-system/csi-node-driver-g2dwk"
Nov 12 17:43:35.727384 kubelet[2454]: I1112 17:43:35.727294    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b411254a-fa39-4c2a-ae0e-e271a38a0ca1-varrun\") pod \"csi-node-driver-g2dwk\" (UID: \"b411254a-fa39-4c2a-ae0e-e271a38a0ca1\") " pod="calico-system/csi-node-driver-g2dwk"
Nov 12 17:43:35.727384 kubelet[2454]: I1112 17:43:35.727328    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b411254a-fa39-4c2a-ae0e-e271a38a0ca1-registration-dir\") pod \"csi-node-driver-g2dwk\" (UID: \"b411254a-fa39-4c2a-ae0e-e271a38a0ca1\") " pod="calico-system/csi-node-driver-g2dwk"
Nov 12 17:43:35.727384 kubelet[2454]: I1112 17:43:35.727381    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v47zq\" (UniqueName: \"kubernetes.io/projected/b411254a-fa39-4c2a-ae0e-e271a38a0ca1-kube-api-access-v47zq\") pod \"csi-node-driver-g2dwk\" (UID: \"b411254a-fa39-4c2a-ae0e-e271a38a0ca1\") " pod="calico-system/csi-node-driver-g2dwk"
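[Editor's note: the reconciler lines above span four volume plugins: secret (typha-certs, node-certs), configmap (tigera-ca-bundle), projected (the kube-api-access-* service-account tokens), and host-path (everything calico-node and the CSI driver need from the host). As pod-spec fields those look roughly like this sketch using k8s.io/api types — volume names are taken from the log, but the host path shown is the conventional Calico location, an assumption since the log does not print paths:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        volumes := []corev1.Volume{
            {Name: "typha-certs", VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{SecretName: "typha-certs"}}},
            {Name: "tigera-ca-bundle", VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "tigera-ca-bundle"}}}},
            {Name: "cni-bin-dir", VolumeSource: corev1.VolumeSource{
                // Assumed path: typical for Calico, not shown in this log.
                HostPath: &corev1.HostPathVolumeSource{Path: "/opt/cni/bin"}}},
        }
        for _, v := range volumes {
            fmt.Println(v.Name)
        }
    }
]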
Nov 12 17:43:35.737804 kubelet[2454]: E1112 17:43:35.736957    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:35.739064 containerd[1432]: time="2024-11-12T17:43:35.738181842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b6b65f998-lgnpk,Uid:7bacf3aa-ff12-44a2-9619-b30e7336887e,Namespace:calico-system,Attempt:0,}"
Nov 12 17:43:35.745523 kubelet[2454]: E1112 17:43:35.745479    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.745523 kubelet[2454]: W1112 17:43:35.745526    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.745940 kubelet[2454]: E1112 17:43:35.745571    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.745940 kubelet[2454]: E1112 17:43:35.745861    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.745940 kubelet[2454]: W1112 17:43:35.745904    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.745940 kubelet[2454]: E1112 17:43:35.745915    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
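[Editor's note: the driver-call failures above (and the long runs of them later in this log) occur because Calico's FlexVolume driver binary, uds, is not yet installed under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ — the exec fails, stdout is empty, and kubelet's JSON unmarshal of "" fails with "unexpected end of JSON input". The calico-node pod's flexvol-driver-host mount is what eventually installs it. The FlexVolume protocol only expects a JSON status on stdout; a minimal driver answering the init call looks roughly like this (a sketch, not Calico's actual uds binary):

    package main

    import (
        "encoding/json"
        "os"
    )

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // kubelet unmarshals this JSON; an empty stdout is exactly the
            // "unexpected end of JSON input" failure seen in the log.
            json.NewEncoder(os.Stdout).Encode(map[string]any{
                "status":       "Success",
                "capabilities": map[string]bool{"attach": false},
            })
            return
        }
        // Any other call: report it as unsupported, per the FlexVolume spec.
        json.NewEncoder(os.Stdout).Encode(map[string]string{"status": "Not supported"})
        os.Exit(1)
    }
]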
Nov 12 17:43:35.762728 containerd[1432]: time="2024-11-12T17:43:35.759735087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:43:35.762728 containerd[1432]: time="2024-11-12T17:43:35.759802922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:43:35.762728 containerd[1432]: time="2024-11-12T17:43:35.759814568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:35.762728 containerd[1432]: time="2024-11-12T17:43:35.759902054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:35.778116 kubelet[2454]: E1112 17:43:35.778072    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:35.779348 containerd[1432]: time="2024-11-12T17:43:35.779303972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pbzgc,Uid:8c3704ca-6adb-4f76-8dca-a6a1799ff2db,Namespace:calico-system,Attempt:0,}"
Nov 12 17:43:35.779953 systemd[1]: Started cri-containerd-1b5350790a38f997c906f03d7f368c9d3dfd40d3dc57d67ad8cb67c88e494c01.scope - libcontainer container 1b5350790a38f997c906f03d7f368c9d3dfd40d3dc57d67ad8cb67c88e494c01.
Nov 12 17:43:35.818456 containerd[1432]: time="2024-11-12T17:43:35.818205419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:43:35.818456 containerd[1432]: time="2024-11-12T17:43:35.818339609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:43:35.818456 containerd[1432]: time="2024-11-12T17:43:35.818393758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:35.818643 containerd[1432]: time="2024-11-12T17:43:35.818552321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b6b65f998-lgnpk,Uid:7bacf3aa-ff12-44a2-9619-b30e7336887e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1b5350790a38f997c906f03d7f368c9d3dfd40d3dc57d67ad8cb67c88e494c01\""
Nov 12 17:43:35.820528 containerd[1432]: time="2024-11-12T17:43:35.819972985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:35.823166 kubelet[2454]: E1112 17:43:35.822939    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:35.829119 kubelet[2454]: E1112 17:43:35.828331    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.829119 kubelet[2454]: W1112 17:43:35.828350    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.829119 kubelet[2454]: E1112 17:43:35.828366    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.829119 kubelet[2454]: E1112 17:43:35.828594    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.829119 kubelet[2454]: W1112 17:43:35.828603    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.829119 kubelet[2454]: E1112 17:43:35.828616    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.829119 kubelet[2454]: E1112 17:43:35.828781    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.829119 kubelet[2454]: W1112 17:43:35.828790    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.829119 kubelet[2454]: E1112 17:43:35.828803    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.829119 kubelet[2454]: E1112 17:43:35.829013    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.829376 kubelet[2454]: W1112 17:43:35.829023    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.829376 kubelet[2454]: E1112 17:43:35.829036    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.829376 kubelet[2454]: E1112 17:43:35.829368    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.829442 kubelet[2454]: W1112 17:43:35.829378    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.829442 kubelet[2454]: E1112 17:43:35.829425    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.830029 kubelet[2454]: E1112 17:43:35.829987    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.830029 kubelet[2454]: W1112 17:43:35.830018    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.831223 kubelet[2454]: E1112 17:43:35.830069    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.831223 kubelet[2454]: E1112 17:43:35.830266    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.831223 kubelet[2454]: W1112 17:43:35.830275    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.831223 kubelet[2454]: E1112 17:43:35.830301    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.831223 kubelet[2454]: E1112 17:43:35.830483    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.831223 kubelet[2454]: W1112 17:43:35.830547    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.831223 kubelet[2454]: E1112 17:43:35.830582    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.831223 kubelet[2454]: E1112 17:43:35.831128    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.831223 kubelet[2454]: W1112 17:43:35.831143    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.831223 kubelet[2454]: E1112 17:43:35.831225    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.831484 kubelet[2454]: E1112 17:43:35.831349    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.831484 kubelet[2454]: W1112 17:43:35.831362    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.831484 kubelet[2454]: E1112 17:43:35.831387    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.831908 kubelet[2454]: E1112 17:43:35.831575    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.831908 kubelet[2454]: W1112 17:43:35.831589    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.831908 kubelet[2454]: E1112 17:43:35.831678    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.832013 kubelet[2454]: E1112 17:43:35.831912    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.832013 kubelet[2454]: W1112 17:43:35.831921    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.832299 kubelet[2454]: E1112 17:43:35.832085    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.832299 kubelet[2454]: W1112 17:43:35.832097    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.832299 kubelet[2454]: E1112 17:43:35.832253    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.832299 kubelet[2454]: W1112 17:43:35.832260    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.832299 kubelet[2454]: E1112 17:43:35.832270    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.832807 kubelet[2454]: E1112 17:43:35.832648    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.832807 kubelet[2454]: W1112 17:43:35.832659    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.832807 kubelet[2454]: E1112 17:43:35.832663    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.832807 kubelet[2454]: E1112 17:43:35.832669    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.832807 kubelet[2454]: E1112 17:43:35.832730    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.832915 kubelet[2454]: E1112 17:43:35.832861    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.832915 kubelet[2454]: W1112 17:43:35.832870    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.832915 kubelet[2454]: E1112 17:43:35.832887    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.833475 kubelet[2454]: E1112 17:43:35.833254    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.833475 kubelet[2454]: W1112 17:43:35.833276    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.833475 kubelet[2454]: E1112 17:43:35.833324    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.834805 kubelet[2454]: E1112 17:43:35.834593    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.834805 kubelet[2454]: W1112 17:43:35.834610    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.834805 kubelet[2454]: E1112 17:43:35.834646    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.834905 kubelet[2454]: E1112 17:43:35.834869    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.834905 kubelet[2454]: W1112 17:43:35.834878    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.835261 kubelet[2454]: E1112 17:43:35.835032    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.835261 kubelet[2454]: E1112 17:43:35.835163    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.835261 kubelet[2454]: W1112 17:43:35.835174    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.835261 kubelet[2454]: E1112 17:43:35.835230    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.836795 kubelet[2454]: E1112 17:43:35.835362    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.836795 kubelet[2454]: W1112 17:43:35.835371    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.836795 kubelet[2454]: E1112 17:43:35.835451    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.836795 kubelet[2454]: E1112 17:43:35.835603    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.836795 kubelet[2454]: W1112 17:43:35.835612    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.836795 kubelet[2454]: E1112 17:43:35.835793    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.836795 kubelet[2454]: W1112 17:43:35.835800    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.836795 kubelet[2454]: E1112 17:43:35.835810    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.836795 kubelet[2454]: E1112 17:43:35.835954    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.836795 kubelet[2454]: W1112 17:43:35.835962    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.837005 kubelet[2454]: E1112 17:43:35.835969    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.837005 kubelet[2454]: E1112 17:43:35.836134    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.838400 kubelet[2454]: E1112 17:43:35.838217    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.838400 kubelet[2454]: W1112 17:43:35.838397    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.838500 kubelet[2454]: E1112 17:43:35.838449    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.841101 containerd[1432]: time="2024-11-12T17:43:35.841049579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\""
Nov 12 17:43:35.846795 systemd[1]: Started cri-containerd-a3d4c342e684ab132fd91909519364f97411f9bf251f3f8fbb95c1b307fe8729.scope - libcontainer container a3d4c342e684ab132fd91909519364f97411f9bf251f3f8fbb95c1b307fe8729.
Nov 12 17:43:35.855728 kubelet[2454]: E1112 17:43:35.855682    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:35.855840 kubelet[2454]: W1112 17:43:35.855705    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:35.855840 kubelet[2454]: E1112 17:43:35.855755    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:35.866351 containerd[1432]: time="2024-11-12T17:43:35.866309124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pbzgc,Uid:8c3704ca-6adb-4f76-8dca-a6a1799ff2db,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3d4c342e684ab132fd91909519364f97411f9bf251f3f8fbb95c1b307fe8729\""
Nov 12 17:43:35.867062 kubelet[2454]: E1112 17:43:35.867037    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:37.446660 kubelet[2454]: E1112 17:43:37.446470    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2dwk" podUID="b411254a-fa39-4c2a-ae0e-e271a38a0ca1"
Nov 12 17:43:38.170348 containerd[1432]: time="2024-11-12T17:43:38.170305082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:38.171422 containerd[1432]: time="2024-11-12T17:43:38.170956737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=27849584"
Nov 12 17:43:38.171682 containerd[1432]: time="2024-11-12T17:43:38.171645489Z" level=info msg="ImageCreate event name:\"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:38.173555 containerd[1432]: time="2024-11-12T17:43:38.173524141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:38.174833 containerd[1432]: time="2024-11-12T17:43:38.174801680Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"29219212\" in 2.333696351s"
Nov 12 17:43:38.174894 containerd[1432]: time="2024-11-12T17:43:38.174844299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\""
Nov 12 17:43:38.175578 containerd[1432]: time="2024-11-12T17:43:38.175500916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\""
Nov 12 17:43:38.188567 containerd[1432]: time="2024-11-12T17:43:38.188536864Z" level=info msg="CreateContainer within sandbox \"1b5350790a38f997c906f03d7f368c9d3dfd40d3dc57d67ad8cb67c88e494c01\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 12 17:43:38.199669 containerd[1432]: time="2024-11-12T17:43:38.199559939Z" level=info msg="CreateContainer within sandbox \"1b5350790a38f997c906f03d7f368c9d3dfd40d3dc57d67ad8cb67c88e494c01\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5989160c88c2057e3eeb01a55e22ada729bd37d421aa607154fc8e4cffa83a48\""
Nov 12 17:43:38.200230 containerd[1432]: time="2024-11-12T17:43:38.200203911Z" level=info msg="StartContainer for \"5989160c88c2057e3eeb01a55e22ada729bd37d421aa607154fc8e4cffa83a48\""
Nov 12 17:43:38.225880 systemd[1]: Started cri-containerd-5989160c88c2057e3eeb01a55e22ada729bd37d421aa607154fc8e4cffa83a48.scope - libcontainer container 5989160c88c2057e3eeb01a55e22ada729bd37d421aa607154fc8e4cffa83a48.
Nov 12 17:43:38.256674 containerd[1432]: time="2024-11-12T17:43:38.256632442Z" level=info msg="StartContainer for \"5989160c88c2057e3eeb01a55e22ada729bd37d421aa607154fc8e4cffa83a48\" returns successfully"
Nov 12 17:43:38.519138 kubelet[2454]: E1112 17:43:38.519032    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:38.546097 kubelet[2454]: I1112 17:43:38.545669    2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b6b65f998-lgnpk" podStartSLOduration=1.199767843 podStartE2EDuration="3.545652654s" podCreationTimestamp="2024-11-12 17:43:35 +0000 UTC" firstStartedPulling="2024-11-12 17:43:35.829501894 +0000 UTC m=+15.497250477" lastFinishedPulling="2024-11-12 17:43:38.175386705 +0000 UTC m=+17.843135288" observedRunningTime="2024-11-12 17:43:38.540931995 +0000 UTC m=+18.208680578" watchObservedRunningTime="2024-11-12 17:43:38.545652654 +0000 UTC m=+18.213401237"
Nov 12 17:43:38.548811 kubelet[2454]: E1112 17:43:38.548672    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.548811 kubelet[2454]: W1112 17:43:38.548694    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.548811 kubelet[2454]: E1112 17:43:38.548727    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.549033 kubelet[2454]: E1112 17:43:38.549018    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.549103 kubelet[2454]: W1112 17:43:38.549087    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.549164 kubelet[2454]: E1112 17:43:38.549152    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.549416 kubelet[2454]: E1112 17:43:38.549400    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.549501 kubelet[2454]: W1112 17:43:38.549487    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.549650 kubelet[2454]: E1112 17:43:38.549551    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.549780 kubelet[2454]: E1112 17:43:38.549766    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.549860 kubelet[2454]: W1112 17:43:38.549838    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.549919 kubelet[2454]: E1112 17:43:38.549908    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.550195 kubelet[2454]: E1112 17:43:38.550177    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.550373 kubelet[2454]: W1112 17:43:38.550270    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.550373 kubelet[2454]: E1112 17:43:38.550288    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.550520 kubelet[2454]: E1112 17:43:38.550507    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.550579 kubelet[2454]: W1112 17:43:38.550568    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.550649 kubelet[2454]: E1112 17:43:38.550635    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.550883 kubelet[2454]: E1112 17:43:38.550867    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.550977 kubelet[2454]: W1112 17:43:38.550961    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.551142 kubelet[2454]: E1112 17:43:38.551047    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.551270 kubelet[2454]: E1112 17:43:38.551255    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.551334 kubelet[2454]: W1112 17:43:38.551322    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.551391 kubelet[2454]: E1112 17:43:38.551381    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.551668 kubelet[2454]: E1112 17:43:38.551652    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.551889 kubelet[2454]: W1112 17:43:38.551778    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.551889 kubelet[2454]: E1112 17:43:38.551797    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.552038 kubelet[2454]: E1112 17:43:38.552025    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.552094 kubelet[2454]: W1112 17:43:38.552083    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.552159 kubelet[2454]: E1112 17:43:38.552147    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.552431 kubelet[2454]: E1112 17:43:38.552395    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.552592 kubelet[2454]: W1112 17:43:38.552493    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.552592 kubelet[2454]: E1112 17:43:38.552510    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.552767 kubelet[2454]: E1112 17:43:38.552745    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.552840 kubelet[2454]: W1112 17:43:38.552826    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.553014 kubelet[2454]: E1112 17:43:38.552910    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.553139 kubelet[2454]: E1112 17:43:38.553125    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.553194 kubelet[2454]: W1112 17:43:38.553182    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.553263 kubelet[2454]: E1112 17:43:38.553251    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.553568 kubelet[2454]: E1112 17:43:38.553476    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.553568 kubelet[2454]: W1112 17:43:38.553489    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.553568 kubelet[2454]: E1112 17:43:38.553499    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.553755 kubelet[2454]: E1112 17:43:38.553741    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.553824 kubelet[2454]: W1112 17:43:38.553811    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.553934 kubelet[2454]: E1112 17:43:38.553880    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.649904 kubelet[2454]: E1112 17:43:38.649800    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.649904 kubelet[2454]: W1112 17:43:38.649826    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.649904 kubelet[2454]: E1112 17:43:38.649845    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.650127 kubelet[2454]: E1112 17:43:38.650105    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.650127 kubelet[2454]: W1112 17:43:38.650121    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.650190 kubelet[2454]: E1112 17:43:38.650138    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.650353 kubelet[2454]: E1112 17:43:38.650330    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.650353 kubelet[2454]: W1112 17:43:38.650345    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.650424 kubelet[2454]: E1112 17:43:38.650359    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.650599 kubelet[2454]: E1112 17:43:38.650571    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.650599 kubelet[2454]: W1112 17:43:38.650582    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.650599 kubelet[2454]: E1112 17:43:38.650597    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.650779 kubelet[2454]: E1112 17:43:38.650767    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.650779 kubelet[2454]: W1112 17:43:38.650777    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.650848 kubelet[2454]: E1112 17:43:38.650791    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.650954 kubelet[2454]: E1112 17:43:38.650942    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.650954 kubelet[2454]: W1112 17:43:38.650952    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.651015 kubelet[2454]: E1112 17:43:38.650967    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.651396 kubelet[2454]: E1112 17:43:38.651337    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.651396 kubelet[2454]: W1112 17:43:38.651356    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.652218 kubelet[2454]: E1112 17:43:38.652097    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.652462 kubelet[2454]: E1112 17:43:38.652346    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.652462 kubelet[2454]: W1112 17:43:38.652363    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.652462 kubelet[2454]: E1112 17:43:38.652376    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.652663 kubelet[2454]: E1112 17:43:38.652648    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.652745 kubelet[2454]: W1112 17:43:38.652706    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.652935 kubelet[2454]: E1112 17:43:38.652808    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.653261 kubelet[2454]: E1112 17:43:38.653041    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.653261 kubelet[2454]: W1112 17:43:38.653055    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.653261 kubelet[2454]: E1112 17:43:38.653066    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.653528 kubelet[2454]: E1112 17:43:38.653510    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.653592 kubelet[2454]: W1112 17:43:38.653580    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.653701 kubelet[2454]: E1112 17:43:38.653687    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.659763 kubelet[2454]: E1112 17:43:38.659704    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.659763 kubelet[2454]: W1112 17:43:38.659756    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.659877 kubelet[2454]: E1112 17:43:38.659776    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.660374 kubelet[2454]: E1112 17:43:38.660007    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.660374 kubelet[2454]: W1112 17:43:38.660021    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.660374 kubelet[2454]: E1112 17:43:38.660033    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.661373 kubelet[2454]: E1112 17:43:38.661340    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.661373 kubelet[2454]: W1112 17:43:38.661366    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.661454 kubelet[2454]: E1112 17:43:38.661393    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.661682 kubelet[2454]: E1112 17:43:38.661664    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.661682 kubelet[2454]: W1112 17:43:38.661678    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.661772 kubelet[2454]: E1112 17:43:38.661751    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.661881 kubelet[2454]: E1112 17:43:38.661867    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.661910 kubelet[2454]: W1112 17:43:38.661882    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.662048 kubelet[2454]: E1112 17:43:38.661983    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.662099 kubelet[2454]: E1112 17:43:38.662085    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.662099 kubelet[2454]: W1112 17:43:38.662098    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.662196 kubelet[2454]: E1112 17:43:38.662116    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:38.662310 kubelet[2454]: E1112 17:43:38.662296    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:38.662310 kubelet[2454]: W1112 17:43:38.662308    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:38.662352 kubelet[2454]: E1112 17:43:38.662317    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
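The three-line pattern above (driver-call.go, then FlexVolume warning, then plugins.go) repeats because the kubelet's FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and tries to unmarshal a JSON status object from its stdout; the binary is absent, so the call yields empty output and the JSON decoder fails with "unexpected end of JSON input". A minimal sketch of a driver stub that would satisfy the init call follows — the JSON field names follow the commonly documented FlexVolume convention and should be treated as an assumption, not a verified spec:

```go
// flexvol-init-sketch: minimal FlexVolume driver stub answering the kubelet's
// "init" call with the JSON it tries to unmarshal. Field names follow the
// usual FlexVolume convention and are an assumption in this sketch.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Empty stdout is exactly what produces "unexpected end of JSON input"
		// in driver-call.go; always emit a JSON object instead.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
		os.Exit(1)
	}
}
```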
Nov 12 17:43:39.443333 kubelet[2454]: E1112 17:43:39.443286    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2dwk" podUID="b411254a-fa39-4c2a-ae0e-e271a38a0ca1"
Nov 12 17:43:39.452025 containerd[1432]: time="2024-11-12T17:43:39.451688526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:39.452470 containerd[1432]: time="2024-11-12T17:43:39.452435809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5117816"
Nov 12 17:43:39.453053 containerd[1432]: time="2024-11-12T17:43:39.453001254Z" level=info msg="ImageCreate event name:\"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:39.455024 containerd[1432]: time="2024-11-12T17:43:39.454990514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:39.455902 containerd[1432]: time="2024-11-12T17:43:39.455866973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6487412\" in 1.280335803s"
Nov 12 17:43:39.455951 containerd[1432]: time="2024-11-12T17:43:39.455903549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\""
Nov 12 17:43:39.457668 containerd[1432]: time="2024-11-12T17:43:39.457576593Z" level=info msg="CreateContainer within sandbox \"a3d4c342e684ab132fd91909519364f97411f9bf251f3f8fbb95c1b307fe8729\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 12 17:43:39.467730 containerd[1432]: time="2024-11-12T17:43:39.467686846Z" level=info msg="CreateContainer within sandbox \"a3d4c342e684ab132fd91909519364f97411f9bf251f3f8fbb95c1b307fe8729\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"57a02cf74d6c74ee539be8d64f9fe9fbaf5071a1aabf8d88898ef48531c1f63f\""
Nov 12 17:43:39.469711 containerd[1432]: time="2024-11-12T17:43:39.468303433Z" level=info msg="StartContainer for \"57a02cf74d6c74ee539be8d64f9fe9fbaf5071a1aabf8d88898ef48531c1f63f\""
Nov 12 17:43:39.498890 systemd[1]: Started cri-containerd-57a02cf74d6c74ee539be8d64f9fe9fbaf5071a1aabf8d88898ef48531c1f63f.scope - libcontainer container 57a02cf74d6c74ee539be8d64f9fe9fbaf5071a1aabf8d88898ef48531c1f63f.
Nov 12 17:43:39.523379 kubelet[2454]: I1112 17:43:39.522932    2454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 17:43:39.523379 kubelet[2454]: E1112 17:43:39.523249    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:39.527708 containerd[1432]: time="2024-11-12T17:43:39.527607045Z" level=info msg="StartContainer for \"57a02cf74d6c74ee539be8d64f9fe9fbaf5071a1aabf8d88898ef48531c1f63f\" returns successfully"
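The lines above show the happy path for the flexvol-driver init container: containerd pulls ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0 by tag, records the repo digest it resolved, creates the container inside the existing sandbox, and starts the task, which systemd tracks as a cri-containerd-<id>.scope unit. A rough sketch of the same pull → create → start sequence against containerd's Go client; the socket path, the "k8s.io" namespace, and the container/snapshot IDs are assumptions for illustration, and the real flow goes through the CRI plugin rather than this client:

```go
// Sketch of the pull -> create -> start sequence logged above, using the
// containerd Go client directly instead of the CRI plugin.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	// CRI-managed resources live in the "k8s.io" namespace, as in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull by tag and unpack; containerd records the digest it resolved,
	// which is what the "Pulled image ... repo digest ..." line reports.
	image, err := client.Pull(ctx,
		"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer + StartContainer from the log, minus sandbox plumbing.
	container, err := client.NewContainer(ctx, "flexvol-driver-demo",
		containerd.WithNewSnapshot("flexvol-driver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	exitCh, err := task.Wait(ctx) // subscribe before Start to avoid a race
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	status := <-exitCh // the init container exits quickly, as in the log
	code, _, _ := status.Result()
	log.Printf("flexvol-driver exited with status %d", code)
}
```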
Nov 12 17:43:39.568220 kubelet[2454]: E1112 17:43:39.568147    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:39.568220 kubelet[2454]: W1112 17:43:39.568171    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:39.568220 kubelet[2454]: E1112 17:43:39.568189    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:39.568867 kubelet[2454]: E1112 17:43:39.568676    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:39.568867 kubelet[2454]: W1112 17:43:39.568690    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:39.568867 kubelet[2454]: E1112 17:43:39.568706    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:39.569263 kubelet[2454]: E1112 17:43:39.569137    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:39.569263 kubelet[2454]: W1112 17:43:39.569152    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:39.569263 kubelet[2454]: E1112 17:43:39.569164    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:39.569556 kubelet[2454]: E1112 17:43:39.569438    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:39.569556 kubelet[2454]: W1112 17:43:39.569450    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:39.569556 kubelet[2454]: E1112 17:43:39.569462    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:39.569815 kubelet[2454]: E1112 17:43:39.569777    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:39.569815 kubelet[2454]: W1112 17:43:39.569789    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:39.569815 kubelet[2454]: E1112 17:43:39.569799    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:39.570307 kubelet[2454]: E1112 17:43:39.570188    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:39.570307 kubelet[2454]: W1112 17:43:39.570203    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:39.570307 kubelet[2454]: E1112 17:43:39.570214    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:39.570526 kubelet[2454]: E1112 17:43:39.570455    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:39.570526 kubelet[2454]: W1112 17:43:39.570464    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:39.570526 kubelet[2454]: E1112 17:43:39.570474    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:39.570914 kubelet[2454]: E1112 17:43:39.570795    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:39.570914 kubelet[2454]: W1112 17:43:39.570811    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:39.570914 kubelet[2454]: E1112 17:43:39.570822    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:39.571703 kubelet[2454]: E1112 17:43:39.571376    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:39.571889 kubelet[2454]: W1112 17:43:39.571815    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:39.571889 kubelet[2454]: E1112 17:43:39.571835    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:39.572222 kubelet[2454]: E1112 17:43:39.572125    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:39.572222 kubelet[2454]: W1112 17:43:39.572138    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:39.572222 kubelet[2454]: E1112 17:43:39.572149    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:39.572562 kubelet[2454]: E1112 17:43:39.572453    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:39.572562 kubelet[2454]: W1112 17:43:39.572467    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:39.572562 kubelet[2454]: E1112 17:43:39.572479    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:43:39.573152 kubelet[2454]: E1112 17:43:39.573117    2454 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:43:39.573152 kubelet[2454]: W1112 17:43:39.573131    2454 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:43:39.573327 kubelet[2454]: E1112 17:43:39.573248    2454 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
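The same failing triplet fires again in a burst here, seconds after the flexvol-driver container started. That is consistent with the kubelet's dynamic plugin prober watching the FlexVolume plugin directory for filesystem events and re-probing every entry on each event: the flexvol-driver container is installing files under /opt/libexec/kubernetes/kubelet-plugins/volume/exec at this moment, so each write triggers another scan and another failing init against the broken nodeagent~uds entry. A toy sketch of that watch-then-reprobe pattern using fsnotify — an illustration of the mechanism, not the kubelet's actual code:

```go
// Toy illustration of a directory-watch-driven re-probe loop, assuming the
// kubelet-style behavior described above.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	const pluginDir = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec"
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add(pluginDir); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// Any create/write/rename in the plugin dir triggers a full
			// re-probe, which re-execs every driver's "init" -- hence the
			// repeated error triplets while files are being installed.
			log.Printf("event %s -> re-probe all drivers", ev)
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
```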
Nov 12 17:43:39.578175 systemd[1]: cri-containerd-57a02cf74d6c74ee539be8d64f9fe9fbaf5071a1aabf8d88898ef48531c1f63f.scope: Deactivated successfully.
Nov 12 17:43:39.671980 containerd[1432]: time="2024-11-12T17:43:39.667966079Z" level=info msg="shim disconnected" id=57a02cf74d6c74ee539be8d64f9fe9fbaf5071a1aabf8d88898ef48531c1f63f namespace=k8s.io
Nov 12 17:43:39.671980 containerd[1432]: time="2024-11-12T17:43:39.671804900Z" level=warning msg="cleaning up after shim disconnected" id=57a02cf74d6c74ee539be8d64f9fe9fbaf5071a1aabf8d88898ef48531c1f63f namespace=k8s.io
Nov 12 17:43:39.671980 containerd[1432]: time="2024-11-12T17:43:39.671819026Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:43:40.191938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57a02cf74d6c74ee539be8d64f9fe9fbaf5071a1aabf8d88898ef48531c1f63f-rootfs.mount: Deactivated successfully.
Nov 12 17:43:40.527350 kubelet[2454]: E1112 17:43:40.527118    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
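The recurring dns.go:153 warning means the node's resolv.conf lists more nameservers than the kubelet will pass through to pods; the surviving line here is "1.1.1.1 1.0.0.1 8.8.8.8", so at least a fourth entry was dropped. The limit of three matches the classic glibc MAXNS value, and treating it as the limit this kubelet build enforces is an assumption. A small sketch of that truncation rule:

```go
// Sketch of the nameserver-limit behavior behind the dns.go warning above,
// assuming the classic limit of three resolvers.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic glibc MAXNS; entries past this are omitted

func applyNameserverLimit(resolvConf string) (kept, dropped []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return kept, dropped
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, dropped := applyNameserverLimit(conf)
	fmt.Printf("applied nameserver line: %s (omitted: %v)\n",
		strings.Join(kept, " "), dropped)
}
```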
Nov 12 17:43:40.529038 containerd[1432]: time="2024-11-12T17:43:40.528035490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\""
Nov 12 17:43:41.443803 kubelet[2454]: E1112 17:43:41.443729    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2dwk" podUID="b411254a-fa39-4c2a-ae0e-e271a38a0ca1"
Nov 12 17:43:43.444302 kubelet[2454]: E1112 17:43:43.444255    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g2dwk" podUID="b411254a-fa39-4c2a-ae0e-e271a38a0ca1"
Nov 12 17:43:44.196502 containerd[1432]: time="2024-11-12T17:43:44.196452465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:44.197123 containerd[1432]: time="2024-11-12T17:43:44.197044350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=89700517"
Nov 12 17:43:44.197564 containerd[1432]: time="2024-11-12T17:43:44.197540483Z" level=info msg="ImageCreate event name:\"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:44.199745 containerd[1432]: time="2024-11-12T17:43:44.199671743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:44.201109 containerd[1432]: time="2024-11-12T17:43:44.201061546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"91070153\" in 3.672968392s"
Nov 12 17:43:44.201158 containerd[1432]: time="2024-11-12T17:43:44.201106001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\""
Nov 12 17:43:44.202911 containerd[1432]: time="2024-11-12T17:43:44.202859050Z" level=info msg="CreateContainer within sandbox \"a3d4c342e684ab132fd91909519364f97411f9bf251f3f8fbb95c1b307fe8729\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 12 17:43:44.211590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3647613030.mount: Deactivated successfully.
Nov 12 17:43:44.215123 containerd[1432]: time="2024-11-12T17:43:44.215071932Z" level=info msg="CreateContainer within sandbox \"a3d4c342e684ab132fd91909519364f97411f9bf251f3f8fbb95c1b307fe8729\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6b0b3721e27464f315f12e947636aa1fcd4d5be792c1299ea5ddd0440a82d377\""
Nov 12 17:43:44.216169 containerd[1432]: time="2024-11-12T17:43:44.215506483Z" level=info msg="StartContainer for \"6b0b3721e27464f315f12e947636aa1fcd4d5be792c1299ea5ddd0440a82d377\""
Nov 12 17:43:44.253875 systemd[1]: Started cri-containerd-6b0b3721e27464f315f12e947636aa1fcd4d5be792c1299ea5ddd0440a82d377.scope - libcontainer container 6b0b3721e27464f315f12e947636aa1fcd4d5be792c1299ea5ddd0440a82d377.
Nov 12 17:43:44.278352 containerd[1432]: time="2024-11-12T17:43:44.278310137Z" level=info msg="StartContainer for \"6b0b3721e27464f315f12e947636aa1fcd4d5be792c1299ea5ddd0440a82d377\" returns successfully"
Nov 12 17:43:44.539460 kubelet[2454]: E1112 17:43:44.539345    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:44.845333 systemd[1]: cri-containerd-6b0b3721e27464f315f12e947636aa1fcd4d5be792c1299ea5ddd0440a82d377.scope: Deactivated successfully.
Nov 12 17:43:44.862065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b0b3721e27464f315f12e947636aa1fcd4d5be792c1299ea5ddd0440a82d377-rootfs.mount: Deactivated successfully.
Nov 12 17:43:44.873645 kubelet[2454]: I1112 17:43:44.872533    2454 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
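"Fast updating node status as it just became ready" is logged once the install-cni container has dropped the CNI binaries and config and the runtime's NetworkReady condition flips: the kubelet pushes a status update immediately instead of waiting for its periodic sync, which is why the pending workload pods are scheduled in the next few lines. A hedged client-go sketch of reading that condition from outside the node; the kubeconfig path is an assumption:

```go
// Read the NodeReady condition that just flipped, via client-go.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path to an admin kubeconfig is assumed for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}
```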
Nov 12 17:43:44.896696 containerd[1432]: time="2024-11-12T17:43:44.896607216Z" level=info msg="shim disconnected" id=6b0b3721e27464f315f12e947636aa1fcd4d5be792c1299ea5ddd0440a82d377 namespace=k8s.io
Nov 12 17:43:44.896696 containerd[1432]: time="2024-11-12T17:43:44.896686524Z" level=warning msg="cleaning up after shim disconnected" id=6b0b3721e27464f315f12e947636aa1fcd4d5be792c1299ea5ddd0440a82d377 namespace=k8s.io
Nov 12 17:43:44.896696 containerd[1432]: time="2024-11-12T17:43:44.896697928Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:43:44.928023 systemd[1]: Created slice kubepods-besteffort-pod3e30889a_e694_4689_92d5_cf89f334a65b.slice - libcontainer container kubepods-besteffort-pod3e30889a_e694_4689_92d5_cf89f334a65b.slice.
Nov 12 17:43:44.940809 systemd[1]: Created slice kubepods-burstable-pod98f3ae8e_274c_478a_a46c_2a1f05e70b20.slice - libcontainer container kubepods-burstable-pod98f3ae8e_274c_478a_a46c_2a1f05e70b20.slice.
Nov 12 17:43:44.957412 systemd[1]: Created slice kubepods-besteffort-pod47b1c060_5f96_4d3d_854b_cd0f2891eab7.slice - libcontainer container kubepods-besteffort-pod47b1c060_5f96_4d3d_854b_cd0f2891eab7.slice.
Nov 12 17:43:44.963627 systemd[1]: Created slice kubepods-burstable-pod53e0c452_1122_4c00_814a_21a5b2fcb5be.slice - libcontainer container kubepods-burstable-pod53e0c452_1122_4c00_814a_21a5b2fcb5be.slice.
Nov 12 17:43:44.968960 systemd[1]: Created slice kubepods-besteffort-podb7973464_e27c_437f_b721_54ead210e780.slice - libcontainer container kubepods-besteffort-podb7973464_e27c_437f_b721_54ead210e780.slice.
Nov 12 17:43:44.995500 kubelet[2454]: I1112 17:43:44.995458    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l2sl\" (UniqueName: \"kubernetes.io/projected/47b1c060-5f96-4d3d-854b-cd0f2891eab7-kube-api-access-7l2sl\") pod \"calico-apiserver-6548764f9d-nfm69\" (UID: \"47b1c060-5f96-4d3d-854b-cd0f2891eab7\") " pod="calico-apiserver/calico-apiserver-6548764f9d-nfm69"
Nov 12 17:43:44.995500 kubelet[2454]: I1112 17:43:44.995501    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7973464-e27c-437f-b721-54ead210e780-tigera-ca-bundle\") pod \"calico-kube-controllers-68bb8ff95b-ztb6p\" (UID: \"b7973464-e27c-437f-b721-54ead210e780\") " pod="calico-system/calico-kube-controllers-68bb8ff95b-ztb6p"
Nov 12 17:43:44.995622 kubelet[2454]: I1112 17:43:44.995521    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98f3ae8e-274c-478a-a46c-2a1f05e70b20-config-volume\") pod \"coredns-6f6b679f8f-68v88\" (UID: \"98f3ae8e-274c-478a-a46c-2a1f05e70b20\") " pod="kube-system/coredns-6f6b679f8f-68v88"
Nov 12 17:43:44.995622 kubelet[2454]: I1112 17:43:44.995541    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-825pk\" (UniqueName: \"kubernetes.io/projected/53e0c452-1122-4c00-814a-21a5b2fcb5be-kube-api-access-825pk\") pod \"coredns-6f6b679f8f-9q5zc\" (UID: \"53e0c452-1122-4c00-814a-21a5b2fcb5be\") " pod="kube-system/coredns-6f6b679f8f-9q5zc"
Nov 12 17:43:44.995622 kubelet[2454]: I1112 17:43:44.995557    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d84c\" (UniqueName: \"kubernetes.io/projected/3e30889a-e694-4689-92d5-cf89f334a65b-kube-api-access-9d84c\") pod \"calico-apiserver-6548764f9d-j6twf\" (UID: \"3e30889a-e694-4689-92d5-cf89f334a65b\") " pod="calico-apiserver/calico-apiserver-6548764f9d-j6twf"
Nov 12 17:43:44.995622 kubelet[2454]: I1112 17:43:44.995572    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4qg4\" (UniqueName: \"kubernetes.io/projected/b7973464-e27c-437f-b721-54ead210e780-kube-api-access-v4qg4\") pod \"calico-kube-controllers-68bb8ff95b-ztb6p\" (UID: \"b7973464-e27c-437f-b721-54ead210e780\") " pod="calico-system/calico-kube-controllers-68bb8ff95b-ztb6p"
Nov 12 17:43:44.995622 kubelet[2454]: I1112 17:43:44.995591    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpls4\" (UniqueName: \"kubernetes.io/projected/98f3ae8e-274c-478a-a46c-2a1f05e70b20-kube-api-access-zpls4\") pod \"coredns-6f6b679f8f-68v88\" (UID: \"98f3ae8e-274c-478a-a46c-2a1f05e70b20\") " pod="kube-system/coredns-6f6b679f8f-68v88"
Nov 12 17:43:44.995765 kubelet[2454]: I1112 17:43:44.995609    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3e30889a-e694-4689-92d5-cf89f334a65b-calico-apiserver-certs\") pod \"calico-apiserver-6548764f9d-j6twf\" (UID: \"3e30889a-e694-4689-92d5-cf89f334a65b\") " pod="calico-apiserver/calico-apiserver-6548764f9d-j6twf"
Nov 12 17:43:44.995765 kubelet[2454]: I1112 17:43:44.995628    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/47b1c060-5f96-4d3d-854b-cd0f2891eab7-calico-apiserver-certs\") pod \"calico-apiserver-6548764f9d-nfm69\" (UID: \"47b1c060-5f96-4d3d-854b-cd0f2891eab7\") " pod="calico-apiserver/calico-apiserver-6548764f9d-nfm69"
Nov 12 17:43:44.995765 kubelet[2454]: I1112 17:43:44.995643    2454 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53e0c452-1122-4c00-814a-21a5b2fcb5be-config-volume\") pod \"coredns-6f6b679f8f-9q5zc\" (UID: \"53e0c452-1122-4c00-814a-21a5b2fcb5be\") " pod="kube-system/coredns-6f6b679f8f-9q5zc"
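This burst of VerifyControllerAttachedVolume entries is the kubelet's volume manager registering every volume of the freshly scheduled pods before mounting them: projected service-account tokens (the kube-api-access-* names), ConfigMaps (tigera-ca-bundle, config-volume), and Secrets (calico-apiserver-certs). A sketch of what those three source kinds look like on a pod spec using client-go's typed API; the volume names echo the log, while the surrounding program is invented for illustration:

```go
// The three volume source kinds seen in the reconciler lines above,
// expressed with the typed Kubernetes API.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			Name: "kube-api-access-7l2sl", // projected SA token, as in the log
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"},
					}},
				},
			},
		},
		{
			Name: "tigera-ca-bundle", // ConfigMap-backed CA bundle
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "tigera-ca-bundle"},
				},
			},
		},
		{
			Name: "calico-apiserver-certs", // Secret-backed TLS certs
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "calico-apiserver-certs"},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println("register volume:", v.Name)
	}
}
```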
Nov 12 17:43:45.235579 containerd[1432]: time="2024-11-12T17:43:45.234961184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6548764f9d-j6twf,Uid:3e30889a-e694-4689-92d5-cf89f334a65b,Namespace:calico-apiserver,Attempt:0,}"
Nov 12 17:43:45.245001 kubelet[2454]: E1112 17:43:45.244964    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:45.245554 containerd[1432]: time="2024-11-12T17:43:45.245346006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-68v88,Uid:98f3ae8e-274c-478a-a46c-2a1f05e70b20,Namespace:kube-system,Attempt:0,}"
Nov 12 17:43:45.262732 containerd[1432]: time="2024-11-12T17:43:45.262679984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6548764f9d-nfm69,Uid:47b1c060-5f96-4d3d-854b-cd0f2891eab7,Namespace:calico-apiserver,Attempt:0,}"
Nov 12 17:43:45.267534 kubelet[2454]: E1112 17:43:45.267488    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:45.272986 containerd[1432]: time="2024-11-12T17:43:45.272956370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9q5zc,Uid:53e0c452-1122-4c00-814a-21a5b2fcb5be,Namespace:kube-system,Attempt:0,}"
Nov 12 17:43:45.273305 containerd[1432]: time="2024-11-12T17:43:45.272988100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bb8ff95b-ztb6p,Uid:b7973464-e27c-437f-b721-54ead210e780,Namespace:calico-system,Attempt:0,}"
Nov 12 17:43:45.461865 systemd[1]: Created slice kubepods-besteffort-podb411254a_fa39_4c2a_ae0e_e271a38a0ca1.slice - libcontainer container kubepods-besteffort-podb411254a_fa39_4c2a_ae0e_e271a38a0ca1.slice.
Nov 12 17:43:45.484425 containerd[1432]: time="2024-11-12T17:43:45.482411512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2dwk,Uid:b411254a-fa39-4c2a-ae0e-e271a38a0ca1,Namespace:calico-system,Attempt:0,}"
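Each "RunPodSandbox for &PodSandboxMetadata{...}" line above is the kubelet issuing a CRI RunPodSandbox RPC to containerd over its unix socket; everything that follows is that call failing inside the Calico CNI plugin. A bare-bones sketch of the same RPC with the cri-api gRPC client — the socket address and dial options are assumptions, and the metadata values mirror the csi-node-driver line:

```go
// Bare-bones version of the RunPodSandbox RPC the kubelet issues above,
// using the CRI gRPC API directly.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Mirrors &PodSandboxMetadata{Name:csi-node-driver-g2dwk,...} from the log.
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "csi-node-driver-g2dwk",
				Namespace: "calico-system",
				Uid:       "b411254a-fa39-4c2a-ae0e-e271a38a0ca1",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		// With Calico uninitialized this returns the rpc error quoted in the
		// kubelet lines that follow.
		log.Fatalf("RunPodSandbox failed: %v", err)
	}
	log.Printf("sandbox id: %s", resp.PodSandboxId)
}
```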
Nov 12 17:43:45.583399 kubelet[2454]: E1112 17:43:45.583121    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:45.592139 containerd[1432]: time="2024-11-12T17:43:45.592084352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\""
Nov 12 17:43:45.629604 containerd[1432]: time="2024-11-12T17:43:45.629541078Z" level=error msg="Failed to destroy network for sandbox \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.630208 containerd[1432]: time="2024-11-12T17:43:45.630175210Z" level=error msg="encountered an error cleaning up failed sandbox \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.630334 containerd[1432]: time="2024-11-12T17:43:45.630312095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6548764f9d-j6twf,Uid:3e30889a-e694-4689-92d5-cf89f334a65b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.631806 containerd[1432]: time="2024-11-12T17:43:45.631767260Z" level=error msg="Failed to destroy network for sandbox \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.632097 containerd[1432]: time="2024-11-12T17:43:45.632063319Z" level=error msg="encountered an error cleaning up failed sandbox \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.632154 kubelet[2454]: E1112 17:43:45.632057    2454 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.632208 containerd[1432]: time="2024-11-12T17:43:45.632116057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bb8ff95b-ztb6p,Uid:b7973464-e27c-437f-b721-54ead210e780,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.632318 kubelet[2454]: E1112 17:43:45.632163    2454 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6548764f9d-j6twf"
Nov 12 17:43:45.632318 kubelet[2454]: E1112 17:43:45.632192    2454 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6548764f9d-j6twf"
Nov 12 17:43:45.632318 kubelet[2454]: E1112 17:43:45.632235    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6548764f9d-j6twf_calico-apiserver(3e30889a-e694-4689-92d5-cf89f334a65b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6548764f9d-j6twf_calico-apiserver(3e30889a-e694-4689-92d5-cf89f334a65b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6548764f9d-j6twf" podUID="3e30889a-e694-4689-92d5-cf89f334a65b"
Nov 12 17:43:45.633472 kubelet[2454]: E1112 17:43:45.632247    2454 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.633472 kubelet[2454]: E1112 17:43:45.632293    2454 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68bb8ff95b-ztb6p"
Nov 12 17:43:45.633472 kubelet[2454]: E1112 17:43:45.632310    2454 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68bb8ff95b-ztb6p"
Nov 12 17:43:45.633548 kubelet[2454]: E1112 17:43:45.632340    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68bb8ff95b-ztb6p_calico-system(b7973464-e27c-437f-b721-54ead210e780)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68bb8ff95b-ztb6p_calico-system(b7973464-e27c-437f-b721-54ead210e780)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68bb8ff95b-ztb6p" podUID="b7973464-e27c-437f-b721-54ead210e780"
Nov 12 17:43:45.640916 containerd[1432]: time="2024-11-12T17:43:45.640862212Z" level=error msg="Failed to destroy network for sandbox \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.641342 containerd[1432]: time="2024-11-12T17:43:45.641308481Z" level=error msg="encountered an error cleaning up failed sandbox \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.641450 containerd[1432]: time="2024-11-12T17:43:45.641429001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6548764f9d-nfm69,Uid:47b1c060-5f96-4d3d-854b-cd0f2891eab7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.641774 kubelet[2454]: E1112 17:43:45.641701    2454 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.641901 kubelet[2454]: E1112 17:43:45.641883    2454 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6548764f9d-nfm69"
Nov 12 17:43:45.641985 kubelet[2454]: E1112 17:43:45.641957    2454 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6548764f9d-nfm69"
Nov 12 17:43:45.642519 kubelet[2454]: E1112 17:43:45.642487    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6548764f9d-nfm69_calico-apiserver(47b1c060-5f96-4d3d-854b-cd0f2891eab7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6548764f9d-nfm69_calico-apiserver(47b1c060-5f96-4d3d-854b-cd0f2891eab7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6548764f9d-nfm69" podUID="47b1c060-5f96-4d3d-854b-cd0f2891eab7"
Nov 12 17:43:45.645296 containerd[1432]: time="2024-11-12T17:43:45.645258238Z" level=error msg="Failed to destroy network for sandbox \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.647982 containerd[1432]: time="2024-11-12T17:43:45.647950815Z" level=error msg="encountered an error cleaning up failed sandbox \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.648037 containerd[1432]: time="2024-11-12T17:43:45.648007194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9q5zc,Uid:53e0c452-1122-4c00-814a-21a5b2fcb5be,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.648207 kubelet[2454]: E1112 17:43:45.648180    2454 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.648246 kubelet[2454]: E1112 17:43:45.648223    2454 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9q5zc"
Nov 12 17:43:45.648281 kubelet[2454]: E1112 17:43:45.648244    2454 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-9q5zc"
Nov 12 17:43:45.648309 kubelet[2454]: E1112 17:43:45.648279    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-9q5zc_kube-system(53e0c452-1122-4c00-814a-21a5b2fcb5be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-9q5zc_kube-system(53e0c452-1122-4c00-814a-21a5b2fcb5be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9q5zc" podUID="53e0c452-1122-4c00-814a-21a5b2fcb5be"
Nov 12 17:43:45.651828 containerd[1432]: time="2024-11-12T17:43:45.651789975Z" level=error msg="Failed to destroy network for sandbox \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.652204 containerd[1432]: time="2024-11-12T17:43:45.652162299Z" level=error msg="encountered an error cleaning up failed sandbox \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.652247 containerd[1432]: time="2024-11-12T17:43:45.652215837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-68v88,Uid:98f3ae8e-274c-478a-a46c-2a1f05e70b20,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.652420 kubelet[2454]: E1112 17:43:45.652393    2454 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.652460 kubelet[2454]: E1112 17:43:45.652435    2454 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-68v88"
Nov 12 17:43:45.652460 kubelet[2454]: E1112 17:43:45.652455    2454 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-68v88"
Nov 12 17:43:45.652518 kubelet[2454]: E1112 17:43:45.652489    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-68v88_kube-system(98f3ae8e-274c-478a-a46c-2a1f05e70b20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-68v88_kube-system(98f3ae8e-274c-478a-a46c-2a1f05e70b20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-68v88" podUID="98f3ae8e-274c-478a-a46c-2a1f05e70b20"
Nov 12 17:43:45.654828 containerd[1432]: time="2024-11-12T17:43:45.654798458Z" level=error msg="Failed to destroy network for sandbox \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.655081 containerd[1432]: time="2024-11-12T17:43:45.655055624Z" level=error msg="encountered an error cleaning up failed sandbox \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.655131 containerd[1432]: time="2024-11-12T17:43:45.655107401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2dwk,Uid:b411254a-fa39-4c2a-ae0e-e271a38a0ca1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.655327 kubelet[2454]: E1112 17:43:45.655300    2454 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:45.655364 kubelet[2454]: E1112 17:43:45.655342    2454 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g2dwk"
Nov 12 17:43:45.655403 kubelet[2454]: E1112 17:43:45.655367    2454 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g2dwk"
Nov 12 17:43:45.655453 kubelet[2454]: E1112 17:43:45.655405    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g2dwk_calico-system(b411254a-fa39-4c2a-ae0e-e271a38a0ca1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g2dwk_calico-system(b411254a-fa39-4c2a-ae0e-e271a38a0ca1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g2dwk" podUID="b411254a-fa39-4c2a-ae0e-e271a38a0ca1"
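Every sandbox failure above bottoms out in the same precondition: the Calico CNI binary stats /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico mounted. Until it exists, both the add path (RunPodSandbox) and the delete path (cleaning up the failed sandbox) return the same stat error, which is why the pods stay in ContainerCreating and the dead sandboxes cannot be torn down either. A trivial sketch of that guard; the message text paraphrases the plugin's:

```go
// Sketch of the readiness guard behind every "stat /var/lib/calico/nodename"
// error above.
package main

import (
	"fmt"
	"os"
)

// nodenameFile is written by calico/node at startup; the CNI plugin refuses
// to run ADD/DEL until it exists, which is the failure seen in the log.
const nodenameFile = "/var/lib/calico/nodename"

func checkCalicoReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		return fmt.Errorf("stat %s: %w: check that the calico/node container "+
			"is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return nil
}

func main() {
	if err := checkCalicoReady(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico/node has initialized this node")
}
```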
Nov 12 17:43:46.210671 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89-shm.mount: Deactivated successfully.
Nov 12 17:43:46.212935 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de-shm.mount: Deactivated successfully.
Nov 12 17:43:46.587139 kubelet[2454]: I1112 17:43:46.586769    2454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:43:46.588762 containerd[1432]: time="2024-11-12T17:43:46.588080554Z" level=info msg="StopPodSandbox for \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\""
Nov 12 17:43:46.588762 containerd[1432]: time="2024-11-12T17:43:46.588282419Z" level=info msg="Ensure that sandbox 3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de in task-service has been cleanup successfully"
Nov 12 17:43:46.589045 kubelet[2454]: I1112 17:43:46.589024    2454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:43:46.589617 containerd[1432]: time="2024-11-12T17:43:46.589582275Z" level=info msg="StopPodSandbox for \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\""
Nov 12 17:43:46.589751 containerd[1432]: time="2024-11-12T17:43:46.589731002Z" level=info msg="Ensure that sandbox 6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0 in task-service has been cleanup successfully"
Nov 12 17:43:46.590831 kubelet[2454]: I1112 17:43:46.590677    2454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:43:46.591693 containerd[1432]: time="2024-11-12T17:43:46.591658260Z" level=info msg="StopPodSandbox for \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\""
Nov 12 17:43:46.592549 kubelet[2454]: I1112 17:43:46.592512    2454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:43:46.592941 containerd[1432]: time="2024-11-12T17:43:46.592918223Z" level=info msg="Ensure that sandbox 1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89 in task-service has been cleanup successfully"
Nov 12 17:43:46.593208 containerd[1432]: time="2024-11-12T17:43:46.592937349Z" level=info msg="StopPodSandbox for \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\""
Nov 12 17:43:46.593365 containerd[1432]: time="2024-11-12T17:43:46.593320792Z" level=info msg="Ensure that sandbox acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7 in task-service has been cleanup successfully"
Nov 12 17:43:46.595818 kubelet[2454]: I1112 17:43:46.595752    2454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:43:46.596438 containerd[1432]: time="2024-11-12T17:43:46.596397337Z" level=info msg="StopPodSandbox for \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\""
Nov 12 17:43:46.596654 containerd[1432]: time="2024-11-12T17:43:46.596601563Z" level=info msg="Ensure that sandbox 72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36 in task-service has been cleanup successfully"
Nov 12 17:43:46.599253 kubelet[2454]: I1112 17:43:46.599225    2454 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:43:46.600281 containerd[1432]: time="2024-11-12T17:43:46.600172466Z" level=info msg="StopPodSandbox for \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\""
Nov 12 17:43:46.600360 containerd[1432]: time="2024-11-12T17:43:46.600321634Z" level=info msg="Ensure that sandbox fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201 in task-service has been cleanup successfully"
Nov 12 17:43:46.639402 containerd[1432]: time="2024-11-12T17:43:46.639193242Z" level=error msg="StopPodSandbox for \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\" failed" error="failed to destroy network for sandbox \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:46.639542 kubelet[2454]: E1112 17:43:46.639437    2454 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:43:46.639542 kubelet[2454]: E1112 17:43:46.639505    2454 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"}
Nov 12 17:43:46.639611 kubelet[2454]: E1112 17:43:46.639567    2454 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"98f3ae8e-274c-478a-a46c-2a1f05e70b20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 12 17:43:46.639611 kubelet[2454]: E1112 17:43:46.639589    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"98f3ae8e-274c-478a-a46c-2a1f05e70b20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-68v88" podUID="98f3ae8e-274c-478a-a46c-2a1f05e70b20"
Nov 12 17:43:46.649747 containerd[1432]: time="2024-11-12T17:43:46.649638867Z" level=error msg="StopPodSandbox for \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\" failed" error="failed to destroy network for sandbox \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:46.650149 kubelet[2454]: E1112 17:43:46.649950    2454 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:43:46.650149 kubelet[2454]: E1112 17:43:46.650005    2454 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"}
Nov 12 17:43:46.650149 kubelet[2454]: E1112 17:43:46.650038    2454 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3e30889a-e694-4689-92d5-cf89f334a65b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 12 17:43:46.650149 kubelet[2454]: E1112 17:43:46.650070    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3e30889a-e694-4689-92d5-cf89f334a65b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6548764f9d-j6twf" podUID="3e30889a-e694-4689-92d5-cf89f334a65b"
Nov 12 17:43:46.651315 containerd[1432]: time="2024-11-12T17:43:46.651261947Z" level=error msg="StopPodSandbox for \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\" failed" error="failed to destroy network for sandbox \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:46.651472 kubelet[2454]: E1112 17:43:46.651432    2454 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:43:46.651510 kubelet[2454]: E1112 17:43:46.651473    2454 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"}
Nov 12 17:43:46.651510 kubelet[2454]: E1112 17:43:46.651504    2454 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53e0c452-1122-4c00-814a-21a5b2fcb5be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 12 17:43:46.651572 kubelet[2454]: E1112 17:43:46.651523    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"53e0c452-1122-4c00-814a-21a5b2fcb5be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-9q5zc" podUID="53e0c452-1122-4c00-814a-21a5b2fcb5be"
Nov 12 17:43:46.659010 containerd[1432]: time="2024-11-12T17:43:46.658965054Z" level=error msg="StopPodSandbox for \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\" failed" error="failed to destroy network for sandbox \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:46.659159 kubelet[2454]: E1112 17:43:46.659126    2454 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:43:46.659204 kubelet[2454]: E1112 17:43:46.659164    2454 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"}
Nov 12 17:43:46.659204 kubelet[2454]: E1112 17:43:46.659192    2454 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b411254a-fa39-4c2a-ae0e-e271a38a0ca1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 12 17:43:46.659270 kubelet[2454]: E1112 17:43:46.659213    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b411254a-fa39-4c2a-ae0e-e271a38a0ca1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g2dwk" podUID="b411254a-fa39-4c2a-ae0e-e271a38a0ca1"
Nov 12 17:43:46.662965 containerd[1432]: time="2024-11-12T17:43:46.662932124Z" level=error msg="StopPodSandbox for \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\" failed" error="failed to destroy network for sandbox \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:46.663234 kubelet[2454]: E1112 17:43:46.663092    2454 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:43:46.663234 kubelet[2454]: E1112 17:43:46.663125    2454 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"}
Nov 12 17:43:46.663234 kubelet[2454]: E1112 17:43:46.663151    2454 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7973464-e27c-437f-b721-54ead210e780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 12 17:43:46.663234 kubelet[2454]: E1112 17:43:46.663168    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7973464-e27c-437f-b721-54ead210e780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68bb8ff95b-ztb6p" podUID="b7973464-e27c-437f-b721-54ead210e780"
Nov 12 17:43:46.665795 containerd[1432]: time="2024-11-12T17:43:46.665753948Z" level=error msg="StopPodSandbox for \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\" failed" error="failed to destroy network for sandbox \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:43:46.666021 kubelet[2454]: E1112 17:43:46.665927    2454 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:43:46.666067 kubelet[2454]: E1112 17:43:46.666022    2454 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"}
Nov 12 17:43:46.666067 kubelet[2454]: E1112 17:43:46.666048    2454 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47b1c060-5f96-4d3d-854b-cd0f2891eab7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 12 17:43:46.666196 kubelet[2454]: E1112 17:43:46.666065    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47b1c060-5f96-4d3d-854b-cd0f2891eab7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6548764f9d-nfm69" podUID="47b1c060-5f96-4d3d-854b-cd0f2891eab7"
Nov 12 17:43:49.262284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654835024.mount: Deactivated successfully.
Nov 12 17:43:49.502917 containerd[1432]: time="2024-11-12T17:43:49.502862774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:49.503608 containerd[1432]: time="2024-11-12T17:43:49.503569335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=135495328"
Nov 12 17:43:49.504258 containerd[1432]: time="2024-11-12T17:43:49.504221642Z" level=info msg="ImageCreate event name:\"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:49.506051 containerd[1432]: time="2024-11-12T17:43:49.506005871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:49.506499 containerd[1432]: time="2024-11-12T17:43:49.506473725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"135495190\" in 3.914342597s"
Nov 12 17:43:49.506546 containerd[1432]: time="2024-11-12T17:43:49.506505974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\""
Nov 12 17:43:49.533772 containerd[1432]: time="2024-11-12T17:43:49.533413579Z" level=info msg="CreateContainer within sandbox \"a3d4c342e684ab132fd91909519364f97411f9bf251f3f8fbb95c1b307fe8729\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Nov 12 17:43:49.554462 containerd[1432]: time="2024-11-12T17:43:49.554415778Z" level=info msg="CreateContainer within sandbox \"a3d4c342e684ab132fd91909519364f97411f9bf251f3f8fbb95c1b307fe8729\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"205741bf0c9e605a98e22a87971ed9b942cf8ee4801b925c77e74d815cfdf3d6\""
Nov 12 17:43:49.556229 containerd[1432]: time="2024-11-12T17:43:49.554841699Z" level=info msg="StartContainer for \"205741bf0c9e605a98e22a87971ed9b942cf8ee4801b925c77e74d815cfdf3d6\""
Nov 12 17:43:49.602861 systemd[1]: Started cri-containerd-205741bf0c9e605a98e22a87971ed9b942cf8ee4801b925c77e74d815cfdf3d6.scope - libcontainer container 205741bf0c9e605a98e22a87971ed9b942cf8ee4801b925c77e74d815cfdf3d6.
Nov 12 17:43:49.631007 containerd[1432]: time="2024-11-12T17:43:49.630959880Z" level=info msg="StartContainer for \"205741bf0c9e605a98e22a87971ed9b942cf8ee4801b925c77e74d815cfdf3d6\" returns successfully"
Nov 12 17:43:49.783751 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Nov 12 17:43:49.783854 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Nov 12 17:43:50.611296 kubelet[2454]: E1112 17:43:50.611226    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:51.612614 kubelet[2454]: E1112 17:43:51.612525    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:52.434109 systemd[1]: Started sshd@7-10.0.0.44:22-10.0.0.1:51618.service - OpenSSH per-connection server daemon (10.0.0.1:51618).
Nov 12 17:43:52.480328 sshd[3838]: Accepted publickey for core from 10.0.0.1 port 51618 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:43:52.482006 sshd[3838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:52.487253 systemd-logind[1415]: New session 8 of user core.
Nov 12 17:43:52.495003 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 12 17:43:52.652927 sshd[3838]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:52.656986 systemd[1]: sshd@7-10.0.0.44:22-10.0.0.1:51618.service: Deactivated successfully.
Nov 12 17:43:52.659013 systemd[1]: session-8.scope: Deactivated successfully.
Nov 12 17:43:52.659643 systemd-logind[1415]: Session 8 logged out. Waiting for processes to exit.
Nov 12 17:43:52.660495 systemd-logind[1415]: Removed session 8.
Nov 12 17:43:57.449234 containerd[1432]: time="2024-11-12T17:43:57.448489346Z" level=info msg="StopPodSandbox for \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\""
Nov 12 17:43:57.611217 kubelet[2454]: I1112 17:43:57.611147    2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pbzgc" podStartSLOduration=8.951277167 podStartE2EDuration="22.611125166s" podCreationTimestamp="2024-11-12 17:43:35 +0000 UTC" firstStartedPulling="2024-11-12 17:43:35.867923449 +0000 UTC m=+15.535671992" lastFinishedPulling="2024-11-12 17:43:49.527771408 +0000 UTC m=+29.195519991" observedRunningTime="2024-11-12 17:43:50.636066926 +0000 UTC m=+30.303815509" watchObservedRunningTime="2024-11-12 17:43:57.611125166 +0000 UTC m=+37.278873749"
Nov 12 17:43:57.664953 systemd[1]: Started sshd@8-10.0.0.44:22-10.0.0.1:38320.service - OpenSSH per-connection server daemon (10.0.0.1:38320).
Nov 12 17:43:57.711491 sshd[4005]: Accepted publickey for core from 10.0.0.1 port 38320 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:43:57.713008 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:57.719873 systemd-logind[1415]: New session 9 of user core.
Nov 12 17:43:57.727127 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 12 17:43:57.728823 containerd[1432]: 2024-11-12 17:43:57.607 [INFO][3980] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:43:57.728823 containerd[1432]: 2024-11-12 17:43:57.608 [INFO][3980] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" iface="eth0" netns="/var/run/netns/cni-2d16e4cd-ff1a-a2a4-cd1c-ce6e7825c60f"
Nov 12 17:43:57.728823 containerd[1432]: 2024-11-12 17:43:57.608 [INFO][3980] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" iface="eth0" netns="/var/run/netns/cni-2d16e4cd-ff1a-a2a4-cd1c-ce6e7825c60f"
Nov 12 17:43:57.728823 containerd[1432]: 2024-11-12 17:43:57.609 [INFO][3980] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" iface="eth0" netns="/var/run/netns/cni-2d16e4cd-ff1a-a2a4-cd1c-ce6e7825c60f"
Nov 12 17:43:57.728823 containerd[1432]: 2024-11-12 17:43:57.609 [INFO][3980] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:43:57.728823 containerd[1432]: 2024-11-12 17:43:57.609 [INFO][3980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:43:57.728823 containerd[1432]: 2024-11-12 17:43:57.705 [INFO][3998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" HandleID="k8s-pod-network.72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" Workload="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:43:57.728823 containerd[1432]: 2024-11-12 17:43:57.705 [INFO][3998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:43:57.728823 containerd[1432]: 2024-11-12 17:43:57.705 [INFO][3998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:43:57.728823 containerd[1432]: 2024-11-12 17:43:57.717 [WARNING][3998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" HandleID="k8s-pod-network.72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" Workload="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:43:57.728823 containerd[1432]: 2024-11-12 17:43:57.717 [INFO][3998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" HandleID="k8s-pod-network.72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" Workload="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:43:57.728823 containerd[1432]: 2024-11-12 17:43:57.720 [INFO][3998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:43:57.728823 containerd[1432]: 2024-11-12 17:43:57.722 [INFO][3980] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:43:57.728823 containerd[1432]: time="2024-11-12T17:43:57.724811084Z" level=info msg="TearDown network for sandbox \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\" successfully"
Nov 12 17:43:57.728823 containerd[1432]: time="2024-11-12T17:43:57.724837010Z" level=info msg="StopPodSandbox for \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\" returns successfully"
Nov 12 17:43:57.728823 containerd[1432]: time="2024-11-12T17:43:57.727981662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bb8ff95b-ztb6p,Uid:b7973464-e27c-437f-b721-54ead210e780,Namespace:calico-system,Attempt:1,}"
Nov 12 17:43:57.730305 systemd[1]: run-netns-cni\x2d2d16e4cd\x2dff1a\x2da2a4\x2dcd1c\x2dce6e7825c60f.mount: Deactivated successfully.
Nov 12 17:43:57.876497 systemd-networkd[1362]: cali6811d14c6a5: Link UP
Nov 12 17:43:57.877883 systemd-networkd[1362]: cali6811d14c6a5: Gained carrier
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.777 [INFO][4012] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.795 [INFO][4012] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0 calico-kube-controllers-68bb8ff95b- calico-system  b7973464-e27c-437f-b721-54ead210e780 932 0 2024-11-12 17:43:35 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68bb8ff95b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  localhost  calico-kube-controllers-68bb8ff95b-ztb6p eth0 calico-kube-controllers [] []   [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6811d14c6a5  [] []}} ContainerID="95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" Namespace="calico-system" Pod="calico-kube-controllers-68bb8ff95b-ztb6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-"
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.795 [INFO][4012] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" Namespace="calico-system" Pod="calico-kube-controllers-68bb8ff95b-ztb6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.823 [INFO][4033] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" HandleID="k8s-pod-network.95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" Workload="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.836 [INFO][4033] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" HandleID="k8s-pod-network.95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" Workload="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c060), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-68bb8ff95b-ztb6p", "timestamp":"2024-11-12 17:43:57.823704224 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.836 [INFO][4033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.836 [INFO][4033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.836 [INFO][4033] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.838 [INFO][4033] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" host="localhost"
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.844 [INFO][4033] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.852 [INFO][4033] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.854 [INFO][4033] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.856 [INFO][4033] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.856 [INFO][4033] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" host="localhost"
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.858 [INFO][4033] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.862 [INFO][4033] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" host="localhost"
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.868 [INFO][4033] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" host="localhost"
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.868 [INFO][4033] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" host="localhost"
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.868 [INFO][4033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:43:57.893091 containerd[1432]: 2024-11-12 17:43:57.868 [INFO][4033] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" HandleID="k8s-pod-network.95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" Workload="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:43:57.894438 containerd[1432]: 2024-11-12 17:43:57.870 [INFO][4012] cni-plugin/k8s.go 386: Populated endpoint ContainerID="95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" Namespace="calico-system" Pod="calico-kube-controllers-68bb8ff95b-ztb6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0", GenerateName:"calico-kube-controllers-68bb8ff95b-", Namespace:"calico-system", SelfLink:"", UID:"b7973464-e27c-437f-b721-54ead210e780", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bb8ff95b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-68bb8ff95b-ztb6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6811d14c6a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:43:57.894438 containerd[1432]: 2024-11-12 17:43:57.870 [INFO][4012] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" Namespace="calico-system" Pod="calico-kube-controllers-68bb8ff95b-ztb6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:43:57.894438 containerd[1432]: 2024-11-12 17:43:57.870 [INFO][4012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6811d14c6a5 ContainerID="95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" Namespace="calico-system" Pod="calico-kube-controllers-68bb8ff95b-ztb6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:43:57.894438 containerd[1432]: 2024-11-12 17:43:57.877 [INFO][4012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" Namespace="calico-system" Pod="calico-kube-controllers-68bb8ff95b-ztb6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:43:57.894438 containerd[1432]: 2024-11-12 17:43:57.878 [INFO][4012] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" Namespace="calico-system" Pod="calico-kube-controllers-68bb8ff95b-ztb6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0", GenerateName:"calico-kube-controllers-68bb8ff95b-", Namespace:"calico-system", SelfLink:"", UID:"b7973464-e27c-437f-b721-54ead210e780", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bb8ff95b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80", Pod:"calico-kube-controllers-68bb8ff95b-ztb6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6811d14c6a5", MAC:"b2:1c:ee:2a:b4:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:43:57.894438 containerd[1432]: 2024-11-12 17:43:57.890 [INFO][4012] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80" Namespace="calico-system" Pod="calico-kube-controllers-68bb8ff95b-ztb6p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:43:57.911106 sshd[4005]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:57.914117 systemd[1]: session-9.scope: Deactivated successfully.
Nov 12 17:43:57.916313 systemd[1]: sshd@8-10.0.0.44:22-10.0.0.1:38320.service: Deactivated successfully.
Nov 12 17:43:57.918751 systemd-logind[1415]: Session 9 logged out. Waiting for processes to exit.
Nov 12 17:43:57.919988 systemd-logind[1415]: Removed session 9.
Nov 12 17:43:57.932112 containerd[1432]: time="2024-11-12T17:43:57.931642477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:43:57.932287 containerd[1432]: time="2024-11-12T17:43:57.932135986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:43:57.932287 containerd[1432]: time="2024-11-12T17:43:57.932151709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:57.932287 containerd[1432]: time="2024-11-12T17:43:57.932228966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:57.953885 systemd[1]: Started cri-containerd-95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80.scope - libcontainer container 95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80.
Nov 12 17:43:57.962900 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 12 17:43:57.985127 containerd[1432]: time="2024-11-12T17:43:57.985089768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68bb8ff95b-ztb6p,Uid:b7973464-e27c-437f-b721-54ead210e780,Namespace:calico-system,Attempt:1,} returns sandbox id \"95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80\""
Nov 12 17:43:57.986796 containerd[1432]: time="2024-11-12T17:43:57.986761256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\""
Nov 12 17:43:58.444620 containerd[1432]: time="2024-11-12T17:43:58.444524478Z" level=info msg="StopPodSandbox for \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\""
Nov 12 17:43:58.444781 containerd[1432]: time="2024-11-12T17:43:58.444542802Z" level=info msg="StopPodSandbox for \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\""
Nov 12 17:43:58.445120 containerd[1432]: time="2024-11-12T17:43:58.444524438Z" level=info msg="StopPodSandbox for \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\""
Nov 12 17:43:58.488415 kubelet[2454]: I1112 17:43:58.486792    2454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 17:43:58.488415 kubelet[2454]: E1112 17:43:58.487642    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:58.597344 containerd[1432]: 2024-11-12 17:43:58.504 [INFO][4152] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:43:58.597344 containerd[1432]: 2024-11-12 17:43:58.504 [INFO][4152] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" iface="eth0" netns="/var/run/netns/cni-d243d661-4059-9a8b-33da-4a0acbe966e5"
Nov 12 17:43:58.597344 containerd[1432]: 2024-11-12 17:43:58.504 [INFO][4152] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" iface="eth0" netns="/var/run/netns/cni-d243d661-4059-9a8b-33da-4a0acbe966e5"
Nov 12 17:43:58.597344 containerd[1432]: 2024-11-12 17:43:58.506 [INFO][4152] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" iface="eth0" netns="/var/run/netns/cni-d243d661-4059-9a8b-33da-4a0acbe966e5"
Nov 12 17:43:58.597344 containerd[1432]: 2024-11-12 17:43:58.506 [INFO][4152] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:43:58.597344 containerd[1432]: 2024-11-12 17:43:58.506 [INFO][4152] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:43:58.597344 containerd[1432]: 2024-11-12 17:43:58.557 [INFO][4172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" HandleID="k8s-pod-network.acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" Workload="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:43:58.597344 containerd[1432]: 2024-11-12 17:43:58.558 [INFO][4172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:43:58.597344 containerd[1432]: 2024-11-12 17:43:58.558 [INFO][4172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:43:58.597344 containerd[1432]: 2024-11-12 17:43:58.573 [WARNING][4172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" HandleID="k8s-pod-network.acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" Workload="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:43:58.597344 containerd[1432]: 2024-11-12 17:43:58.575 [INFO][4172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" HandleID="k8s-pod-network.acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" Workload="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:43:58.597344 containerd[1432]: 2024-11-12 17:43:58.577 [INFO][4172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:43:58.597344 containerd[1432]: 2024-11-12 17:43:58.590 [INFO][4152] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:43:58.599812 containerd[1432]: time="2024-11-12T17:43:58.599774692Z" level=info msg="TearDown network for sandbox \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\" successfully"
Nov 12 17:43:58.599812 containerd[1432]: time="2024-11-12T17:43:58.599806619Z" level=info msg="StopPodSandbox for \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\" returns successfully"
Nov 12 17:43:58.601929 containerd[1432]: time="2024-11-12T17:43:58.600860005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2dwk,Uid:b411254a-fa39-4c2a-ae0e-e271a38a0ca1,Namespace:calico-system,Attempt:1,}"
Nov 12 17:43:58.609660 containerd[1432]: 2024-11-12 17:43:58.534 [INFO][4151] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:43:58.609660 containerd[1432]: 2024-11-12 17:43:58.534 [INFO][4151] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" iface="eth0" netns="/var/run/netns/cni-2c6ce125-77d8-c3d4-3ccc-57d653e51233"
Nov 12 17:43:58.609660 containerd[1432]: 2024-11-12 17:43:58.534 [INFO][4151] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" iface="eth0" netns="/var/run/netns/cni-2c6ce125-77d8-c3d4-3ccc-57d653e51233"
Nov 12 17:43:58.609660 containerd[1432]: 2024-11-12 17:43:58.535 [INFO][4151] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" iface="eth0" netns="/var/run/netns/cni-2c6ce125-77d8-c3d4-3ccc-57d653e51233"
Nov 12 17:43:58.609660 containerd[1432]: 2024-11-12 17:43:58.535 [INFO][4151] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:43:58.609660 containerd[1432]: 2024-11-12 17:43:58.535 [INFO][4151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:43:58.609660 containerd[1432]: 2024-11-12 17:43:58.582 [INFO][4187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" HandleID="k8s-pod-network.6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" Workload="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:43:58.609660 containerd[1432]: 2024-11-12 17:43:58.582 [INFO][4187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:43:58.609660 containerd[1432]: 2024-11-12 17:43:58.582 [INFO][4187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:43:58.609660 containerd[1432]: 2024-11-12 17:43:58.601 [WARNING][4187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" HandleID="k8s-pod-network.6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" Workload="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:43:58.609660 containerd[1432]: 2024-11-12 17:43:58.601 [INFO][4187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" HandleID="k8s-pod-network.6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" Workload="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:43:58.609660 containerd[1432]: 2024-11-12 17:43:58.603 [INFO][4187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:43:58.609660 containerd[1432]: 2024-11-12 17:43:58.605 [INFO][4151] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:43:58.611094 containerd[1432]: time="2024-11-12T17:43:58.610889753Z" level=info msg="TearDown network for sandbox \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\" successfully"
Nov 12 17:43:58.611094 containerd[1432]: time="2024-11-12T17:43:58.610927121Z" level=info msg="StopPodSandbox for \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\" returns successfully"
Nov 12 17:43:58.612455 containerd[1432]: time="2024-11-12T17:43:58.611621430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9q5zc,Uid:53e0c452-1122-4c00-814a-21a5b2fcb5be,Namespace:kube-system,Attempt:1,}"
Nov 12 17:43:58.612503 kubelet[2454]: E1112 17:43:58.611301    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:58.626916 containerd[1432]: 2024-11-12 17:43:58.523 [INFO][4150] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:43:58.626916 containerd[1432]: 2024-11-12 17:43:58.525 [INFO][4150] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" iface="eth0" netns="/var/run/netns/cni-f6b786e0-a33d-6af8-5a9d-ecb71a2a7889"
Nov 12 17:43:58.626916 containerd[1432]: 2024-11-12 17:43:58.526 [INFO][4150] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" iface="eth0" netns="/var/run/netns/cni-f6b786e0-a33d-6af8-5a9d-ecb71a2a7889"
Nov 12 17:43:58.626916 containerd[1432]: 2024-11-12 17:43:58.527 [INFO][4150] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" iface="eth0" netns="/var/run/netns/cni-f6b786e0-a33d-6af8-5a9d-ecb71a2a7889"
Nov 12 17:43:58.626916 containerd[1432]: 2024-11-12 17:43:58.527 [INFO][4150] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:43:58.626916 containerd[1432]: 2024-11-12 17:43:58.527 [INFO][4150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:43:58.626916 containerd[1432]: 2024-11-12 17:43:58.600 [INFO][4186] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" HandleID="k8s-pod-network.3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" Workload="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:43:58.626916 containerd[1432]: 2024-11-12 17:43:58.602 [INFO][4186] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:43:58.626916 containerd[1432]: 2024-11-12 17:43:58.603 [INFO][4186] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:43:58.626916 containerd[1432]: 2024-11-12 17:43:58.615 [WARNING][4186] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" HandleID="k8s-pod-network.3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" Workload="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:43:58.626916 containerd[1432]: 2024-11-12 17:43:58.615 [INFO][4186] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" HandleID="k8s-pod-network.3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" Workload="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:43:58.626916 containerd[1432]: 2024-11-12 17:43:58.619 [INFO][4186] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:43:58.626916 containerd[1432]: 2024-11-12 17:43:58.621 [INFO][4150] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:43:58.627293 containerd[1432]: time="2024-11-12T17:43:58.626775035Z" level=info msg="TearDown network for sandbox \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\" successfully"
Nov 12 17:43:58.627293 containerd[1432]: time="2024-11-12T17:43:58.626941151Z" level=info msg="StopPodSandbox for \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\" returns successfully"
Nov 12 17:43:58.628171 containerd[1432]: time="2024-11-12T17:43:58.628018822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6548764f9d-j6twf,Uid:3e30889a-e694-4689-92d5-cf89f334a65b,Namespace:calico-apiserver,Attempt:1,}"
Nov 12 17:43:58.631522 kubelet[2454]: E1112 17:43:58.631399    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:58.735476 systemd[1]: run-netns-cni\x2dd243d661\x2d4059\x2d9a8b\x2d33da\x2d4a0acbe966e5.mount: Deactivated successfully.
Nov 12 17:43:58.735563 systemd[1]: run-netns-cni\x2d2c6ce125\x2d77d8\x2dc3d4\x2d3ccc\x2d57d653e51233.mount: Deactivated successfully.
Nov 12 17:43:58.735612 systemd[1]: run-netns-cni\x2df6b786e0\x2da33d\x2d6af8\x2d5a9d\x2decb71a2a7889.mount: Deactivated successfully.
Nov 12 17:43:58.796274 systemd-networkd[1362]: cali2d691fbac35: Link UP
Nov 12 17:43:58.796862 systemd-networkd[1362]: cali2d691fbac35: Gained carrier
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.647 [INFO][4218] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.668 [INFO][4218] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--g2dwk-eth0 csi-node-driver- calico-system  b411254a-fa39-4c2a-ae0e-e271a38a0ca1 949 0 2024-11-12 17:43:35 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:548d65b7bf k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  localhost  csi-node-driver-g2dwk eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] cali2d691fbac35  [] []}} ContainerID="0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" Namespace="calico-system" Pod="csi-node-driver-g2dwk" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2dwk-"
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.668 [INFO][4218] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" Namespace="calico-system" Pod="csi-node-driver-g2dwk" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.743 [INFO][4268] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" HandleID="k8s-pod-network.0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" Workload="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.763 [INFO][4268] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" HandleID="k8s-pod-network.0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" Workload="localhost-k8s-csi--node--driver--g2dwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e2a80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-g2dwk", "timestamp":"2024-11-12 17:43:58.743182209 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.763 [INFO][4268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.763 [INFO][4268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.763 [INFO][4268] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.766 [INFO][4268] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" host="localhost"
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.770 [INFO][4268] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.774 [INFO][4268] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.776 [INFO][4268] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.779 [INFO][4268] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.779 [INFO][4268] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" host="localhost"
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.781 [INFO][4268] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.785 [INFO][4268] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" host="localhost"
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.791 [INFO][4268] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" host="localhost"
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.791 [INFO][4268] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" host="localhost"
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.791 [INFO][4268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:43:58.812956 containerd[1432]: 2024-11-12 17:43:58.791 [INFO][4268] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" HandleID="k8s-pod-network.0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" Workload="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:43:58.813610 containerd[1432]: 2024-11-12 17:43:58.794 [INFO][4218] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" Namespace="calico-system" Pod="csi-node-driver-g2dwk" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2dwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g2dwk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b411254a-fa39-4c2a-ae0e-e271a38a0ca1", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-g2dwk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d691fbac35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:43:58.813610 containerd[1432]: 2024-11-12 17:43:58.794 [INFO][4218] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" Namespace="calico-system" Pod="csi-node-driver-g2dwk" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:43:58.813610 containerd[1432]: 2024-11-12 17:43:58.794 [INFO][4218] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d691fbac35 ContainerID="0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" Namespace="calico-system" Pod="csi-node-driver-g2dwk" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:43:58.813610 containerd[1432]: 2024-11-12 17:43:58.796 [INFO][4218] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" Namespace="calico-system" Pod="csi-node-driver-g2dwk" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:43:58.813610 containerd[1432]: 2024-11-12 17:43:58.797 [INFO][4218] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" Namespace="calico-system" Pod="csi-node-driver-g2dwk" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2dwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g2dwk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b411254a-fa39-4c2a-ae0e-e271a38a0ca1", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815", Pod:"csi-node-driver-g2dwk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d691fbac35", MAC:"7e:3f:d1:e8:a8:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:43:58.813610 containerd[1432]: 2024-11-12 17:43:58.810 [INFO][4218] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815" Namespace="calico-system" Pod="csi-node-driver-g2dwk" WorkloadEndpoint="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:43:58.837130 containerd[1432]: time="2024-11-12T17:43:58.836748731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:43:58.837454 containerd[1432]: time="2024-11-12T17:43:58.837293168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:43:58.837454 containerd[1432]: time="2024-11-12T17:43:58.837328495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:58.837550 containerd[1432]: time="2024-11-12T17:43:58.837440159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:58.864938 systemd[1]: Started cri-containerd-0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815.scope - libcontainer container 0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815.
Nov 12 17:43:58.875208 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 12 17:43:58.884274 containerd[1432]: time="2024-11-12T17:43:58.884226221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g2dwk,Uid:b411254a-fa39-4c2a-ae0e-e271a38a0ca1,Namespace:calico-system,Attempt:1,} returns sandbox id \"0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815\""
Nov 12 17:43:58.937096 systemd-networkd[1362]: calibb61417da7c: Link UP
Nov 12 17:43:58.937258 systemd-networkd[1362]: calibb61417da7c: Gained carrier
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.687 [INFO][4231] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.716 [INFO][4231] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0 coredns-6f6b679f8f- kube-system  53e0c452-1122-4c00-814a-21a5b2fcb5be 957 0 2024-11-12 17:43:28 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  localhost  coredns-6f6b679f8f-9q5zc eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] calibb61417da7c  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" Namespace="kube-system" Pod="coredns-6f6b679f8f-9q5zc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9q5zc-"
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.716 [INFO][4231] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" Namespace="kube-system" Pod="coredns-6f6b679f8f-9q5zc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.768 [INFO][4285] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" HandleID="k8s-pod-network.b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" Workload="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.782 [INFO][4285] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" HandleID="k8s-pod-network.b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" Workload="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032c2a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-9q5zc", "timestamp":"2024-11-12 17:43:58.768168001 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.782 [INFO][4285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.791 [INFO][4285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.791 [INFO][4285] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.867 [INFO][4285] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" host="localhost"
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.910 [INFO][4285] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.914 [INFO][4285] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.916 [INFO][4285] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.919 [INFO][4285] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.919 [INFO][4285] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" host="localhost"
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.920 [INFO][4285] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.926 [INFO][4285] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" host="localhost"
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.932 [INFO][4285] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" host="localhost"
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.932 [INFO][4285] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" host="localhost"
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.933 [INFO][4285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:43:58.947591 containerd[1432]: 2024-11-12 17:43:58.933 [INFO][4285] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" HandleID="k8s-pod-network.b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" Workload="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:43:58.948170 containerd[1432]: 2024-11-12 17:43:58.935 [INFO][4231] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" Namespace="kube-system" Pod="coredns-6f6b679f8f-9q5zc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"53e0c452-1122-4c00-814a-21a5b2fcb5be", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-9q5zc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb61417da7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:43:58.948170 containerd[1432]: 2024-11-12 17:43:58.935 [INFO][4231] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" Namespace="kube-system" Pod="coredns-6f6b679f8f-9q5zc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:43:58.948170 containerd[1432]: 2024-11-12 17:43:58.935 [INFO][4231] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb61417da7c ContainerID="b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" Namespace="kube-system" Pod="coredns-6f6b679f8f-9q5zc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:43:58.948170 containerd[1432]: 2024-11-12 17:43:58.937 [INFO][4231] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" Namespace="kube-system" Pod="coredns-6f6b679f8f-9q5zc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:43:58.948170 containerd[1432]: 2024-11-12 17:43:58.937 [INFO][4231] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" Namespace="kube-system" Pod="coredns-6f6b679f8f-9q5zc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"53e0c452-1122-4c00-814a-21a5b2fcb5be", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da", Pod:"coredns-6f6b679f8f-9q5zc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb61417da7c", MAC:"9e:26:2c:e8:cd:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:43:58.948170 containerd[1432]: 2024-11-12 17:43:58.945 [INFO][4231] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da" Namespace="kube-system" Pod="coredns-6f6b679f8f-9q5zc" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:43:58.956308 systemd-networkd[1362]: cali6811d14c6a5: Gained IPv6LL
Nov 12 17:43:58.968498 containerd[1432]: time="2024-11-12T17:43:58.968097986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:43:58.968498 containerd[1432]: time="2024-11-12T17:43:58.968146956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:43:58.968498 containerd[1432]: time="2024-11-12T17:43:58.968169521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:58.968498 containerd[1432]: time="2024-11-12T17:43:58.968250298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:58.987914 systemd[1]: Started cri-containerd-b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da.scope - libcontainer container b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da.
Nov 12 17:43:59.003443 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 12 17:43:59.021160 containerd[1432]: time="2024-11-12T17:43:59.021007844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9q5zc,Uid:53e0c452-1122-4c00-814a-21a5b2fcb5be,Namespace:kube-system,Attempt:1,} returns sandbox id \"b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da\""
Nov 12 17:43:59.022155 kubelet[2454]: E1112 17:43:59.021707    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:59.024754 containerd[1432]: time="2024-11-12T17:43:59.024002428Z" level=info msg="CreateContainer within sandbox \"b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 12 17:43:59.055024 systemd-networkd[1362]: calibe31829fa78: Link UP
Nov 12 17:43:59.055582 systemd-networkd[1362]: calibe31829fa78: Gained carrier
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:58.735 [INFO][4251] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:58.759 [INFO][4251] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0 calico-apiserver-6548764f9d- calico-apiserver  3e30889a-e694-4689-92d5-cf89f334a65b 955 0 2024-11-12 17:43:35 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6548764f9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  localhost  calico-apiserver-6548764f9d-j6twf eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe31829fa78  [] []}} ContainerID="e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-j6twf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--j6twf-"
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:58.759 [INFO][4251] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-j6twf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:58.793 [INFO][4295] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" HandleID="k8s-pod-network.e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" Workload="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:58.909 [INFO][4295] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" HandleID="k8s-pod-network.e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" Workload="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c680), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6548764f9d-j6twf", "timestamp":"2024-11-12 17:43:58.793178198 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:58.909 [INFO][4295] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:58.933 [INFO][4295] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:58.933 [INFO][4295] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:58.968 [INFO][4295] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" host="localhost"
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:59.011 [INFO][4295] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:59.019 [INFO][4295] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:59.025 [INFO][4295] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:59.029 [INFO][4295] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:59.029 [INFO][4295] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" host="localhost"
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:59.032 [INFO][4295] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:59.037 [INFO][4295] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" host="localhost"
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:59.048 [INFO][4295] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" host="localhost"
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:59.048 [INFO][4295] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" host="localhost"
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:59.049 [INFO][4295] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:43:59.071531 containerd[1432]: 2024-11-12 17:43:59.049 [INFO][4295] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" HandleID="k8s-pod-network.e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" Workload="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:43:59.072662 containerd[1432]: 2024-11-12 17:43:59.051 [INFO][4251] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-j6twf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0", GenerateName:"calico-apiserver-6548764f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e30889a-e694-4689-92d5-cf89f334a65b", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6548764f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6548764f9d-j6twf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe31829fa78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:43:59.072662 containerd[1432]: 2024-11-12 17:43:59.052 [INFO][4251] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-j6twf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:43:59.072662 containerd[1432]: 2024-11-12 17:43:59.052 [INFO][4251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe31829fa78 ContainerID="e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-j6twf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:43:59.072662 containerd[1432]: 2024-11-12 17:43:59.055 [INFO][4251] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-j6twf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:43:59.072662 containerd[1432]: 2024-11-12 17:43:59.056 [INFO][4251] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-j6twf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0", GenerateName:"calico-apiserver-6548764f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e30889a-e694-4689-92d5-cf89f334a65b", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6548764f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70", Pod:"calico-apiserver-6548764f9d-j6twf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe31829fa78", MAC:"ae:56:c1:37:04:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:43:59.072662 containerd[1432]: 2024-11-12 17:43:59.069 [INFO][4251] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-j6twf" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:43:59.072662 containerd[1432]: time="2024-11-12T17:43:59.072133305Z" level=info msg="CreateContainer within sandbox \"b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4ea8d1fa357ea670fcbb1a7c1f16ce4165f2dc47a31961f3e515c9cc4f1da1be\""
Nov 12 17:43:59.073938 containerd[1432]: time="2024-11-12T17:43:59.073089744Z" level=info msg="StartContainer for \"4ea8d1fa357ea670fcbb1a7c1f16ce4165f2dc47a31961f3e515c9cc4f1da1be\""
Nov 12 17:43:59.098917 systemd[1]: Started cri-containerd-4ea8d1fa357ea670fcbb1a7c1f16ce4165f2dc47a31961f3e515c9cc4f1da1be.scope - libcontainer container 4ea8d1fa357ea670fcbb1a7c1f16ce4165f2dc47a31961f3e515c9cc4f1da1be.
Nov 12 17:43:59.125277 containerd[1432]: time="2024-11-12T17:43:59.125161923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:43:59.125277 containerd[1432]: time="2024-11-12T17:43:59.125214854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:43:59.125277 containerd[1432]: time="2024-11-12T17:43:59.125231177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:59.125492 containerd[1432]: time="2024-11-12T17:43:59.125423097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:59.132645 containerd[1432]: time="2024-11-12T17:43:59.132602074Z" level=info msg="StartContainer for \"4ea8d1fa357ea670fcbb1a7c1f16ce4165f2dc47a31961f3e515c9cc4f1da1be\" returns successfully"
Nov 12 17:43:59.152968 systemd[1]: Started cri-containerd-e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70.scope - libcontainer container e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70.
Nov 12 17:43:59.173704 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 12 17:43:59.193638 containerd[1432]: time="2024-11-12T17:43:59.193591592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6548764f9d-j6twf,Uid:3e30889a-e694-4689-92d5-cf89f334a65b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70\""
Nov 12 17:43:59.444562 containerd[1432]: time="2024-11-12T17:43:59.444220575Z" level=info msg="StopPodSandbox for \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\""
Nov 12 17:43:59.523746 kernel: bpftool[4542]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Nov 12 17:43:59.588179 containerd[1432]: 2024-11-12 17:43:59.521 [INFO][4526] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:43:59.588179 containerd[1432]: 2024-11-12 17:43:59.523 [INFO][4526] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" iface="eth0" netns="/var/run/netns/cni-50000bcb-82ab-06f3-3ead-ae2c26c18106"
Nov 12 17:43:59.588179 containerd[1432]: 2024-11-12 17:43:59.524 [INFO][4526] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" iface="eth0" netns="/var/run/netns/cni-50000bcb-82ab-06f3-3ead-ae2c26c18106"
Nov 12 17:43:59.588179 containerd[1432]: 2024-11-12 17:43:59.524 [INFO][4526] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" iface="eth0" netns="/var/run/netns/cni-50000bcb-82ab-06f3-3ead-ae2c26c18106"
Nov 12 17:43:59.588179 containerd[1432]: 2024-11-12 17:43:59.524 [INFO][4526] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:43:59.588179 containerd[1432]: 2024-11-12 17:43:59.524 [INFO][4526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:43:59.588179 containerd[1432]: 2024-11-12 17:43:59.568 [INFO][4543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" HandleID="k8s-pod-network.fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" Workload="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:43:59.588179 containerd[1432]: 2024-11-12 17:43:59.568 [INFO][4543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:43:59.588179 containerd[1432]: 2024-11-12 17:43:59.568 [INFO][4543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:43:59.588179 containerd[1432]: 2024-11-12 17:43:59.580 [WARNING][4543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" HandleID="k8s-pod-network.fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" Workload="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:43:59.588179 containerd[1432]: 2024-11-12 17:43:59.580 [INFO][4543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" HandleID="k8s-pod-network.fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" Workload="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:43:59.588179 containerd[1432]: 2024-11-12 17:43:59.582 [INFO][4543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:43:59.588179 containerd[1432]: 2024-11-12 17:43:59.583 [INFO][4526] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:43:59.588950 containerd[1432]: time="2024-11-12T17:43:59.588623367Z" level=info msg="TearDown network for sandbox \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\" successfully"
Nov 12 17:43:59.588950 containerd[1432]: time="2024-11-12T17:43:59.588651253Z" level=info msg="StopPodSandbox for \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\" returns successfully"
Nov 12 17:43:59.589781 containerd[1432]: time="2024-11-12T17:43:59.589563403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6548764f9d-nfm69,Uid:47b1c060-5f96-4d3d-854b-cd0f2891eab7,Namespace:calico-apiserver,Attempt:1,}"
Nov 12 17:43:59.600359 containerd[1432]: time="2024-11-12T17:43:59.600311725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=31961371"
Nov 12 17:43:59.604368 containerd[1432]: time="2024-11-12T17:43:59.604314880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:59.605147 containerd[1432]: time="2024-11-12T17:43:59.605114606Z" level=info msg="ImageCreate event name:\"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:59.610626 containerd[1432]: time="2024-11-12T17:43:59.610267881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:43:59.617854 containerd[1432]: time="2024-11-12T17:43:59.617808613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"33330975\" in 1.631003748s"
Nov 12 17:43:59.617854 containerd[1432]: time="2024-11-12T17:43:59.617855503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\""
Nov 12 17:43:59.619922 containerd[1432]: time="2024-11-12T17:43:59.619687885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\""
Nov 12 17:43:59.628302 containerd[1432]: time="2024-11-12T17:43:59.628264754Z" level=info msg="CreateContainer within sandbox \"95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Nov 12 17:43:59.637468 kubelet[2454]: E1112 17:43:59.637246    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:43:59.649358 containerd[1432]: time="2024-11-12T17:43:59.649165672Z" level=info msg="CreateContainer within sandbox \"95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1e6ce33f830dde989109a0e1289d3608d0e137271bc890037a5aeb233676b458\""
Nov 12 17:43:59.652587 containerd[1432]: time="2024-11-12T17:43:59.652549178Z" level=info msg="StartContainer for \"1e6ce33f830dde989109a0e1289d3608d0e137271bc890037a5aeb233676b458\""
Nov 12 17:43:59.682122 kubelet[2454]: I1112 17:43:59.664187    2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9q5zc" podStartSLOduration=31.664159919 podStartE2EDuration="31.664159919s" podCreationTimestamp="2024-11-12 17:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:43:59.651096475 +0000 UTC m=+39.318845058" watchObservedRunningTime="2024-11-12 17:43:59.664159919 +0000 UTC m=+39.331908502"
Nov 12 17:43:59.738599 systemd[1]: run-netns-cni\x2d50000bcb\x2d82ab\x2d06f3\x2d3ead\x2dae2c26c18106.mount: Deactivated successfully.
Nov 12 17:43:59.742623 systemd-networkd[1362]: vxlan.calico: Link UP
Nov 12 17:43:59.742631 systemd-networkd[1362]: vxlan.calico: Gained carrier
Nov 12 17:43:59.748970 systemd[1]: Started cri-containerd-1e6ce33f830dde989109a0e1289d3608d0e137271bc890037a5aeb233676b458.scope - libcontainer container 1e6ce33f830dde989109a0e1289d3608d0e137271bc890037a5aeb233676b458.
Nov 12 17:43:59.803065 systemd-networkd[1362]: califcdcd61e342: Link UP
Nov 12 17:43:59.803785 systemd-networkd[1362]: califcdcd61e342: Gained carrier
Nov 12 17:43:59.807489 containerd[1432]: time="2024-11-12T17:43:59.807279083Z" level=info msg="StartContainer for \"1e6ce33f830dde989109a0e1289d3608d0e137271bc890037a5aeb233676b458\" returns successfully"
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.661 [INFO][4552] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0 calico-apiserver-6548764f9d- calico-apiserver  47b1c060-5f96-4d3d-854b-cd0f2891eab7 980 0 2024-11-12 17:43:35 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6548764f9d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  localhost  calico-apiserver-6548764f9d-nfm69 eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califcdcd61e342  [] []}} ContainerID="aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-nfm69" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--nfm69-"
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.662 [INFO][4552] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-nfm69" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.708 [INFO][4590] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" HandleID="k8s-pod-network.aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" Workload="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.724 [INFO][4590] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" HandleID="k8s-pod-network.aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" Workload="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031f690), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6548764f9d-nfm69", "timestamp":"2024-11-12 17:43:59.708698206 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.724 [INFO][4590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.725 [INFO][4590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.725 [INFO][4590] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.728 [INFO][4590] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" host="localhost"
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.746 [INFO][4590] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.752 [INFO][4590] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.755 [INFO][4590] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.759 [INFO][4590] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.759 [INFO][4590] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" host="localhost"
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.767 [INFO][4590] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.780 [INFO][4590] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" host="localhost"
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.794 [INFO][4590] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" host="localhost"
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.795 [INFO][4590] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" host="localhost"
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.795 [INFO][4590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:43:59.818581 containerd[1432]: 2024-11-12 17:43:59.795 [INFO][4590] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" HandleID="k8s-pod-network.aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" Workload="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:43:59.819247 containerd[1432]: 2024-11-12 17:43:59.797 [INFO][4552] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-nfm69" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0", GenerateName:"calico-apiserver-6548764f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"47b1c060-5f96-4d3d-854b-cd0f2891eab7", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6548764f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6548764f9d-nfm69", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcdcd61e342", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:43:59.819247 containerd[1432]: 2024-11-12 17:43:59.797 [INFO][4552] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-nfm69" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:43:59.819247 containerd[1432]: 2024-11-12 17:43:59.798 [INFO][4552] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califcdcd61e342 ContainerID="aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-nfm69" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:43:59.819247 containerd[1432]: 2024-11-12 17:43:59.804 [INFO][4552] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-nfm69" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:43:59.819247 containerd[1432]: 2024-11-12 17:43:59.804 [INFO][4552] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-nfm69" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0", GenerateName:"calico-apiserver-6548764f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"47b1c060-5f96-4d3d-854b-cd0f2891eab7", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6548764f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab", Pod:"calico-apiserver-6548764f9d-nfm69", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcdcd61e342", MAC:"be:18:3b:02:f6:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:43:59.819247 containerd[1432]: 2024-11-12 17:43:59.816 [INFO][4552] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab" Namespace="calico-apiserver" Pod="calico-apiserver-6548764f9d-nfm69" WorkloadEndpoint="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:43:59.862807 containerd[1432]: time="2024-11-12T17:43:59.859878572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:43:59.862807 containerd[1432]: time="2024-11-12T17:43:59.862773335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:43:59.863130 containerd[1432]: time="2024-11-12T17:43:59.862788339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:59.863130 containerd[1432]: time="2024-11-12T17:43:59.863082960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:43:59.891935 systemd[1]: Started cri-containerd-aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab.scope - libcontainer container aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab.
Nov 12 17:43:59.914841 systemd-networkd[1362]: cali2d691fbac35: Gained IPv6LL
Nov 12 17:43:59.947775 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 12 17:43:59.971923 containerd[1432]: time="2024-11-12T17:43:59.971880247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6548764f9d-nfm69,Uid:47b1c060-5f96-4d3d-854b-cd0f2891eab7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab\""
Nov 12 17:44:00.041861 systemd-networkd[1362]: calibb61417da7c: Gained IPv6LL
Nov 12 17:44:00.444492 containerd[1432]: time="2024-11-12T17:44:00.444420147Z" level=info msg="StopPodSandbox for \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\""
Nov 12 17:44:00.526920 containerd[1432]: 2024-11-12 17:44:00.490 [INFO][4768] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:44:00.526920 containerd[1432]: 2024-11-12 17:44:00.491 [INFO][4768] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" iface="eth0" netns="/var/run/netns/cni-a1d7193c-37e3-d4bb-cc61-c83bbd2b70ae"
Nov 12 17:44:00.526920 containerd[1432]: 2024-11-12 17:44:00.491 [INFO][4768] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" iface="eth0" netns="/var/run/netns/cni-a1d7193c-37e3-d4bb-cc61-c83bbd2b70ae"
Nov 12 17:44:00.526920 containerd[1432]: 2024-11-12 17:44:00.491 [INFO][4768] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" iface="eth0" netns="/var/run/netns/cni-a1d7193c-37e3-d4bb-cc61-c83bbd2b70ae"
Nov 12 17:44:00.526920 containerd[1432]: 2024-11-12 17:44:00.491 [INFO][4768] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:44:00.526920 containerd[1432]: 2024-11-12 17:44:00.491 [INFO][4768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:44:00.526920 containerd[1432]: 2024-11-12 17:44:00.513 [INFO][4776] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" HandleID="k8s-pod-network.1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" Workload="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:00.526920 containerd[1432]: 2024-11-12 17:44:00.513 [INFO][4776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:00.526920 containerd[1432]: 2024-11-12 17:44:00.513 [INFO][4776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:00.526920 containerd[1432]: 2024-11-12 17:44:00.522 [WARNING][4776] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" HandleID="k8s-pod-network.1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" Workload="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:00.526920 containerd[1432]: 2024-11-12 17:44:00.522 [INFO][4776] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" HandleID="k8s-pod-network.1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" Workload="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:00.526920 containerd[1432]: 2024-11-12 17:44:00.523 [INFO][4776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:00.526920 containerd[1432]: 2024-11-12 17:44:00.525 [INFO][4768] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:44:00.529175 systemd[1]: run-netns-cni\x2da1d7193c\x2d37e3\x2dd4bb\x2dcc61\x2dc83bbd2b70ae.mount: Deactivated successfully.
Nov 12 17:44:00.529896 containerd[1432]: time="2024-11-12T17:44:00.529860670Z" level=info msg="TearDown network for sandbox \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\" successfully"
Nov 12 17:44:00.529896 containerd[1432]: time="2024-11-12T17:44:00.529893917Z" level=info msg="StopPodSandbox for \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\" returns successfully"
Nov 12 17:44:00.530233 kubelet[2454]: E1112 17:44:00.530212    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:44:00.530940 containerd[1432]: time="2024-11-12T17:44:00.530580336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-68v88,Uid:98f3ae8e-274c-478a-a46c-2a1f05e70b20,Namespace:kube-system,Attempt:1,}"
Nov 12 17:44:00.648727 kubelet[2454]: E1112 17:44:00.648682    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:44:00.659082 kubelet[2454]: I1112 17:44:00.659029    2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68bb8ff95b-ztb6p" podStartSLOduration=24.026512371 podStartE2EDuration="25.659012196s" podCreationTimestamp="2024-11-12 17:43:35 +0000 UTC" firstStartedPulling="2024-11-12 17:43:57.986383413 +0000 UTC m=+37.654131956" lastFinishedPulling="2024-11-12 17:43:59.618883198 +0000 UTC m=+39.286631781" observedRunningTime="2024-11-12 17:44:00.657742498 +0000 UTC m=+40.325491081" watchObservedRunningTime="2024-11-12 17:44:00.659012196 +0000 UTC m=+40.326760739"
Nov 12 17:44:00.676840 systemd-networkd[1362]: cali39adbfce4fd: Link UP
Nov 12 17:44:00.677025 systemd-networkd[1362]: cali39adbfce4fd: Gained carrier
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.582 [INFO][4785] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--68v88-eth0 coredns-6f6b679f8f- kube-system  98f3ae8e-274c-478a-a46c-2a1f05e70b20 1012 0 2024-11-12 17:43:28 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  localhost  coredns-6f6b679f8f-68v88 eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali39adbfce4fd  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" Namespace="kube-system" Pod="coredns-6f6b679f8f-68v88" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--68v88-"
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.582 [INFO][4785] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" Namespace="kube-system" Pod="coredns-6f6b679f8f-68v88" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.622 [INFO][4799] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" HandleID="k8s-pod-network.8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" Workload="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.633 [INFO][4799] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" HandleID="k8s-pod-network.8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" Workload="localhost-k8s-coredns--6f6b679f8f--68v88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000292930), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-68v88", "timestamp":"2024-11-12 17:44:00.622649606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.633 [INFO][4799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.633 [INFO][4799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.633 [INFO][4799] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.635 [INFO][4799] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" host="localhost"
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.640 [INFO][4799] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.648 [INFO][4799] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.651 [INFO][4799] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.655 [INFO][4799] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.655 [INFO][4799] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" host="localhost"
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.657 [INFO][4799] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.663 [INFO][4799] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" host="localhost"
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.670 [INFO][4799] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" host="localhost"
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.670 [INFO][4799] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" host="localhost"
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.670 [INFO][4799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:00.692643 containerd[1432]: 2024-11-12 17:44:00.670 [INFO][4799] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" HandleID="k8s-pod-network.8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" Workload="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:00.693824 containerd[1432]: 2024-11-12 17:44:00.673 [INFO][4785] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" Namespace="kube-system" Pod="coredns-6f6b679f8f-68v88" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--68v88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--68v88-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"98f3ae8e-274c-478a-a46c-2a1f05e70b20", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-68v88", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39adbfce4fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:00.693824 containerd[1432]: 2024-11-12 17:44:00.673 [INFO][4785] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" Namespace="kube-system" Pod="coredns-6f6b679f8f-68v88" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:00.693824 containerd[1432]: 2024-11-12 17:44:00.673 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39adbfce4fd ContainerID="8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" Namespace="kube-system" Pod="coredns-6f6b679f8f-68v88" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:00.693824 containerd[1432]: 2024-11-12 17:44:00.677 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" Namespace="kube-system" Pod="coredns-6f6b679f8f-68v88" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:00.693824 containerd[1432]: 2024-11-12 17:44:00.677 [INFO][4785] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" Namespace="kube-system" Pod="coredns-6f6b679f8f-68v88" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--68v88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--68v88-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"98f3ae8e-274c-478a-a46c-2a1f05e70b20", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3", Pod:"coredns-6f6b679f8f-68v88", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39adbfce4fd", MAC:"42:5a:e4:c8:35:67", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:00.693824 containerd[1432]: 2024-11-12 17:44:00.687 [INFO][4785] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3" Namespace="kube-system" Pod="coredns-6f6b679f8f-68v88" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:00.710655 containerd[1432]: time="2024-11-12T17:44:00.710487416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:44:00.710655 containerd[1432]: time="2024-11-12T17:44:00.710565032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:44:00.710655 containerd[1432]: time="2024-11-12T17:44:00.710589957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:44:00.711264 containerd[1432]: time="2024-11-12T17:44:00.710738747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:44:00.729881 systemd[1]: Started cri-containerd-8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3.scope - libcontainer container 8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3.
Nov 12 17:44:00.741024 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 12 17:44:00.759823 containerd[1432]: time="2024-11-12T17:44:00.759781753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-68v88,Uid:98f3ae8e-274c-478a-a46c-2a1f05e70b20,Namespace:kube-system,Attempt:1,} returns sandbox id \"8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3\""
Nov 12 17:44:00.760651 kubelet[2454]: E1112 17:44:00.760630    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:44:00.762299 containerd[1432]: time="2024-11-12T17:44:00.762268259Z" level=info msg="CreateContainer within sandbox \"8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 12 17:44:00.765019 containerd[1432]: time="2024-11-12T17:44:00.764971608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:44:00.767758 containerd[1432]: time="2024-11-12T17:44:00.767710045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7464731"
Nov 12 17:44:00.768702 containerd[1432]: time="2024-11-12T17:44:00.768647195Z" level=info msg="ImageCreate event name:\"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:44:00.772191 containerd[1432]: time="2024-11-12T17:44:00.772141065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:44:00.773211 containerd[1432]: time="2024-11-12T17:44:00.773169554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"8834367\" in 1.153423376s"
Nov 12 17:44:00.773211 containerd[1432]: time="2024-11-12T17:44:00.773206401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\""
Nov 12 17:44:00.774884 containerd[1432]: time="2024-11-12T17:44:00.774852656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\""
Nov 12 17:44:00.777017 containerd[1432]: time="2024-11-12T17:44:00.776973407Z" level=info msg="CreateContainer within sandbox \"0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Nov 12 17:44:00.793569 containerd[1432]: time="2024-11-12T17:44:00.793525731Z" level=info msg="CreateContainer within sandbox \"8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66a2e733b999687b586f6ede51d9191ee6a947369cf2f384b5eb912fba6cc08a\""
Nov 12 17:44:00.794687 containerd[1432]: time="2024-11-12T17:44:00.794658401Z" level=info msg="StartContainer for \"66a2e733b999687b586f6ede51d9191ee6a947369cf2f384b5eb912fba6cc08a\""
Nov 12 17:44:00.809806 systemd-networkd[1362]: calibe31829fa78: Gained IPv6LL
Nov 12 17:44:00.821613 containerd[1432]: time="2024-11-12T17:44:00.821504216Z" level=info msg="CreateContainer within sandbox \"0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"517e87f12b7a2983cc745c5c94476c29a02be13b79bf86da49752be6af2c60f4\""
Nov 12 17:44:00.821887 systemd[1]: Started cri-containerd-66a2e733b999687b586f6ede51d9191ee6a947369cf2f384b5eb912fba6cc08a.scope - libcontainer container 66a2e733b999687b586f6ede51d9191ee6a947369cf2f384b5eb912fba6cc08a.
Nov 12 17:44:00.822653 containerd[1432]: time="2024-11-12T17:44:00.822515182Z" level=info msg="StartContainer for \"517e87f12b7a2983cc745c5c94476c29a02be13b79bf86da49752be6af2c60f4\""
Nov 12 17:44:00.847539 containerd[1432]: time="2024-11-12T17:44:00.847500059Z" level=info msg="StartContainer for \"66a2e733b999687b586f6ede51d9191ee6a947369cf2f384b5eb912fba6cc08a\" returns successfully"
Nov 12 17:44:00.851891 systemd[1]: Started cri-containerd-517e87f12b7a2983cc745c5c94476c29a02be13b79bf86da49752be6af2c60f4.scope - libcontainer container 517e87f12b7a2983cc745c5c94476c29a02be13b79bf86da49752be6af2c60f4.
Nov 12 17:44:00.875854 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL
Nov 12 17:44:00.892500 containerd[1432]: time="2024-11-12T17:44:00.892434390Z" level=info msg="StartContainer for \"517e87f12b7a2983cc745c5c94476c29a02be13b79bf86da49752be6af2c60f4\" returns successfully"
Nov 12 17:44:01.129855 systemd-networkd[1362]: califcdcd61e342: Gained IPv6LL
Nov 12 17:44:01.653925 kubelet[2454]: I1112 17:44:01.653373    2454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 17:44:01.653925 kubelet[2454]: E1112 17:44:01.653638    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:44:01.654273 kubelet[2454]: E1112 17:44:01.654034    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:44:01.667805 kubelet[2454]: I1112 17:44:01.667450    2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-68v88" podStartSLOduration=33.667432604 podStartE2EDuration="33.667432604s" podCreationTimestamp="2024-11-12 17:43:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:44:01.666262813 +0000 UTC m=+41.334011436" watchObservedRunningTime="2024-11-12 17:44:01.667432604 +0000 UTC m=+41.335181187"
Nov 12 17:44:02.025887 systemd-networkd[1362]: cali39adbfce4fd: Gained IPv6LL
Nov 12 17:44:02.640784 containerd[1432]: time="2024-11-12T17:44:02.640732853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:44:02.641552 containerd[1432]: time="2024-11-12T17:44:02.641512724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=39277239"
Nov 12 17:44:02.646485 containerd[1432]: time="2024-11-12T17:44:02.646453280Z" level=info msg="ImageCreate event name:\"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:44:02.649278 containerd[1432]: time="2024-11-12T17:44:02.649228457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:44:02.650389 containerd[1432]: time="2024-11-12T17:44:02.650355435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"40646891\" in 1.875470373s"
Nov 12 17:44:02.650442 containerd[1432]: time="2024-11-12T17:44:02.650388722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\""
Nov 12 17:44:02.651194 containerd[1432]: time="2024-11-12T17:44:02.651171633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\""
Nov 12 17:44:02.652305 containerd[1432]: time="2024-11-12T17:44:02.652277287Z" level=info msg="CreateContainer within sandbox \"e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Nov 12 17:44:02.655216 kubelet[2454]: E1112 17:44:02.655189    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:44:02.665573 containerd[1432]: time="2024-11-12T17:44:02.665534493Z" level=info msg="CreateContainer within sandbox \"e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3c997df9c0ed2f9d4a4da0dfc3094c48f69b32a925531f77f0ac6086ecdc9aff\""
Nov 12 17:44:02.666390 containerd[1432]: time="2024-11-12T17:44:02.666298241Z" level=info msg="StartContainer for \"3c997df9c0ed2f9d4a4da0dfc3094c48f69b32a925531f77f0ac6086ecdc9aff\""
Nov 12 17:44:02.695896 systemd[1]: Started cri-containerd-3c997df9c0ed2f9d4a4da0dfc3094c48f69b32a925531f77f0ac6086ecdc9aff.scope - libcontainer container 3c997df9c0ed2f9d4a4da0dfc3094c48f69b32a925531f77f0ac6086ecdc9aff.
Nov 12 17:44:02.730077 containerd[1432]: time="2024-11-12T17:44:02.728517924Z" level=info msg="StartContainer for \"3c997df9c0ed2f9d4a4da0dfc3094c48f69b32a925531f77f0ac6086ecdc9aff\" returns successfully"
Nov 12 17:44:02.926870 systemd[1]: Started sshd@9-10.0.0.44:22-10.0.0.1:45046.service - OpenSSH per-connection server daemon (10.0.0.1:45046).
Nov 12 17:44:02.938842 containerd[1432]: time="2024-11-12T17:44:02.938337137Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:44:02.940299 containerd[1432]: time="2024-11-12T17:44:02.940265111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77"
Nov 12 17:44:02.942574 containerd[1432]: time="2024-11-12T17:44:02.942543192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"40646891\" in 291.340432ms"
Nov 12 17:44:02.942786 containerd[1432]: time="2024-11-12T17:44:02.942577118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\""
Nov 12 17:44:02.943764 containerd[1432]: time="2024-11-12T17:44:02.943570831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\""
Nov 12 17:44:02.944238 containerd[1432]: time="2024-11-12T17:44:02.944210114Z" level=info msg="CreateContainer within sandbox \"aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Nov 12 17:44:02.962441 containerd[1432]: time="2024-11-12T17:44:02.962396354Z" level=info msg="CreateContainer within sandbox \"aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e6900e273651b9fe287e4120fa90710403e0ae285a224f6732cee05446c34f69\""
Nov 12 17:44:02.964062 containerd[1432]: time="2024-11-12T17:44:02.962964864Z" level=info msg="StartContainer for \"e6900e273651b9fe287e4120fa90710403e0ae285a224f6732cee05446c34f69\""
Nov 12 17:44:02.997877 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 45046 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:44:02.999785 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:02.999942 systemd[1]: Started cri-containerd-e6900e273651b9fe287e4120fa90710403e0ae285a224f6732cee05446c34f69.scope - libcontainer container e6900e273651b9fe287e4120fa90710403e0ae285a224f6732cee05446c34f69.
Nov 12 17:44:03.004629 systemd-logind[1415]: New session 10 of user core.
Nov 12 17:44:03.010886 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 12 17:44:03.042154 containerd[1432]: time="2024-11-12T17:44:03.042112808Z" level=info msg="StartContainer for \"e6900e273651b9fe287e4120fa90710403e0ae285a224f6732cee05446c34f69\" returns successfully"
Nov 12 17:44:03.265579 sshd[5036]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:03.273080 systemd[1]: sshd@9-10.0.0.44:22-10.0.0.1:45046.service: Deactivated successfully.
Nov 12 17:44:03.274441 systemd[1]: session-10.scope: Deactivated successfully.
Nov 12 17:44:03.275771 systemd-logind[1415]: Session 10 logged out. Waiting for processes to exit.
Nov 12 17:44:03.277147 systemd[1]: Started sshd@10-10.0.0.44:22-10.0.0.1:45054.service - OpenSSH per-connection server daemon (10.0.0.1:45054).
Nov 12 17:44:03.278399 systemd-logind[1415]: Removed session 10.
Nov 12 17:44:03.317008 sshd[5089]: Accepted publickey for core from 10.0.0.1 port 45054 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:44:03.318379 sshd[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:03.325437 systemd-logind[1415]: New session 11 of user core.
Nov 12 17:44:03.331158 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 12 17:44:03.563709 sshd[5089]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:03.570203 systemd[1]: sshd@10-10.0.0.44:22-10.0.0.1:45054.service: Deactivated successfully.
Nov 12 17:44:03.574878 systemd[1]: session-11.scope: Deactivated successfully.
Nov 12 17:44:03.577399 systemd-logind[1415]: Session 11 logged out. Waiting for processes to exit.
Nov 12 17:44:03.591431 systemd[1]: Started sshd@11-10.0.0.44:22-10.0.0.1:45070.service - OpenSSH per-connection server daemon (10.0.0.1:45070).
Nov 12 17:44:03.593593 systemd-logind[1415]: Removed session 11.
Nov 12 17:44:03.625752 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 45070 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:44:03.626708 sshd[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:03.633500 systemd-logind[1415]: New session 12 of user core.
Nov 12 17:44:03.636904 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 12 17:44:03.663008 kubelet[2454]: E1112 17:44:03.661922    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:44:03.673189 kubelet[2454]: I1112 17:44:03.673134    2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6548764f9d-nfm69" podStartSLOduration=25.702911505 podStartE2EDuration="28.673109622s" podCreationTimestamp="2024-11-12 17:43:35 +0000 UTC" firstStartedPulling="2024-11-12 17:43:59.972999281 +0000 UTC m=+39.640747864" lastFinishedPulling="2024-11-12 17:44:02.943197398 +0000 UTC m=+42.610945981" observedRunningTime="2024-11-12 17:44:03.671942201 +0000 UTC m=+43.339690744" watchObservedRunningTime="2024-11-12 17:44:03.673109622 +0000 UTC m=+43.340858165"
Nov 12 17:44:03.807808 sshd[5104]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:03.811094 systemd[1]: sshd@11-10.0.0.44:22-10.0.0.1:45070.service: Deactivated successfully.
Nov 12 17:44:03.812769 systemd[1]: session-12.scope: Deactivated successfully.
Nov 12 17:44:03.813378 systemd-logind[1415]: Session 12 logged out. Waiting for processes to exit.
Nov 12 17:44:03.816884 systemd-logind[1415]: Removed session 12.
Nov 12 17:44:04.112214 containerd[1432]: time="2024-11-12T17:44:04.112157025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:44:04.113582 containerd[1432]: time="2024-11-12T17:44:04.113546842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=9883360"
Nov 12 17:44:04.114289 containerd[1432]: time="2024-11-12T17:44:04.114241970Z" level=info msg="ImageCreate event name:\"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:44:04.116608 containerd[1432]: time="2024-11-12T17:44:04.116574202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:44:04.117553 containerd[1432]: time="2024-11-12T17:44:04.117427880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11252948\" in 1.173825483s"
Nov 12 17:44:04.117553 containerd[1432]: time="2024-11-12T17:44:04.117462286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\""
Nov 12 17:44:04.119475 containerd[1432]: time="2024-11-12T17:44:04.119447694Z" level=info msg="CreateContainer within sandbox \"0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Nov 12 17:44:04.133620 containerd[1432]: time="2024-11-12T17:44:04.133503535Z" level=info msg="CreateContainer within sandbox \"0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c63047b594fb3f59ebcaf6f1afaea16a853740ab238abeaf2000838943ec5fba\""
Nov 12 17:44:04.135255 containerd[1432]: time="2024-11-12T17:44:04.133970942Z" level=info msg="StartContainer for \"c63047b594fb3f59ebcaf6f1afaea16a853740ab238abeaf2000838943ec5fba\""
Nov 12 17:44:04.170891 systemd[1]: Started cri-containerd-c63047b594fb3f59ebcaf6f1afaea16a853740ab238abeaf2000838943ec5fba.scope - libcontainer container c63047b594fb3f59ebcaf6f1afaea16a853740ab238abeaf2000838943ec5fba.
Nov 12 17:44:04.204083 containerd[1432]: time="2024-11-12T17:44:04.203416115Z" level=info msg="StartContainer for \"c63047b594fb3f59ebcaf6f1afaea16a853740ab238abeaf2000838943ec5fba\" returns successfully"
Nov 12 17:44:04.541020 kubelet[2454]: I1112 17:44:04.540867    2454 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Nov 12 17:44:04.543140 kubelet[2454]: I1112 17:44:04.543098    2454 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Nov 12 17:44:04.666115 kubelet[2454]: I1112 17:44:04.666075    2454 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 17:44:04.692251 kubelet[2454]: I1112 17:44:04.692181    2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-g2dwk" podStartSLOduration=24.459204751 podStartE2EDuration="29.692163811s" podCreationTimestamp="2024-11-12 17:43:35 +0000 UTC" firstStartedPulling="2024-11-12 17:43:58.885410714 +0000 UTC m=+38.553159297" lastFinishedPulling="2024-11-12 17:44:04.118369774 +0000 UTC m=+43.786118357" observedRunningTime="2024-11-12 17:44:04.691812626 +0000 UTC m=+44.359561169" watchObservedRunningTime="2024-11-12 17:44:04.692163811 +0000 UTC m=+44.359912434"
Nov 12 17:44:04.692412 kubelet[2454]: I1112 17:44:04.692297    2454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6548764f9d-j6twf" podStartSLOduration=26.236003061 podStartE2EDuration="29.692292435s" podCreationTimestamp="2024-11-12 17:43:35 +0000 UTC" firstStartedPulling="2024-11-12 17:43:59.194765317 +0000 UTC m=+38.862513900" lastFinishedPulling="2024-11-12 17:44:02.651054691 +0000 UTC m=+42.318803274" observedRunningTime="2024-11-12 17:44:03.688604673 +0000 UTC m=+43.356353256" watchObservedRunningTime="2024-11-12 17:44:04.692292435 +0000 UTC m=+44.360041058"
Nov 12 17:44:08.822272 systemd[1]: Started sshd@12-10.0.0.44:22-10.0.0.1:45080.service - OpenSSH per-connection server daemon (10.0.0.1:45080).
Nov 12 17:44:08.896988 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 45080 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:44:08.898592 sshd[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:08.903032 systemd-logind[1415]: New session 13 of user core.
Nov 12 17:44:08.916128 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 12 17:44:09.121620 sshd[5178]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:09.137419 systemd[1]: Started sshd@13-10.0.0.44:22-10.0.0.1:45084.service - OpenSSH per-connection server daemon (10.0.0.1:45084).
Nov 12 17:44:09.138202 systemd[1]: sshd@12-10.0.0.44:22-10.0.0.1:45080.service: Deactivated successfully.
Nov 12 17:44:09.140256 systemd[1]: session-13.scope: Deactivated successfully.
Nov 12 17:44:09.143143 systemd-logind[1415]: Session 13 logged out. Waiting for processes to exit.
Nov 12 17:44:09.145226 systemd-logind[1415]: Removed session 13.
Nov 12 17:44:09.176843 sshd[5190]: Accepted publickey for core from 10.0.0.1 port 45084 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:44:09.178184 sshd[5190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:09.182341 systemd-logind[1415]: New session 14 of user core.
Nov 12 17:44:09.193888 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 12 17:44:09.440357 sshd[5190]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:09.448381 systemd[1]: sshd@13-10.0.0.44:22-10.0.0.1:45084.service: Deactivated successfully.
Nov 12 17:44:09.451076 systemd[1]: session-14.scope: Deactivated successfully.
Nov 12 17:44:09.453481 systemd-logind[1415]: Session 14 logged out. Waiting for processes to exit.
Nov 12 17:44:09.466837 systemd[1]: Started sshd@14-10.0.0.44:22-10.0.0.1:45086.service - OpenSSH per-connection server daemon (10.0.0.1:45086).
Nov 12 17:44:09.468847 systemd-logind[1415]: Removed session 14.
Nov 12 17:44:09.507003 sshd[5205]: Accepted publickey for core from 10.0.0.1 port 45086 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:44:09.508408 sshd[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:09.512752 systemd-logind[1415]: New session 15 of user core.
Nov 12 17:44:09.524967 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 12 17:44:11.100899 sshd[5205]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:11.113193 systemd[1]: sshd@14-10.0.0.44:22-10.0.0.1:45086.service: Deactivated successfully.
Nov 12 17:44:11.118552 systemd[1]: session-15.scope: Deactivated successfully.
Nov 12 17:44:11.121856 systemd-logind[1415]: Session 15 logged out. Waiting for processes to exit.
Nov 12 17:44:11.132697 systemd[1]: Started sshd@15-10.0.0.44:22-10.0.0.1:45090.service - OpenSSH per-connection server daemon (10.0.0.1:45090).
Nov 12 17:44:11.135038 systemd-logind[1415]: Removed session 15.
Nov 12 17:44:11.172857 sshd[5239]: Accepted publickey for core from 10.0.0.1 port 45090 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:44:11.174088 sshd[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:11.177759 systemd-logind[1415]: New session 16 of user core.
Nov 12 17:44:11.184879 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 12 17:44:11.500086 sshd[5239]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:11.510319 systemd[1]: sshd@15-10.0.0.44:22-10.0.0.1:45090.service: Deactivated successfully.
Nov 12 17:44:11.514289 systemd[1]: session-16.scope: Deactivated successfully.
Nov 12 17:44:11.517552 systemd-logind[1415]: Session 16 logged out. Waiting for processes to exit.
Nov 12 17:44:11.525557 systemd[1]: Started sshd@16-10.0.0.44:22-10.0.0.1:45106.service - OpenSSH per-connection server daemon (10.0.0.1:45106).
Nov 12 17:44:11.526973 systemd-logind[1415]: Removed session 16.
Nov 12 17:44:11.557301 sshd[5252]: Accepted publickey for core from 10.0.0.1 port 45106 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:44:11.558530 sshd[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:11.562085 systemd-logind[1415]: New session 17 of user core.
Nov 12 17:44:11.571914 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 17:44:11.702056 sshd[5252]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:11.705373 systemd[1]: sshd@16-10.0.0.44:22-10.0.0.1:45106.service: Deactivated successfully.
Nov 12 17:44:11.708363 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 17:44:11.709520 systemd-logind[1415]: Session 17 logged out. Waiting for processes to exit.
Nov 12 17:44:11.710446 systemd-logind[1415]: Removed session 17.
Nov 12 17:44:16.723522 systemd[1]: Started sshd@17-10.0.0.44:22-10.0.0.1:59630.service - OpenSSH per-connection server daemon (10.0.0.1:59630).
Nov 12 17:44:16.763299 sshd[5269]: Accepted publickey for core from 10.0.0.1 port 59630 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:44:16.764832 sshd[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:16.769259 systemd-logind[1415]: New session 18 of user core.
Nov 12 17:44:16.776882 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 17:44:16.916935 sshd[5269]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:16.919882 systemd-logind[1415]: Session 18 logged out. Waiting for processes to exit.
Nov 12 17:44:16.920055 systemd[1]: sshd@17-10.0.0.44:22-10.0.0.1:59630.service: Deactivated successfully.
Nov 12 17:44:16.921703 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 17:44:16.923168 systemd-logind[1415]: Removed session 18.
Nov 12 17:44:18.433996 kubelet[2454]: E1112 17:44:18.433913    2454 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 12 17:44:20.437838 containerd[1432]: time="2024-11-12T17:44:20.437801982Z" level=info msg="StopPodSandbox for \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\""
Nov 12 17:44:20.514295 containerd[1432]: 2024-11-12 17:44:20.480 [WARNING][5326] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0", GenerateName:"calico-apiserver-6548764f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"47b1c060-5f96-4d3d-854b-cd0f2891eab7", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6548764f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab", Pod:"calico-apiserver-6548764f9d-nfm69", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcdcd61e342", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:20.514295 containerd[1432]: 2024-11-12 17:44:20.481 [INFO][5326] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:44:20.514295 containerd[1432]: 2024-11-12 17:44:20.481 [INFO][5326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" iface="eth0" netns=""
Nov 12 17:44:20.514295 containerd[1432]: 2024-11-12 17:44:20.481 [INFO][5326] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:44:20.514295 containerd[1432]: 2024-11-12 17:44:20.481 [INFO][5326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:44:20.514295 containerd[1432]: 2024-11-12 17:44:20.500 [INFO][5336] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" HandleID="k8s-pod-network.fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" Workload="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:44:20.514295 containerd[1432]: 2024-11-12 17:44:20.501 [INFO][5336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:20.514295 containerd[1432]: 2024-11-12 17:44:20.501 [INFO][5336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:20.514295 containerd[1432]: 2024-11-12 17:44:20.508 [WARNING][5336] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" HandleID="k8s-pod-network.fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" Workload="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:44:20.514295 containerd[1432]: 2024-11-12 17:44:20.508 [INFO][5336] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" HandleID="k8s-pod-network.fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" Workload="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:44:20.514295 containerd[1432]: 2024-11-12 17:44:20.510 [INFO][5336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:20.514295 containerd[1432]: 2024-11-12 17:44:20.512 [INFO][5326] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:44:20.514977 containerd[1432]: time="2024-11-12T17:44:20.514746252Z" level=info msg="TearDown network for sandbox \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\" successfully"
Nov 12 17:44:20.514977 containerd[1432]: time="2024-11-12T17:44:20.514772376Z" level=info msg="StopPodSandbox for \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\" returns successfully"
Nov 12 17:44:20.515244 containerd[1432]: time="2024-11-12T17:44:20.515216160Z" level=info msg="RemovePodSandbox for \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\""
Nov 12 17:44:20.523304 containerd[1432]: time="2024-11-12T17:44:20.523264130Z" level=info msg="Forcibly stopping sandbox \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\""
Nov 12 17:44:20.588788 containerd[1432]: 2024-11-12 17:44:20.557 [WARNING][5359] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0", GenerateName:"calico-apiserver-6548764f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"47b1c060-5f96-4d3d-854b-cd0f2891eab7", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6548764f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aca0e55214569cb17953e1f73f86ccc1c77ddc33771d12128fb261f7b2d84bab", Pod:"calico-apiserver-6548764f9d-nfm69", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcdcd61e342", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:20.588788 containerd[1432]: 2024-11-12 17:44:20.557 [INFO][5359] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:44:20.588788 containerd[1432]: 2024-11-12 17:44:20.557 [INFO][5359] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" iface="eth0" netns=""
Nov 12 17:44:20.588788 containerd[1432]: 2024-11-12 17:44:20.557 [INFO][5359] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:44:20.588788 containerd[1432]: 2024-11-12 17:44:20.557 [INFO][5359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:44:20.588788 containerd[1432]: 2024-11-12 17:44:20.575 [INFO][5366] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" HandleID="k8s-pod-network.fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" Workload="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:44:20.588788 containerd[1432]: 2024-11-12 17:44:20.575 [INFO][5366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:20.588788 containerd[1432]: 2024-11-12 17:44:20.575 [INFO][5366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:20.588788 containerd[1432]: 2024-11-12 17:44:20.582 [WARNING][5366] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" HandleID="k8s-pod-network.fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" Workload="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:44:20.588788 containerd[1432]: 2024-11-12 17:44:20.583 [INFO][5366] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" HandleID="k8s-pod-network.fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201" Workload="localhost-k8s-calico--apiserver--6548764f9d--nfm69-eth0"
Nov 12 17:44:20.588788 containerd[1432]: 2024-11-12 17:44:20.585 [INFO][5366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:20.588788 containerd[1432]: 2024-11-12 17:44:20.586 [INFO][5359] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201"
Nov 12 17:44:20.589188 containerd[1432]: time="2024-11-12T17:44:20.588825905Z" level=info msg="TearDown network for sandbox \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\" successfully"
Nov 12 17:44:20.602944 containerd[1432]: time="2024-11-12T17:44:20.602910593Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 17:44:20.603017 containerd[1432]: time="2024-11-12T17:44:20.602974242Z" level=info msg="RemovePodSandbox \"fd43521d6c6f93941b4e20fb1dce88a4f018054864c82de6a05f4486c604c201\" returns successfully"
Nov 12 17:44:20.603633 containerd[1432]: time="2024-11-12T17:44:20.603610255Z" level=info msg="StopPodSandbox for \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\""
Nov 12 17:44:20.682518 containerd[1432]: 2024-11-12 17:44:20.637 [WARNING][5389] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0", GenerateName:"calico-kube-controllers-68bb8ff95b-", Namespace:"calico-system", SelfLink:"", UID:"b7973464-e27c-437f-b721-54ead210e780", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bb8ff95b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80", Pod:"calico-kube-controllers-68bb8ff95b-ztb6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6811d14c6a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:20.682518 containerd[1432]: 2024-11-12 17:44:20.637 [INFO][5389] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:44:20.682518 containerd[1432]: 2024-11-12 17:44:20.637 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" iface="eth0" netns=""
Nov 12 17:44:20.682518 containerd[1432]: 2024-11-12 17:44:20.637 [INFO][5389] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:44:20.682518 containerd[1432]: 2024-11-12 17:44:20.637 [INFO][5389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:44:20.682518 containerd[1432]: 2024-11-12 17:44:20.662 [INFO][5396] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" HandleID="k8s-pod-network.72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" Workload="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:44:20.682518 containerd[1432]: 2024-11-12 17:44:20.663 [INFO][5396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:20.682518 containerd[1432]: 2024-11-12 17:44:20.663 [INFO][5396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:20.682518 containerd[1432]: 2024-11-12 17:44:20.674 [WARNING][5396] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" HandleID="k8s-pod-network.72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" Workload="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:44:20.682518 containerd[1432]: 2024-11-12 17:44:20.674 [INFO][5396] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" HandleID="k8s-pod-network.72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" Workload="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:44:20.682518 containerd[1432]: 2024-11-12 17:44:20.677 [INFO][5396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:20.682518 containerd[1432]: 2024-11-12 17:44:20.680 [INFO][5389] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:44:20.682518 containerd[1432]: time="2024-11-12T17:44:20.682501288Z" level=info msg="TearDown network for sandbox \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\" successfully"
Nov 12 17:44:20.682518 containerd[1432]: time="2024-11-12T17:44:20.682523651Z" level=info msg="StopPodSandbox for \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\" returns successfully"
Nov 12 17:44:20.683280 containerd[1432]: time="2024-11-12T17:44:20.682920429Z" level=info msg="RemovePodSandbox for \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\""
Nov 12 17:44:20.683280 containerd[1432]: time="2024-11-12T17:44:20.682952193Z" level=info msg="Forcibly stopping sandbox \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\""
Nov 12 17:44:20.772820 containerd[1432]: 2024-11-12 17:44:20.733 [WARNING][5418] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0", GenerateName:"calico-kube-controllers-68bb8ff95b-", Namespace:"calico-system", SelfLink:"", UID:"b7973464-e27c-437f-b721-54ead210e780", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68bb8ff95b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95d169201e9afbdbecc621b7edc0f04cefb11f0b1f6d2401377423c75ba41c80", Pod:"calico-kube-controllers-68bb8ff95b-ztb6p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6811d14c6a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:20.772820 containerd[1432]: 2024-11-12 17:44:20.735 [INFO][5418] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:44:20.772820 containerd[1432]: 2024-11-12 17:44:20.735 [INFO][5418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" iface="eth0" netns=""
Nov 12 17:44:20.772820 containerd[1432]: 2024-11-12 17:44:20.735 [INFO][5418] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:44:20.772820 containerd[1432]: 2024-11-12 17:44:20.735 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:44:20.772820 containerd[1432]: 2024-11-12 17:44:20.756 [INFO][5427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" HandleID="k8s-pod-network.72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" Workload="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:44:20.772820 containerd[1432]: 2024-11-12 17:44:20.756 [INFO][5427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:20.772820 containerd[1432]: 2024-11-12 17:44:20.756 [INFO][5427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:20.772820 containerd[1432]: 2024-11-12 17:44:20.766 [WARNING][5427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" HandleID="k8s-pod-network.72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" Workload="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:44:20.772820 containerd[1432]: 2024-11-12 17:44:20.766 [INFO][5427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" HandleID="k8s-pod-network.72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36" Workload="localhost-k8s-calico--kube--controllers--68bb8ff95b--ztb6p-eth0"
Nov 12 17:44:20.772820 containerd[1432]: 2024-11-12 17:44:20.767 [INFO][5427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:20.772820 containerd[1432]: 2024-11-12 17:44:20.771 [INFO][5418] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36"
Nov 12 17:44:20.772820 containerd[1432]: time="2024-11-12T17:44:20.772737170Z" level=info msg="TearDown network for sandbox \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\" successfully"
Nov 12 17:44:20.783863 containerd[1432]: time="2024-11-12T17:44:20.783824863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 17:44:20.783863 containerd[1432]: time="2024-11-12T17:44:20.783890913Z" level=info msg="RemovePodSandbox \"72d4e41f99974b98d24a6c5d1c8000d92d49574ce5c3b636a1582636a44c3f36\" returns successfully"
Nov 12 17:44:20.784329 containerd[1432]: time="2024-11-12T17:44:20.784294451Z" level=info msg="StopPodSandbox for \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\""
Nov 12 17:44:20.866842 containerd[1432]: 2024-11-12 17:44:20.826 [WARNING][5451] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0", GenerateName:"calico-apiserver-6548764f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e30889a-e694-4689-92d5-cf89f334a65b", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6548764f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70", Pod:"calico-apiserver-6548764f9d-j6twf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe31829fa78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:20.866842 containerd[1432]: 2024-11-12 17:44:20.826 [INFO][5451] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:44:20.866842 containerd[1432]: 2024-11-12 17:44:20.826 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" iface="eth0" netns=""
Nov 12 17:44:20.866842 containerd[1432]: 2024-11-12 17:44:20.826 [INFO][5451] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:44:20.866842 containerd[1432]: 2024-11-12 17:44:20.826 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:44:20.866842 containerd[1432]: 2024-11-12 17:44:20.850 [INFO][5458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" HandleID="k8s-pod-network.3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" Workload="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:44:20.866842 containerd[1432]: 2024-11-12 17:44:20.850 [INFO][5458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:20.866842 containerd[1432]: 2024-11-12 17:44:20.850 [INFO][5458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:20.866842 containerd[1432]: 2024-11-12 17:44:20.860 [WARNING][5458] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" HandleID="k8s-pod-network.3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" Workload="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:44:20.866842 containerd[1432]: 2024-11-12 17:44:20.860 [INFO][5458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" HandleID="k8s-pod-network.3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" Workload="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:44:20.866842 containerd[1432]: 2024-11-12 17:44:20.862 [INFO][5458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:20.866842 containerd[1432]: 2024-11-12 17:44:20.864 [INFO][5451] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:44:20.867279 containerd[1432]: time="2024-11-12T17:44:20.866871500Z" level=info msg="TearDown network for sandbox \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\" successfully"
Nov 12 17:44:20.867279 containerd[1432]: time="2024-11-12T17:44:20.866897544Z" level=info msg="StopPodSandbox for \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\" returns successfully"
Nov 12 17:44:20.868609 containerd[1432]: time="2024-11-12T17:44:20.868581549Z" level=info msg="RemovePodSandbox for \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\""
Nov 12 17:44:20.868671 containerd[1432]: time="2024-11-12T17:44:20.868616034Z" level=info msg="Forcibly stopping sandbox \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\""
Nov 12 17:44:20.948609 containerd[1432]: 2024-11-12 17:44:20.912 [WARNING][5480] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0", GenerateName:"calico-apiserver-6548764f9d-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e30889a-e694-4689-92d5-cf89f334a65b", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6548764f9d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7b08de32005cd88bdeed7142f42f2c8d7c91b6c3fc05bfb83742b7d2050ac70", Pod:"calico-apiserver-6548764f9d-j6twf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe31829fa78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:20.948609 containerd[1432]: 2024-11-12 17:44:20.912 [INFO][5480] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:44:20.948609 containerd[1432]: 2024-11-12 17:44:20.912 [INFO][5480] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" iface="eth0" netns=""
Nov 12 17:44:20.948609 containerd[1432]: 2024-11-12 17:44:20.912 [INFO][5480] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:44:20.948609 containerd[1432]: 2024-11-12 17:44:20.912 [INFO][5480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:44:20.948609 containerd[1432]: 2024-11-12 17:44:20.932 [INFO][5488] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" HandleID="k8s-pod-network.3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" Workload="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:44:20.948609 containerd[1432]: 2024-11-12 17:44:20.933 [INFO][5488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:20.948609 containerd[1432]: 2024-11-12 17:44:20.933 [INFO][5488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:20.948609 containerd[1432]: 2024-11-12 17:44:20.942 [WARNING][5488] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" HandleID="k8s-pod-network.3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" Workload="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:44:20.948609 containerd[1432]: 2024-11-12 17:44:20.942 [INFO][5488] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" HandleID="k8s-pod-network.3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de" Workload="localhost-k8s-calico--apiserver--6548764f9d--j6twf-eth0"
Nov 12 17:44:20.948609 containerd[1432]: 2024-11-12 17:44:20.945 [INFO][5488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:20.948609 containerd[1432]: 2024-11-12 17:44:20.946 [INFO][5480] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de"
Nov 12 17:44:20.949119 containerd[1432]: time="2024-11-12T17:44:20.948638351Z" level=info msg="TearDown network for sandbox \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\" successfully"
Nov 12 17:44:20.962481 containerd[1432]: time="2024-11-12T17:44:20.962410834Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 17:44:20.962584 containerd[1432]: time="2024-11-12T17:44:20.962512769Z" level=info msg="RemovePodSandbox \"3f488b82016ae3c7304b7568e43bc58520a3bf9babf223ff5b03203da4c0e2de\" returns successfully"
Nov 12 17:44:20.963015 containerd[1432]: time="2024-11-12T17:44:20.962994399Z" level=info msg="StopPodSandbox for \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\""
Nov 12 17:44:21.049808 containerd[1432]: 2024-11-12 17:44:21.005 [WARNING][5511] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"53e0c452-1122-4c00-814a-21a5b2fcb5be", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da", Pod:"coredns-6f6b679f8f-9q5zc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb61417da7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:21.049808 containerd[1432]: 2024-11-12 17:44:21.005 [INFO][5511] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:44:21.049808 containerd[1432]: 2024-11-12 17:44:21.005 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" iface="eth0" netns=""
Nov 12 17:44:21.049808 containerd[1432]: 2024-11-12 17:44:21.005 [INFO][5511] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:44:21.049808 containerd[1432]: 2024-11-12 17:44:21.005 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:44:21.049808 containerd[1432]: 2024-11-12 17:44:21.031 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" HandleID="k8s-pod-network.6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" Workload="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:44:21.049808 containerd[1432]: 2024-11-12 17:44:21.031 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:21.049808 containerd[1432]: 2024-11-12 17:44:21.031 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:21.049808 containerd[1432]: 2024-11-12 17:44:21.042 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" HandleID="k8s-pod-network.6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" Workload="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:44:21.049808 containerd[1432]: 2024-11-12 17:44:21.042 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" HandleID="k8s-pod-network.6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" Workload="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:44:21.049808 containerd[1432]: 2024-11-12 17:44:21.043 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:21.049808 containerd[1432]: 2024-11-12 17:44:21.046 [INFO][5511] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:44:21.049808 containerd[1432]: time="2024-11-12T17:44:21.048883225Z" level=info msg="TearDown network for sandbox \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\" successfully"
Nov 12 17:44:21.049808 containerd[1432]: time="2024-11-12T17:44:21.048909469Z" level=info msg="StopPodSandbox for \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\" returns successfully"
Nov 12 17:44:21.049808 containerd[1432]: time="2024-11-12T17:44:21.049405900Z" level=info msg="RemovePodSandbox for \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\""
Nov 12 17:44:21.049808 containerd[1432]: time="2024-11-12T17:44:21.049437545Z" level=info msg="Forcibly stopping sandbox \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\""
Nov 12 17:44:21.139566 containerd[1432]: 2024-11-12 17:44:21.087 [WARNING][5540] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"53e0c452-1122-4c00-814a-21a5b2fcb5be", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b3dd5f6d0034e377f3a0b56d8a7c865fb64c85cd2957d8ef2f0cd58511eac2da", Pod:"coredns-6f6b679f8f-9q5zc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibb61417da7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:21.139566 containerd[1432]: 2024-11-12 17:44:21.087 [INFO][5540] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:44:21.139566 containerd[1432]: 2024-11-12 17:44:21.088 [INFO][5540] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" iface="eth0" netns=""
Nov 12 17:44:21.139566 containerd[1432]: 2024-11-12 17:44:21.088 [INFO][5540] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:44:21.139566 containerd[1432]: 2024-11-12 17:44:21.088 [INFO][5540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:44:21.139566 containerd[1432]: 2024-11-12 17:44:21.120 [INFO][5547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" HandleID="k8s-pod-network.6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" Workload="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:44:21.139566 containerd[1432]: 2024-11-12 17:44:21.121 [INFO][5547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:21.139566 containerd[1432]: 2024-11-12 17:44:21.121 [INFO][5547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:21.139566 containerd[1432]: 2024-11-12 17:44:21.129 [WARNING][5547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" HandleID="k8s-pod-network.6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" Workload="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:44:21.139566 containerd[1432]: 2024-11-12 17:44:21.130 [INFO][5547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" HandleID="k8s-pod-network.6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0" Workload="localhost-k8s-coredns--6f6b679f8f--9q5zc-eth0"
Nov 12 17:44:21.139566 containerd[1432]: 2024-11-12 17:44:21.133 [INFO][5547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:21.139566 containerd[1432]: 2024-11-12 17:44:21.135 [INFO][5540] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0"
Nov 12 17:44:21.139566 containerd[1432]: time="2024-11-12T17:44:21.138795057Z" level=info msg="TearDown network for sandbox \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\" successfully"
Nov 12 17:44:21.142565 containerd[1432]: time="2024-11-12T17:44:21.142534476Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 17:44:21.142751 containerd[1432]: time="2024-11-12T17:44:21.142690578Z" level=info msg="RemovePodSandbox \"6f495629f756372ec6d38c8a6619a1d170ed7e02e8c0bc3bb1630df3fb417ad0\" returns successfully"
Nov 12 17:44:21.143196 containerd[1432]: time="2024-11-12T17:44:21.143169927Z" level=info msg="StopPodSandbox for \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\""
Nov 12 17:44:21.228012 containerd[1432]: 2024-11-12 17:44:21.190 [WARNING][5568] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--68v88-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"98f3ae8e-274c-478a-a46c-2a1f05e70b20", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3", Pod:"coredns-6f6b679f8f-68v88", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39adbfce4fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:21.228012 containerd[1432]: 2024-11-12 17:44:21.190 [INFO][5568] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:44:21.228012 containerd[1432]: 2024-11-12 17:44:21.190 [INFO][5568] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" iface="eth0" netns=""
Nov 12 17:44:21.228012 containerd[1432]: 2024-11-12 17:44:21.190 [INFO][5568] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:44:21.228012 containerd[1432]: 2024-11-12 17:44:21.190 [INFO][5568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:44:21.228012 containerd[1432]: 2024-11-12 17:44:21.213 [INFO][5576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" HandleID="k8s-pod-network.1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" Workload="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:21.228012 containerd[1432]: 2024-11-12 17:44:21.213 [INFO][5576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:21.228012 containerd[1432]: 2024-11-12 17:44:21.214 [INFO][5576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:21.228012 containerd[1432]: 2024-11-12 17:44:21.222 [WARNING][5576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" HandleID="k8s-pod-network.1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" Workload="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:21.228012 containerd[1432]: 2024-11-12 17:44:21.222 [INFO][5576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" HandleID="k8s-pod-network.1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" Workload="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:21.228012 containerd[1432]: 2024-11-12 17:44:21.224 [INFO][5576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:21.228012 containerd[1432]: 2024-11-12 17:44:21.226 [INFO][5568] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:44:21.228787 containerd[1432]: time="2024-11-12T17:44:21.228055476Z" level=info msg="TearDown network for sandbox \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\" successfully"
Nov 12 17:44:21.228787 containerd[1432]: time="2024-11-12T17:44:21.228083320Z" level=info msg="StopPodSandbox for \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\" returns successfully"
Nov 12 17:44:21.229587 containerd[1432]: time="2024-11-12T17:44:21.229178477Z" level=info msg="RemovePodSandbox for \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\""
Nov 12 17:44:21.229587 containerd[1432]: time="2024-11-12T17:44:21.229229685Z" level=info msg="Forcibly stopping sandbox \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\""
Nov 12 17:44:21.337894 containerd[1432]: 2024-11-12 17:44:21.281 [WARNING][5599] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--68v88-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"98f3ae8e-274c-478a-a46c-2a1f05e70b20", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e14978de4f073f3bd401424786ca620f36986541e1f6238442bde73d6405bb3", Pod:"coredns-6f6b679f8f-68v88", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali39adbfce4fd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:21.337894 containerd[1432]: 2024-11-12 17:44:21.281 [INFO][5599] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:44:21.337894 containerd[1432]: 2024-11-12 17:44:21.281 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" iface="eth0" netns=""
Nov 12 17:44:21.337894 containerd[1432]: 2024-11-12 17:44:21.281 [INFO][5599] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:44:21.337894 containerd[1432]: 2024-11-12 17:44:21.281 [INFO][5599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:44:21.337894 containerd[1432]: 2024-11-12 17:44:21.310 [INFO][5607] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" HandleID="k8s-pod-network.1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" Workload="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:21.337894 containerd[1432]: 2024-11-12 17:44:21.310 [INFO][5607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:21.337894 containerd[1432]: 2024-11-12 17:44:21.310 [INFO][5607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:21.337894 containerd[1432]: 2024-11-12 17:44:21.325 [WARNING][5607] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" HandleID="k8s-pod-network.1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" Workload="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:21.337894 containerd[1432]: 2024-11-12 17:44:21.325 [INFO][5607] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" HandleID="k8s-pod-network.1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89" Workload="localhost-k8s-coredns--6f6b679f8f--68v88-eth0"
Nov 12 17:44:21.337894 containerd[1432]: 2024-11-12 17:44:21.330 [INFO][5607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:21.337894 containerd[1432]: 2024-11-12 17:44:21.334 [INFO][5599] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89"
Nov 12 17:44:21.341783 containerd[1432]: time="2024-11-12T17:44:21.338389570Z" level=info msg="TearDown network for sandbox \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\" successfully"
Nov 12 17:44:21.344144 containerd[1432]: time="2024-11-12T17:44:21.344100072Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 17:44:21.344335 containerd[1432]: time="2024-11-12T17:44:21.344317584Z" level=info msg="RemovePodSandbox \"1575ca90829120569de3787de12b8af8c925b9e8d5635a710f67732f9452fc89\" returns successfully"
Nov 12 17:44:21.344883 containerd[1432]: time="2024-11-12T17:44:21.344855821Z" level=info msg="StopPodSandbox for \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\""
Nov 12 17:44:21.461427 containerd[1432]: 2024-11-12 17:44:21.411 [WARNING][5629] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g2dwk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b411254a-fa39-4c2a-ae0e-e271a38a0ca1", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815", Pod:"csi-node-driver-g2dwk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d691fbac35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:21.461427 containerd[1432]: 2024-11-12 17:44:21.411 [INFO][5629] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:44:21.461427 containerd[1432]: 2024-11-12 17:44:21.411 [INFO][5629] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" iface="eth0" netns=""
Nov 12 17:44:21.461427 containerd[1432]: 2024-11-12 17:44:21.411 [INFO][5629] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:44:21.461427 containerd[1432]: 2024-11-12 17:44:21.411 [INFO][5629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:44:21.461427 containerd[1432]: 2024-11-12 17:44:21.442 [INFO][5636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" HandleID="k8s-pod-network.acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" Workload="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:44:21.461427 containerd[1432]: 2024-11-12 17:44:21.442 [INFO][5636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:21.461427 containerd[1432]: 2024-11-12 17:44:21.443 [INFO][5636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:21.461427 containerd[1432]: 2024-11-12 17:44:21.453 [WARNING][5636] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" HandleID="k8s-pod-network.acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" Workload="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:44:21.461427 containerd[1432]: 2024-11-12 17:44:21.453 [INFO][5636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" HandleID="k8s-pod-network.acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" Workload="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:44:21.461427 containerd[1432]: 2024-11-12 17:44:21.455 [INFO][5636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:21.461427 containerd[1432]: 2024-11-12 17:44:21.457 [INFO][5629] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:44:21.462449 containerd[1432]: time="2024-11-12T17:44:21.462184963Z" level=info msg="TearDown network for sandbox \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\" successfully"
Nov 12 17:44:21.462449 containerd[1432]: time="2024-11-12T17:44:21.462237251Z" level=info msg="StopPodSandbox for \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\" returns successfully"
Nov 12 17:44:21.463274 containerd[1432]: time="2024-11-12T17:44:21.462972517Z" level=info msg="RemovePodSandbox for \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\""
Nov 12 17:44:21.463274 containerd[1432]: time="2024-11-12T17:44:21.463006842Z" level=info msg="Forcibly stopping sandbox \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\""
Nov 12 17:44:21.546899 containerd[1432]: 2024-11-12 17:44:21.505 [WARNING][5659] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g2dwk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b411254a-fa39-4c2a-ae0e-e271a38a0ca1", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 43, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0bc38654939dcecf7120db7bebe47ff2cf075f35e562d581c927fcd5cd9c3815", Pod:"csi-node-driver-g2dwk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2d691fbac35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:44:21.546899 containerd[1432]: 2024-11-12 17:44:21.505 [INFO][5659] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:44:21.546899 containerd[1432]: 2024-11-12 17:44:21.505 [INFO][5659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" iface="eth0" netns=""
Nov 12 17:44:21.546899 containerd[1432]: 2024-11-12 17:44:21.505 [INFO][5659] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:44:21.546899 containerd[1432]: 2024-11-12 17:44:21.505 [INFO][5659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:44:21.546899 containerd[1432]: 2024-11-12 17:44:21.524 [INFO][5667] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" HandleID="k8s-pod-network.acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" Workload="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:44:21.546899 containerd[1432]: 2024-11-12 17:44:21.524 [INFO][5667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:44:21.546899 containerd[1432]: 2024-11-12 17:44:21.525 [INFO][5667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:44:21.546899 containerd[1432]: 2024-11-12 17:44:21.541 [WARNING][5667] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" HandleID="k8s-pod-network.acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" Workload="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:44:21.546899 containerd[1432]: 2024-11-12 17:44:21.541 [INFO][5667] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" HandleID="k8s-pod-network.acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7" Workload="localhost-k8s-csi--node--driver--g2dwk-eth0"
Nov 12 17:44:21.546899 containerd[1432]: 2024-11-12 17:44:21.543 [INFO][5667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:44:21.546899 containerd[1432]: 2024-11-12 17:44:21.545 [INFO][5659] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7"
Nov 12 17:44:21.547307 containerd[1432]: time="2024-11-12T17:44:21.546937412Z" level=info msg="TearDown network for sandbox \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\" successfully"
Nov 12 17:44:21.549778 containerd[1432]: time="2024-11-12T17:44:21.549695490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 17:44:21.549968 containerd[1432]: time="2024-11-12T17:44:21.549814507Z" level=info msg="RemovePodSandbox \"acd770bc198fdc08465682e8ac7a04a357582fc0addca72439556adca6b192c7\" returns successfully"
Nov 12 17:44:21.929926 systemd[1]: Started sshd@18-10.0.0.44:22-10.0.0.1:59646.service - OpenSSH per-connection server daemon (10.0.0.1:59646).
Nov 12 17:44:21.970388 sshd[5676]: Accepted publickey for core from 10.0.0.1 port 59646 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:44:21.971828 sshd[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:21.977644 systemd-logind[1415]: New session 19 of user core.
Nov 12 17:44:21.983372 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 17:44:22.120400 sshd[5676]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:22.124771 systemd[1]: sshd@18-10.0.0.44:22-10.0.0.1:59646.service: Deactivated successfully.
Nov 12 17:44:22.127023 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 17:44:22.128812 systemd-logind[1415]: Session 19 logged out. Waiting for processes to exit.
Nov 12 17:44:22.129999 systemd-logind[1415]: Removed session 19.
Nov 12 17:44:27.136071 systemd[1]: Started sshd@19-10.0.0.44:22-10.0.0.1:39084.service - OpenSSH per-connection server daemon (10.0.0.1:39084).
Nov 12 17:44:27.171927 sshd[5690]: Accepted publickey for core from 10.0.0.1 port 39084 ssh2: RSA SHA256:0/Njp3Vk1MHv0WcCO9/UA+beq4MlL3BRl9mBP4xwGAg
Nov 12 17:44:27.173461 sshd[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:27.179438 systemd-logind[1415]: New session 20 of user core.
Nov 12 17:44:27.183895 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 17:44:27.315815 sshd[5690]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:27.320753 systemd[1]: sshd@19-10.0.0.44:22-10.0.0.1:39084.service: Deactivated successfully.
Nov 12 17:44:27.323242 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 17:44:27.325485 systemd-logind[1415]: Session 20 logged out. Waiting for processes to exit.
Nov 12 17:44:27.326516 systemd-logind[1415]: Removed session 20.