Jan 29 10:51:28.311759 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 10:51:28.311780 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 29 10:51:28.311788 kernel: KASLR enabled
Jan 29 10:51:28.311794 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 29 10:51:28.311801 kernel: printk: bootconsole [pl11] enabled
Jan 29 10:51:28.311806 kernel: efi: EFI v2.7 by EDK II
Jan 29 10:51:28.311813 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598
Jan 29 10:51:28.311819 kernel: random: crng init done
Jan 29 10:51:28.311825 kernel: secureboot: Secure boot disabled
Jan 29 10:51:28.311830 kernel: ACPI: Early table checksum verification disabled
Jan 29 10:51:28.311836 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 29 10:51:28.311842 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 10:51:28.311847 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 10:51:28.311855 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 29 10:51:28.311862 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 10:51:28.311868 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 10:51:28.311874 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 10:51:28.311881 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 10:51:28.311888 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 10:51:28.311893 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 10:51:28.311899 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 29 10:51:28.311905 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 29 10:51:28.311911 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 29 10:51:28.311917 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 29 10:51:28.311923 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 29 10:51:28.311929 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 29 10:51:28.311935 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 29 10:51:28.311941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 29 10:51:28.311949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 29 10:51:28.311955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 29 10:51:28.311961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 29 10:51:28.311967 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 29 10:51:28.311973 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 29 10:51:28.311979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 29 10:51:28.311985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 29 10:51:28.311991 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jan 29 10:51:28.311997 kernel: Zone ranges:
Jan 29 10:51:28.312003 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 29 10:51:28.312009 kernel: DMA32 empty
Jan 29 10:51:28.312015 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 29 10:51:28.312025 kernel: Movable zone start for each node
Jan 29 10:51:28.312032 kernel: Early memory node ranges
Jan 29 10:51:28.312049 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 29 10:51:28.312056 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff]
Jan 29 10:51:28.312063 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff]
Jan 29 10:51:28.312071 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff]
Jan 29 10:51:28.312078 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 29 10:51:28.312084 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 29 10:51:28.312090 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 29 10:51:28.312097 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 29 10:51:28.312103 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 29 10:51:28.312109 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 29 10:51:28.312116 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 29 10:51:28.312122 kernel: psci: probing for conduit method from ACPI.
Jan 29 10:51:28.312128 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 10:51:28.312135 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 10:51:28.312141 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 29 10:51:28.312149 kernel: psci: SMC Calling Convention v1.4
Jan 29 10:51:28.312155 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 29 10:51:28.312161 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 29 10:51:28.312168 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 10:51:28.312174 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 10:51:28.312181 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 10:51:28.312187 kernel: Detected PIPT I-cache on CPU0
Jan 29 10:51:28.312193 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 10:51:28.312200 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 10:51:28.312206 kernel: CPU features: detected: Spectre-BHB
Jan 29 10:51:28.312212 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 10:51:28.312220 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 10:51:28.312227 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 10:51:28.312233 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 29 10:51:28.312239 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 10:51:28.312246 kernel: alternatives: applying boot alternatives
Jan 29 10:51:28.312253 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 10:51:28.312260 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 10:51:28.312267 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 10:51:28.312273 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 10:51:28.312279 kernel: Fallback order for Node 0: 0
Jan 29 10:51:28.312286 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 29 10:51:28.312294 kernel: Policy zone: Normal
Jan 29 10:51:28.312300 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 10:51:28.312306 kernel: software IO TLB: area num 2.
Jan 29 10:51:28.312313 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB)
Jan 29 10:51:28.312319 kernel: Memory: 3982052K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 212108K reserved, 0K cma-reserved)
Jan 29 10:51:28.312326 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 10:51:28.312332 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 10:51:28.312339 kernel: rcu: RCU event tracing is enabled.
Jan 29 10:51:28.312346 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 10:51:28.312352 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 10:51:28.312359 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 10:51:28.312367 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 10:51:28.312373 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 10:51:28.312380 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 10:51:28.312386 kernel: GICv3: 960 SPIs implemented
Jan 29 10:51:28.312392 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 10:51:28.312398 kernel: Root IRQ handler: gic_handle_irq
Jan 29 10:51:28.312405 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 10:51:28.312411 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 29 10:51:28.312417 kernel: ITS: No ITS available, not enabling LPIs
Jan 29 10:51:28.312424 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 10:51:28.312430 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 10:51:28.312436 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 10:51:28.312445 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 10:51:28.312451 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 10:51:28.312458 kernel: Console: colour dummy device 80x25
Jan 29 10:51:28.312465 kernel: printk: console [tty1] enabled
Jan 29 10:51:28.312471 kernel: ACPI: Core revision 20230628
Jan 29 10:51:28.312478 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 10:51:28.312484 kernel: pid_max: default: 32768 minimum: 301
Jan 29 10:51:28.312491 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 10:51:28.312497 kernel: landlock: Up and running.
Jan 29 10:51:28.312505 kernel: SELinux: Initializing.
Jan 29 10:51:28.312512 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 10:51:28.312518 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 10:51:28.312525 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 10:51:28.312532 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 10:51:28.312538 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 29 10:51:28.312545 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 29 10:51:28.312558 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 29 10:51:28.312565 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 10:51:28.312571 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 10:51:28.312578 kernel: Remapping and enabling EFI services.
Jan 29 10:51:28.312585 kernel: smp: Bringing up secondary CPUs ...
Jan 29 10:51:28.312593 kernel: Detected PIPT I-cache on CPU1
Jan 29 10:51:28.312600 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 29 10:51:28.312607 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 10:51:28.312614 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 10:51:28.312621 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 10:51:28.312629 kernel: SMP: Total of 2 processors activated.
Jan 29 10:51:28.312636 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 10:51:28.312643 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 29 10:51:28.312650 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 10:51:28.312657 kernel: CPU features: detected: CRC32 instructions
Jan 29 10:51:28.312664 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 10:51:28.312671 kernel: CPU features: detected: LSE atomic instructions
Jan 29 10:51:28.312678 kernel: CPU features: detected: Privileged Access Never
Jan 29 10:51:28.312685 kernel: CPU: All CPU(s) started at EL1
Jan 29 10:51:28.312693 kernel: alternatives: applying system-wide alternatives
Jan 29 10:51:28.312700 kernel: devtmpfs: initialized
Jan 29 10:51:28.312707 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 10:51:28.312714 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 10:51:28.312721 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 10:51:28.312728 kernel: SMBIOS 3.1.0 present.
Jan 29 10:51:28.312735 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 29 10:51:28.312742 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 10:51:28.312749 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 10:51:28.312757 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 10:51:28.312764 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 10:51:28.312771 kernel: audit: initializing netlink subsys (disabled)
Jan 29 10:51:28.312778 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1
Jan 29 10:51:28.312785 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 10:51:28.312792 kernel: cpuidle: using governor menu
Jan 29 10:51:28.312799 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 10:51:28.312806 kernel: ASID allocator initialised with 32768 entries
Jan 29 10:51:28.312813 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 10:51:28.312821 kernel: Serial: AMBA PL011 UART driver
Jan 29 10:51:28.312828 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 10:51:28.312834 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 10:51:28.312841 kernel: Modules: 508880 pages in range for PLT usage
Jan 29 10:51:28.312848 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 10:51:28.312855 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 10:51:28.312862 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 10:51:28.312869 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 10:51:28.312876 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 10:51:28.312884 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 10:51:28.312891 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 10:51:28.312898 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 10:51:28.312905 kernel: ACPI: Added _OSI(Module Device)
Jan 29 10:51:28.312912 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 10:51:28.312919 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 10:51:28.312926 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 10:51:28.312932 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 10:51:28.312939 kernel: ACPI: Interpreter enabled
Jan 29 10:51:28.312948 kernel: ACPI: Using GIC for interrupt routing
Jan 29 10:51:28.312955 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 10:51:28.312962 kernel: printk: console [ttyAMA0] enabled
Jan 29 10:51:28.312968 kernel: printk: bootconsole [pl11] disabled
Jan 29 10:51:28.312976 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 29 10:51:28.312982 kernel: iommu: Default domain type: Translated
Jan 29 10:51:28.312989 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 10:51:28.312996 kernel: efivars: Registered efivars operations
Jan 29 10:51:28.313003 kernel: vgaarb: loaded
Jan 29 10:51:28.313011 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 10:51:28.313018 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 10:51:28.313025 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 10:51:28.313032 kernel: pnp: PnP ACPI init
Jan 29 10:51:28.313047 kernel: pnp: PnP ACPI: found 0 devices
Jan 29 10:51:28.313054 kernel: NET: Registered PF_INET protocol family
Jan 29 10:51:28.313061 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 10:51:28.313068 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 10:51:28.313075 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 10:51:28.313085 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 10:51:28.313092 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 10:51:28.313099 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 10:51:28.313106 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 10:51:28.313113 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 10:51:28.313120 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 10:51:28.313127 kernel: PCI: CLS 0 bytes, default 64
Jan 29 10:51:28.313134 kernel: kvm [1]: HYP mode not available
Jan 29 10:51:28.313141 kernel: Initialise system trusted keyrings
Jan 29 10:51:28.313151 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 10:51:28.313159 kernel: Key type asymmetric registered
Jan 29 10:51:28.313166 kernel: Asymmetric key parser 'x509' registered
Jan 29 10:51:28.313172 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 10:51:28.313180 kernel: io scheduler mq-deadline registered
Jan 29 10:51:28.313187 kernel: io scheduler kyber registered
Jan 29 10:51:28.313193 kernel: io scheduler bfq registered
Jan 29 10:51:28.313200 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 10:51:28.313207 kernel: thunder_xcv, ver 1.0
Jan 29 10:51:28.313216 kernel: thunder_bgx, ver 1.0
Jan 29 10:51:28.313223 kernel: nicpf, ver 1.0
Jan 29 10:51:28.313230 kernel: nicvf, ver 1.0
Jan 29 10:51:28.313367 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 10:51:28.313440 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T10:51:27 UTC (1738147887)
Jan 29 10:51:28.313450 kernel: efifb: probing for efifb
Jan 29 10:51:28.313458 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 29 10:51:28.313465 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 29 10:51:28.313474 kernel: efifb: scrolling: redraw
Jan 29 10:51:28.313481 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 29 10:51:28.313488 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 10:51:28.313495 kernel: fb0: EFI VGA frame buffer device
Jan 29 10:51:28.313502 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 29 10:51:28.313509 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 10:51:28.313516 kernel: No ACPI PMU IRQ for CPU0
Jan 29 10:51:28.313522 kernel: No ACPI PMU IRQ for CPU1
Jan 29 10:51:28.313529 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 29 10:51:28.313538 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 10:51:28.313545 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 10:51:28.313552 kernel: NET: Registered PF_INET6 protocol family
Jan 29 10:51:28.313559 kernel: Segment Routing with IPv6
Jan 29 10:51:28.313566 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 10:51:28.313573 kernel: NET: Registered PF_PACKET protocol family
Jan 29 10:51:28.313580 kernel: Key type dns_resolver registered
Jan 29 10:51:28.313586 kernel: registered taskstats version 1
Jan 29 10:51:28.313593 kernel: Loading compiled-in X.509 certificates
Jan 29 10:51:28.313602 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 29 10:51:28.313609 kernel: Key type .fscrypt registered
Jan 29 10:51:28.313615 kernel: Key type fscrypt-provisioning registered
Jan 29 10:51:28.313622 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 10:51:28.313629 kernel: ima: Allocated hash algorithm: sha1
Jan 29 10:51:28.313636 kernel: ima: No architecture policies found
Jan 29 10:51:28.313643 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 10:51:28.313650 kernel: clk: Disabling unused clocks
Jan 29 10:51:28.313657 kernel: Freeing unused kernel memory: 39936K
Jan 29 10:51:28.313666 kernel: Run /init as init process
Jan 29 10:51:28.313673 kernel: with arguments:
Jan 29 10:51:28.313680 kernel: /init
Jan 29 10:51:28.313686 kernel: with environment:
Jan 29 10:51:28.313693 kernel: HOME=/
Jan 29 10:51:28.313700 kernel: TERM=linux
Jan 29 10:51:28.313707 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 10:51:28.313716 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 10:51:28.313727 systemd[1]: Detected virtualization microsoft.
Jan 29 10:51:28.313734 systemd[1]: Detected architecture arm64.
Jan 29 10:51:28.313742 systemd[1]: Running in initrd.
Jan 29 10:51:28.313749 systemd[1]: No hostname configured, using default hostname.
Jan 29 10:51:28.313756 systemd[1]: Hostname set to .
Jan 29 10:51:28.313764 systemd[1]: Initializing machine ID from random generator.
Jan 29 10:51:28.313771 systemd[1]: Queued start job for default target initrd.target.
Jan 29 10:51:28.313779 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 10:51:28.313788 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 10:51:28.313796 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 10:51:28.313804 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 10:51:28.313812 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 10:51:28.313819 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 10:51:28.313829 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 10:51:28.313838 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 10:51:28.313846 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 10:51:28.313853 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 10:51:28.313861 systemd[1]: Reached target paths.target - Path Units.
Jan 29 10:51:28.313869 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 10:51:28.313877 systemd[1]: Reached target swap.target - Swaps.
Jan 29 10:51:28.313884 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 10:51:28.313892 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 10:51:28.313899 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 10:51:28.313908 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 10:51:28.313916 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 10:51:28.313923 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 10:51:28.313931 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 10:51:28.313939 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 10:51:28.313946 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 10:51:28.313961 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 10:51:28.313969 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 10:51:28.313976 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 10:51:28.313985 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 10:51:28.313993 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 10:51:28.314000 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 10:51:28.314023 systemd-journald[218]: Collecting audit messages is disabled.
Jan 29 10:51:28.314054 systemd-journald[218]: Journal started
Jan 29 10:51:28.314076 systemd-journald[218]: Runtime Journal (/run/log/journal/0ff03d0a4c894b94b9507cd8c1a65b02) is 8.0M, max 78.5M, 70.5M free.
Jan 29 10:51:28.336516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:51:28.337211 systemd-modules-load[219]: Inserted module 'overlay'
Jan 29 10:51:28.349056 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 10:51:28.355636 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 10:51:28.367508 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 10:51:28.396128 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 10:51:28.396150 kernel: Bridge firewalling registered
Jan 29 10:51:28.391571 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 10:51:28.394531 systemd-modules-load[219]: Inserted module 'br_netfilter'
Jan 29 10:51:28.399872 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 10:51:28.410321 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:51:28.435301 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 10:51:28.450128 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 10:51:28.464107 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 10:51:28.490215 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 10:51:28.501057 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:51:28.514553 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:51:28.520724 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 10:51:28.543211 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 10:51:28.567540 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 10:51:28.583221 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 10:51:28.599413 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 10:51:28.620660 dracut-cmdline[252]: dracut-dracut-053
Jan 29 10:51:28.620501 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 10:51:28.642790 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 10:51:28.681617 systemd-resolved[257]: Positive Trust Anchors:
Jan 29 10:51:28.681639 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 10:51:28.681670 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 10:51:28.683752 systemd-resolved[257]: Defaulting to hostname 'linux'.
Jan 29 10:51:28.686092 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 10:51:28.694980 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 10:51:28.809055 kernel: SCSI subsystem initialized
Jan 29 10:51:28.816049 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 10:51:28.827049 kernel: iscsi: registered transport (tcp)
Jan 29 10:51:28.844622 kernel: iscsi: registered transport (qla4xxx)
Jan 29 10:51:28.844674 kernel: QLogic iSCSI HBA Driver
Jan 29 10:51:28.877352 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 10:51:28.894272 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 10:51:28.926999 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 10:51:28.927053 kernel: device-mapper: uevent: version 1.0.3
Jan 29 10:51:28.933410 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 10:51:28.981061 kernel: raid6: neonx8 gen() 15769 MB/s
Jan 29 10:51:29.001048 kernel: raid6: neonx4 gen() 15788 MB/s
Jan 29 10:51:29.021049 kernel: raid6: neonx2 gen() 13207 MB/s
Jan 29 10:51:29.042046 kernel: raid6: neonx1 gen() 10526 MB/s
Jan 29 10:51:29.062044 kernel: raid6: int64x8 gen() 6795 MB/s
Jan 29 10:51:29.082043 kernel: raid6: int64x4 gen() 7353 MB/s
Jan 29 10:51:29.103046 kernel: raid6: int64x2 gen() 6112 MB/s
Jan 29 10:51:29.128147 kernel: raid6: int64x1 gen() 5059 MB/s
Jan 29 10:51:29.128160 kernel: raid6: using algorithm neonx4 gen() 15788 MB/s
Jan 29 10:51:29.152141 kernel: raid6: .... xor() 12437 MB/s, rmw enabled
Jan 29 10:51:29.152153 kernel: raid6: using neon recovery algorithm
Jan 29 10:51:29.164579 kernel: xor: measuring software checksum speed
Jan 29 10:51:29.164595 kernel: 8regs : 21653 MB/sec
Jan 29 10:51:29.168888 kernel: 32regs : 21681 MB/sec
Jan 29 10:51:29.172437 kernel: arm64_neon : 28041 MB/sec
Jan 29 10:51:29.176675 kernel: xor: using function: arm64_neon (28041 MB/sec)
Jan 29 10:51:29.226059 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 10:51:29.235292 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 10:51:29.253163 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 10:51:29.275690 systemd-udevd[440]: Using default interface naming scheme 'v255'.
Jan 29 10:51:29.281572 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 10:51:29.300235 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 10:51:29.312673 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation
Jan 29 10:51:29.334668 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 10:51:29.350166 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 10:51:29.383250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 10:51:29.408312 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 10:51:29.430295 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 10:51:29.440505 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 10:51:29.460850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 10:51:29.471313 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 10:51:29.507437 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 10:51:29.558930 kernel: hv_vmbus: Vmbus version:5.3
Jan 29 10:51:29.558953 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 29 10:51:29.558963 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 29 10:51:29.558972 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 29 10:51:29.558982 kernel: hv_vmbus: registering driver hid_hyperv
Jan 29 10:51:29.558990 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 29 10:51:29.534938 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 10:51:29.535112 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:51:29.632559 kernel: hv_vmbus: registering driver hv_storvsc
Jan 29 10:51:29.632583 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 29 10:51:29.632592 kernel: hv_vmbus: registering driver hv_netvsc
Jan 29 10:51:29.632602 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 29 10:51:29.632769 kernel: PTP clock support registered
Jan 29 10:51:29.632780 kernel: scsi host0: storvsc_host_t
Jan 29 10:51:29.632892 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 29 10:51:29.632990 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 29 10:51:29.633101 kernel: scsi host1: storvsc_host_t
Jan 29 10:51:29.606318 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 10:51:29.643077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 10:51:29.643389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:51:29.652167 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:51:29.680350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:51:29.709437 kernel: hv_utils: Registering HyperV Utility Driver
Jan 29 10:51:29.709459 kernel: hv_vmbus: registering driver hv_utils
Jan 29 10:51:29.687333 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 10:51:29.738455 kernel: hv_utils: Heartbeat IC version 3.0
Jan 29 10:51:29.738494 kernel: hv_utils: Shutdown IC version 3.2
Jan 29 10:51:29.738530 kernel: hv_utils: TimeSync IC version 4.0
Jan 29 10:51:29.738539 kernel: hv_netvsc 000d3ac4-e1dd-000d-3ac4-e1dd000d3ac4 eth0: VF slot 1 added
Jan 29 10:51:29.709551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 10:51:29.703081 systemd-journald[218]: Time jumped backwards, rotating.
Jan 29 10:51:29.709669 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:51:29.682900 systemd-resolved[257]: Clock change detected. Flushing caches.
Jan 29 10:51:29.688104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:51:29.732789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:51:29.753973 kernel: hv_vmbus: registering driver hv_pci
Jan 29 10:51:29.754001 kernel: hv_pci 2b09882e-433a-4613-87e5-4e9d39899b22: PCI VMBus probing: Using version 0x10004
Jan 29 10:51:29.864926 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 29 10:51:29.865056 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 10:51:29.865065 kernel: hv_pci 2b09882e-433a-4613-87e5-4e9d39899b22: PCI host bridge to bus 433a:00
Jan 29 10:51:29.865146 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 29 10:51:29.865236 kernel: pci_bus 433a:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 29 10:51:29.865328 kernel: pci_bus 433a:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 29 10:51:29.865403 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 29 10:51:29.865491 kernel: pci 433a:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 29 10:51:29.865583 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 29 10:51:29.865665 kernel: pci 433a:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 29 10:51:29.865745 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 29 10:51:29.865825 kernel: pci 433a:00:02.0: enabling Extended Tags
Jan 29 10:51:29.865925 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 29 10:51:29.866007 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 29 10:51:29.866090 kernel: pci 433a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 433a:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 29 10:51:29.866173 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 10:51:29.866181 kernel: pci_bus 433a:00: busn_res: [bus 00-ff] end is updated to 00
Jan 29 10:51:29.866255 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 29 10:51:29.866335 kernel: pci 433a:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 29 10:51:29.762447 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 10:51:29.848079 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:51:29.924495 kernel: mlx5_core 433a:00:02.0: enabling device (0000 -> 0002)
Jan 29 10:51:30.143389 kernel: mlx5_core 433a:00:02.0: firmware version: 16.30.1284
Jan 29 10:51:30.143514 kernel: hv_netvsc 000d3ac4-e1dd-000d-3ac4-e1dd000d3ac4 eth0: VF registering: eth1
Jan 29 10:51:30.143609 kernel: mlx5_core 433a:00:02.0 eth1: joined to eth0
Jan 29 10:51:30.143700 kernel: mlx5_core 433a:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 29 10:51:30.151895 kernel: mlx5_core 433a:00:02.0 enP17210s1: renamed from eth1
Jan 29 10:51:30.464826 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 29 10:51:30.539898 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (486)
Jan 29 10:51:30.553846 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 29 10:51:30.602808 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 29 10:51:30.644887 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (505)
Jan 29 10:51:30.658173 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 29 10:51:30.665810 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 29 10:51:30.699130 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 10:51:30.724883 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 10:51:30.732878 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 10:51:31.741099 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 10:51:31.741149 disk-uuid[604]: The operation has completed successfully.
Jan 29 10:51:31.798656 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 10:51:31.798768 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 10:51:31.839069 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 10:51:31.852758 sh[690]: Success
Jan 29 10:51:31.885933 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 10:51:32.079808 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 10:51:32.098997 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 10:51:32.105708 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 10:51:32.144943 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 29 10:51:32.144997 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:51:32.152371 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 10:51:32.157867 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 10:51:32.162424 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 10:51:32.497544 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 10:51:32.503202 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 10:51:32.523101 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 10:51:32.534980 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 10:51:32.567080 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:51:32.567146 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:51:32.567164 kernel: BTRFS info (device sda6): using free space tree
Jan 29 10:51:32.596387 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 10:51:32.603634 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 10:51:32.617027 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:51:32.625282 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 10:51:32.639198 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 10:51:32.670904 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 10:51:32.688028 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 10:51:32.717130 systemd-networkd[874]: lo: Link UP
Jan 29 10:51:32.717140 systemd-networkd[874]: lo: Gained carrier
Jan 29 10:51:32.718738 systemd-networkd[874]: Enumeration completed
Jan 29 10:51:32.719347 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:51:32.719350 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 10:51:32.724438 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 10:51:32.730769 systemd[1]: Reached target network.target - Network.
Jan 29 10:51:32.782882 kernel: mlx5_core 433a:00:02.0 enP17210s1: Link up
Jan 29 10:51:32.822878 kernel: hv_netvsc 000d3ac4-e1dd-000d-3ac4-e1dd000d3ac4 eth0: Data path switched to VF: enP17210s1
Jan 29 10:51:32.823450 systemd-networkd[874]: enP17210s1: Link UP
Jan 29 10:51:32.823714 systemd-networkd[874]: eth0: Link UP
Jan 29 10:51:32.824124 systemd-networkd[874]: eth0: Gained carrier
Jan 29 10:51:32.824134 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:51:32.832372 systemd-networkd[874]: enP17210s1: Gained carrier
Jan 29 10:51:32.858916 systemd-networkd[874]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 29 10:51:33.394107 ignition[849]: Ignition 2.20.0
Jan 29 10:51:33.394120 ignition[849]: Stage: fetch-offline
Jan 29 10:51:33.398538 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 10:51:33.394155 ignition[849]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:33.413997 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 10:51:33.394164 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:33.394260 ignition[849]: parsed url from cmdline: ""
Jan 29 10:51:33.394264 ignition[849]: no config URL provided
Jan 29 10:51:33.394269 ignition[849]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 10:51:33.394276 ignition[849]: no config at "/usr/lib/ignition/user.ign"
Jan 29 10:51:33.394281 ignition[849]: failed to fetch config: resource requires networking
Jan 29 10:51:33.394457 ignition[849]: Ignition finished successfully
Jan 29 10:51:33.428155 ignition[883]: Ignition 2.20.0
Jan 29 10:51:33.428162 ignition[883]: Stage: fetch
Jan 29 10:51:33.428343 ignition[883]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:33.428353 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:33.428454 ignition[883]: parsed url from cmdline: ""
Jan 29 10:51:33.428458 ignition[883]: no config URL provided
Jan 29 10:51:33.428462 ignition[883]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 10:51:33.428470 ignition[883]: no config at "/usr/lib/ignition/user.ign"
Jan 29 10:51:33.428500 ignition[883]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 29 10:51:33.516011 ignition[883]: GET result: OK
Jan 29 10:51:33.516077 ignition[883]: config has been read from IMDS userdata
Jan 29 10:51:33.516121 ignition[883]: parsing config with SHA512: e2bf46c1ec955248931f0818a1d6fc5bb09b31dd0cd18d78f2cf39175caa315f0af335d27eaf1594a43c98a3c55c577ca17b8bea4e66144ff0df44edbe2f11c6
Jan 29 10:51:33.521084 unknown[883]: fetched base config from "system"
Jan 29 10:51:33.521686 ignition[883]: fetch: fetch complete
Jan 29 10:51:33.521102 unknown[883]: fetched base config from "system"
Jan 29 10:51:33.521691 ignition[883]: fetch: fetch passed
Jan 29 10:51:33.521117 unknown[883]: fetched user config from "azure"
Jan 29 10:51:33.521743 ignition[883]: Ignition finished successfully
Jan 29 10:51:33.524430 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 10:51:33.566611 ignition[890]: Ignition 2.20.0
Jan 29 10:51:33.548148 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 10:51:33.566618 ignition[890]: Stage: kargs
Jan 29 10:51:33.574089 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 10:51:33.566878 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:33.566905 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:33.570419 ignition[890]: kargs: kargs passed
Jan 29 10:51:33.599077 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 10:51:33.570490 ignition[890]: Ignition finished successfully
Jan 29 10:51:33.623715 ignition[896]: Ignition 2.20.0
Jan 29 10:51:33.623722 ignition[896]: Stage: disks
Jan 29 10:51:33.627126 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 10:51:33.624206 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:33.634841 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 10:51:33.624217 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:33.643970 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 10:51:33.625457 ignition[896]: disks: disks passed
Jan 29 10:51:33.656297 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 10:51:33.625513 ignition[896]: Ignition finished successfully
Jan 29 10:51:33.666940 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 10:51:33.679124 systemd[1]: Reached target basic.target - Basic System.
Jan 29 10:51:33.704136 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 10:51:33.787297 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 29 10:51:33.805647 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 10:51:33.821079 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 10:51:33.877897 kernel: EXT4-fs (sda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 29 10:51:33.878536 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 10:51:33.887520 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 10:51:33.931940 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 10:51:33.942188 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 10:51:33.954110 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 10:51:33.964385 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 10:51:33.964423 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 10:51:33.979347 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 10:51:34.018297 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 10:51:34.045499 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (916)
Jan 29 10:51:34.045522 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:51:34.045532 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:51:34.039463 systemd-networkd[874]: eth0: Gained IPv6LL
Jan 29 10:51:34.055323 kernel: BTRFS info (device sda6): using free space tree
Jan 29 10:51:34.061991 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 10:51:34.063059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 10:51:34.282976 systemd-networkd[874]: enP17210s1: Gained IPv6LL
Jan 29 10:51:34.453880 coreos-metadata[918]: Jan 29 10:51:34.453 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 29 10:51:34.462357 coreos-metadata[918]: Jan 29 10:51:34.462 INFO Fetch successful
Jan 29 10:51:34.462357 coreos-metadata[918]: Jan 29 10:51:34.462 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 29 10:51:34.481021 coreos-metadata[918]: Jan 29 10:51:34.480 INFO Fetch successful
Jan 29 10:51:34.496959 coreos-metadata[918]: Jan 29 10:51:34.496 INFO wrote hostname ci-4186.1.0-a-2e829ed2e0 to /sysroot/etc/hostname
Jan 29 10:51:34.507020 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 10:51:34.755603 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 10:51:34.812428 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory
Jan 29 10:51:34.821196 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 10:51:34.829639 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 10:51:35.781881 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 10:51:35.797039 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 10:51:35.804170 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 10:51:35.828796 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 10:51:35.837941 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:51:35.856300 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 10:51:35.870938 ignition[1037]: INFO : Ignition 2.20.0
Jan 29 10:51:35.870938 ignition[1037]: INFO : Stage: mount
Jan 29 10:51:35.878948 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:35.878948 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:35.878948 ignition[1037]: INFO : mount: mount passed
Jan 29 10:51:35.878948 ignition[1037]: INFO : Ignition finished successfully
Jan 29 10:51:35.879068 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 10:51:35.911068 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 10:51:35.928919 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 10:51:35.953880 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1047)
Jan 29 10:51:35.967008 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:51:35.967048 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:51:35.971282 kernel: BTRFS info (device sda6): using free space tree
Jan 29 10:51:35.977874 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 10:51:35.980095 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 10:51:36.009962 ignition[1065]: INFO : Ignition 2.20.0
Jan 29 10:51:36.015361 ignition[1065]: INFO : Stage: files
Jan 29 10:51:36.015361 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:36.015361 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:36.015361 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 10:51:36.037368 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 10:51:36.037368 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 10:51:36.119255 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 10:51:36.126610 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 10:51:36.126610 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 10:51:36.119668 unknown[1065]: wrote ssh authorized keys file for user: core
Jan 29 10:51:36.149768 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 10:51:36.160302 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 10:51:36.312273 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 10:51:37.291943 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 29 10:51:37.846632 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 10:51:38.039105 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 10:51:38.039105 ignition[1065]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 10:51:38.090825 ignition[1065]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: files passed
Jan 29 10:51:38.103928 ignition[1065]: INFO : Ignition finished successfully
Jan 29 10:51:38.104260 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 10:51:38.142640 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 10:51:38.168056 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 10:51:38.196328 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 10:51:38.196443 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 10:51:38.241547 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 10:51:38.241547 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 10:51:38.270518 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 10:51:38.253055 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 10:51:38.267175 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 10:51:38.303094 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 10:51:38.334566 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 10:51:38.334692 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 10:51:38.348682 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 10:51:38.361820 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 10:51:38.373129 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 10:51:38.392103 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 10:51:38.403639 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 10:51:38.418057 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 10:51:38.445273 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 10:51:38.452312 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 10:51:38.465644 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 10:51:38.477199 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 10:51:38.477318 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 10:51:38.495024 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 10:51:38.501200 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 10:51:38.512811 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 10:51:38.524882 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 10:51:38.536687 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 10:51:38.549164 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 10:51:38.560875 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 10:51:38.573587 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 10:51:38.584964 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 10:51:38.597269 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 10:51:38.607106 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 10:51:38.607226 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 10:51:38.621921 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 10:51:38.628028 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 10:51:38.639740 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 10:51:38.639806 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 10:51:38.652469 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 10:51:38.652587 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 10:51:38.670848 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 10:51:38.670991 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 10:51:38.683584 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 10:51:38.683678 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 10:51:38.697269 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 10:51:38.697365 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 10:51:38.729139 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 10:51:38.747073 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 10:51:38.793024 ignition[1117]: INFO : Ignition 2.20.0
Jan 29 10:51:38.793024 ignition[1117]: INFO : Stage: umount
Jan 29 10:51:38.793024 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:38.793024 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:38.793024 ignition[1117]: INFO : umount: umount passed
Jan 29 10:51:38.793024 ignition[1117]: INFO : Ignition finished successfully
Jan 29 10:51:38.754488 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 10:51:38.754638 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 10:51:38.767331 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 10:51:38.767442 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 10:51:38.793284 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 10:51:38.793393 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 10:51:38.803809 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 10:51:38.804090 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 10:51:38.813784 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 10:51:38.813840 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 10:51:38.825525 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 10:51:38.825574 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 10:51:38.837988 systemd[1]: Stopped target network.target - Network.
Jan 29 10:51:38.854515 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 10:51:38.854590 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 10:51:38.866016 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 10:51:38.878820 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 10:51:38.882885 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 10:51:38.893580 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 10:51:38.904991 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 10:51:38.916695 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 10:51:38.916743 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 10:51:38.923374 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 10:51:38.923423 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 10:51:38.935006 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 10:51:38.935062 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 10:51:38.947129 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 10:51:38.947183 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 10:51:38.958726 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 10:51:38.969757 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 10:51:39.196644 kernel: hv_netvsc 000d3ac4-e1dd-000d-3ac4-e1dd000d3ac4 eth0: Data path switched from VF: enP17210s1 Jan 29 10:51:38.974665 systemd-networkd[874]: eth0: DHCPv6 lease lost Jan 29 10:51:38.982703 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 10:51:38.983366 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 10:51:38.983532 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 10:51:38.995186 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 10:51:38.995276 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 10:51:39.005692 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 10:51:39.005755 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 10:51:39.028276 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 10:51:39.040684 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 10:51:39.040766 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 10:51:39.053744 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 10:51:39.069568 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 10:51:39.069663 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 10:51:39.092189 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 10:51:39.092373 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 10:51:39.126289 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 10:51:39.126372 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 10:51:39.136973 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 10:51:39.137030 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 10:51:39.148621 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 10:51:39.148684 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 10:51:39.165160 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 10:51:39.165228 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 10:51:39.190715 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 10:51:39.190787 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:51:39.225128 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 10:51:39.240493 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 10:51:39.240568 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:51:39.256216 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 10:51:39.256272 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 10:51:39.271005 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 10:51:39.271066 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 10:51:39.284255 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 10:51:39.284313 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 10:51:39.296684 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 10:51:39.296738 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:51:39.316365 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 10:51:39.316423 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:51:39.330292 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 29 10:51:39.330340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 10:51:39.342474 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 10:51:39.570901 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Jan 29 10:51:39.342599 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 10:51:39.353244 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 10:51:39.353346 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 10:51:39.416913 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 10:51:39.417227 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 10:51:39.427370 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 10:51:39.440226 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 10:51:39.440291 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 10:51:39.472097 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 10:51:39.491769 systemd[1]: Switching root. 
Jan 29 10:51:39.623843 systemd-journald[218]: Journal stopped Jan 29 10:51:28.311759 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 29 10:51:28.311780 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025 Jan 29 10:51:28.311788 kernel: KASLR enabled Jan 29 10:51:28.311794 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 29 10:51:28.311801 kernel: printk: bootconsole [pl11] enabled Jan 29 10:51:28.311806 kernel: efi: EFI v2.7 by EDK II Jan 29 10:51:28.311813 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f20e698 RNG=0x3fd5f998 MEMRESERVE=0x3e477598 Jan 29 10:51:28.311819 kernel: random: crng init done Jan 29 10:51:28.311825 kernel: secureboot: Secure boot disabled Jan 29 10:51:28.311830 kernel: ACPI: Early table checksum verification disabled Jan 29 10:51:28.311836 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 29 10:51:28.311842 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 10:51:28.311847 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 10:51:28.311855 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 29 10:51:28.311862 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 10:51:28.311868 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 10:51:28.311874 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 10:51:28.311881 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 10:51:28.311888 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 
10:51:28.311893 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 10:51:28.311899 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 29 10:51:28.311905 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 10:51:28.311911 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 29 10:51:28.311917 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 29 10:51:28.311923 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 29 10:51:28.311929 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 29 10:51:28.311935 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 29 10:51:28.311941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 29 10:51:28.311949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 29 10:51:28.311955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 29 10:51:28.311961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 29 10:51:28.311967 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 29 10:51:28.311973 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 29 10:51:28.311979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 29 10:51:28.311985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 29 10:51:28.311991 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Jan 29 10:51:28.311997 kernel: Zone ranges: Jan 29 10:51:28.312003 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 29 10:51:28.312009 kernel: DMA32 empty Jan 29 10:51:28.312015 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 29 10:51:28.312025 kernel: Movable zone start for each node Jan 29 10:51:28.312032 kernel: Early memory node ranges Jan 29 10:51:28.312049 kernel: node 0: [mem 
0x0000000000000000-0x00000000007fffff] Jan 29 10:51:28.312056 kernel: node 0: [mem 0x0000000000824000-0x000000003e45ffff] Jan 29 10:51:28.312063 kernel: node 0: [mem 0x000000003e460000-0x000000003e46ffff] Jan 29 10:51:28.312071 kernel: node 0: [mem 0x000000003e470000-0x000000003e54ffff] Jan 29 10:51:28.312078 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 29 10:51:28.312084 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 29 10:51:28.312090 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 29 10:51:28.312097 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 29 10:51:28.312103 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 29 10:51:28.312109 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 29 10:51:28.312116 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 29 10:51:28.312122 kernel: psci: probing for conduit method from ACPI. Jan 29 10:51:28.312128 kernel: psci: PSCIv1.1 detected in firmware. Jan 29 10:51:28.312135 kernel: psci: Using standard PSCI v0.2 function IDs Jan 29 10:51:28.312141 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 29 10:51:28.312149 kernel: psci: SMC Calling Convention v1.4 Jan 29 10:51:28.312155 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 29 10:51:28.312161 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 29 10:51:28.312168 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 29 10:51:28.312174 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 29 10:51:28.312181 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 29 10:51:28.312187 kernel: Detected PIPT I-cache on CPU0 Jan 29 10:51:28.312193 kernel: CPU features: detected: GIC system register CPU interface Jan 29 10:51:28.312200 kernel: CPU features: detected: Hardware dirty bit management Jan 29 10:51:28.312206 kernel: CPU features: detected: Spectre-BHB Jan 29 10:51:28.312212 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 29 10:51:28.312220 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 29 10:51:28.312227 kernel: CPU features: detected: ARM erratum 1418040 Jan 29 10:51:28.312233 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 29 10:51:28.312239 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 29 10:51:28.312246 kernel: alternatives: applying boot alternatives Jan 29 10:51:28.312253 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346 Jan 29 10:51:28.312260 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jan 29 10:51:28.312267 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 10:51:28.312273 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 10:51:28.312279 kernel: Fallback order for Node 0: 0 Jan 29 10:51:28.312286 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 29 10:51:28.312294 kernel: Policy zone: Normal Jan 29 10:51:28.312300 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 10:51:28.312306 kernel: software IO TLB: area num 2. Jan 29 10:51:28.312313 kernel: software IO TLB: mapped [mem 0x000000003a460000-0x000000003e460000] (64MB) Jan 29 10:51:28.312319 kernel: Memory: 3982052K/4194160K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 212108K reserved, 0K cma-reserved) Jan 29 10:51:28.312326 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 10:51:28.312332 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 10:51:28.312339 kernel: rcu: RCU event tracing is enabled. Jan 29 10:51:28.312346 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 10:51:28.312352 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 10:51:28.312359 kernel: Tracing variant of Tasks RCU enabled. Jan 29 10:51:28.312367 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 29 10:51:28.312373 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 10:51:28.312380 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 29 10:51:28.312386 kernel: GICv3: 960 SPIs implemented Jan 29 10:51:28.312392 kernel: GICv3: 0 Extended SPIs implemented Jan 29 10:51:28.312398 kernel: Root IRQ handler: gic_handle_irq Jan 29 10:51:28.312405 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 29 10:51:28.312411 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 29 10:51:28.312417 kernel: ITS: No ITS available, not enabling LPIs Jan 29 10:51:28.312424 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 10:51:28.312430 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 10:51:28.312436 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 29 10:51:28.312445 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 29 10:51:28.312451 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 29 10:51:28.312458 kernel: Console: colour dummy device 80x25 Jan 29 10:51:28.312465 kernel: printk: console [tty1] enabled Jan 29 10:51:28.312471 kernel: ACPI: Core revision 20230628 Jan 29 10:51:28.312478 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 29 10:51:28.312484 kernel: pid_max: default: 32768 minimum: 301 Jan 29 10:51:28.312491 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 10:51:28.312497 kernel: landlock: Up and running. Jan 29 10:51:28.312505 kernel: SELinux: Initializing. 
Jan 29 10:51:28.312512 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 10:51:28.312518 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 10:51:28.312525 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 10:51:28.312532 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 10:51:28.312538 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jan 29 10:51:28.312545 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Jan 29 10:51:28.312558 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 29 10:51:28.312565 kernel: rcu: Hierarchical SRCU implementation. Jan 29 10:51:28.312571 kernel: rcu: Max phase no-delay instances is 400. Jan 29 10:51:28.312578 kernel: Remapping and enabling EFI services. Jan 29 10:51:28.312585 kernel: smp: Bringing up secondary CPUs ... Jan 29 10:51:28.312593 kernel: Detected PIPT I-cache on CPU1 Jan 29 10:51:28.312600 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 29 10:51:28.312607 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 10:51:28.312614 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 29 10:51:28.312621 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 10:51:28.312629 kernel: SMP: Total of 2 processors activated. 
Jan 29 10:51:28.312636 kernel: CPU features: detected: 32-bit EL0 Support Jan 29 10:51:28.312643 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 29 10:51:28.312650 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 29 10:51:28.312657 kernel: CPU features: detected: CRC32 instructions Jan 29 10:51:28.312664 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 29 10:51:28.312671 kernel: CPU features: detected: LSE atomic instructions Jan 29 10:51:28.312678 kernel: CPU features: detected: Privileged Access Never Jan 29 10:51:28.312685 kernel: CPU: All CPU(s) started at EL1 Jan 29 10:51:28.312693 kernel: alternatives: applying system-wide alternatives Jan 29 10:51:28.312700 kernel: devtmpfs: initialized Jan 29 10:51:28.312707 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 10:51:28.312714 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 10:51:28.312721 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 10:51:28.312728 kernel: SMBIOS 3.1.0 present. 
Jan 29 10:51:28.312735 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 29 10:51:28.312742 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 10:51:28.312749 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 29 10:51:28.312757 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 29 10:51:28.312764 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 29 10:51:28.312771 kernel: audit: initializing netlink subsys (disabled) Jan 29 10:51:28.312778 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1 Jan 29 10:51:28.312785 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 10:51:28.312792 kernel: cpuidle: using governor menu Jan 29 10:51:28.312799 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 29 10:51:28.312806 kernel: ASID allocator initialised with 32768 entries Jan 29 10:51:28.312813 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 10:51:28.312821 kernel: Serial: AMBA PL011 UART driver Jan 29 10:51:28.312828 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 29 10:51:28.312834 kernel: Modules: 0 pages in range for non-PLT usage Jan 29 10:51:28.312841 kernel: Modules: 508880 pages in range for PLT usage Jan 29 10:51:28.312848 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 10:51:28.312855 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 10:51:28.312862 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 29 10:51:28.312869 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 29 10:51:28.312876 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 10:51:28.312884 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 10:51:28.312891 kernel: HugeTLB: 
registered 64.0 KiB page size, pre-allocated 0 pages Jan 29 10:51:28.312898 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 29 10:51:28.312905 kernel: ACPI: Added _OSI(Module Device) Jan 29 10:51:28.312912 kernel: ACPI: Added _OSI(Processor Device) Jan 29 10:51:28.312919 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 10:51:28.312926 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 10:51:28.312932 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 10:51:28.312939 kernel: ACPI: Interpreter enabled Jan 29 10:51:28.312948 kernel: ACPI: Using GIC for interrupt routing Jan 29 10:51:28.312955 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 29 10:51:28.312962 kernel: printk: console [ttyAMA0] enabled Jan 29 10:51:28.312968 kernel: printk: bootconsole [pl11] disabled Jan 29 10:51:28.312976 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 29 10:51:28.312982 kernel: iommu: Default domain type: Translated Jan 29 10:51:28.312989 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 29 10:51:28.312996 kernel: efivars: Registered efivars operations Jan 29 10:51:28.313003 kernel: vgaarb: loaded Jan 29 10:51:28.313011 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 29 10:51:28.313018 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 10:51:28.313025 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 10:51:28.313032 kernel: pnp: PnP ACPI init Jan 29 10:51:28.313047 kernel: pnp: PnP ACPI: found 0 devices Jan 29 10:51:28.313054 kernel: NET: Registered PF_INET protocol family Jan 29 10:51:28.313061 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 10:51:28.313068 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 10:51:28.313075 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 
10:51:28.313085 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 10:51:28.313092 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 10:51:28.313099 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 10:51:28.313106 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 10:51:28.313113 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 10:51:28.313120 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 10:51:28.313127 kernel: PCI: CLS 0 bytes, default 64 Jan 29 10:51:28.313134 kernel: kvm [1]: HYP mode not available Jan 29 10:51:28.313141 kernel: Initialise system trusted keyrings Jan 29 10:51:28.313151 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 10:51:28.313159 kernel: Key type asymmetric registered Jan 29 10:51:28.313166 kernel: Asymmetric key parser 'x509' registered Jan 29 10:51:28.313172 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 29 10:51:28.313180 kernel: io scheduler mq-deadline registered Jan 29 10:51:28.313187 kernel: io scheduler kyber registered Jan 29 10:51:28.313193 kernel: io scheduler bfq registered Jan 29 10:51:28.313200 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 10:51:28.313207 kernel: thunder_xcv, ver 1.0 Jan 29 10:51:28.313216 kernel: thunder_bgx, ver 1.0 Jan 29 10:51:28.313223 kernel: nicpf, ver 1.0 Jan 29 10:51:28.313230 kernel: nicvf, ver 1.0 Jan 29 10:51:28.313367 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 10:51:28.313440 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T10:51:27 UTC (1738147887) Jan 29 10:51:28.313450 kernel: efifb: probing for efifb Jan 29 10:51:28.313458 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 29 10:51:28.313465 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 29 10:51:28.313474 kernel: efifb: scrolling: 
redraw Jan 29 10:51:28.313481 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 29 10:51:28.313488 kernel: Console: switching to colour frame buffer device 128x48 Jan 29 10:51:28.313495 kernel: fb0: EFI VGA frame buffer device Jan 29 10:51:28.313502 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 29 10:51:28.313509 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 10:51:28.313516 kernel: No ACPI PMU IRQ for CPU0 Jan 29 10:51:28.313522 kernel: No ACPI PMU IRQ for CPU1 Jan 29 10:51:28.313529 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jan 29 10:51:28.313538 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 10:51:28.313545 kernel: watchdog: Hard watchdog permanently disabled Jan 29 10:51:28.313552 kernel: NET: Registered PF_INET6 protocol family Jan 29 10:51:28.313559 kernel: Segment Routing with IPv6 Jan 29 10:51:28.313566 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 10:51:28.313573 kernel: NET: Registered PF_PACKET protocol family Jan 29 10:51:28.313580 kernel: Key type dns_resolver registered Jan 29 10:51:28.313586 kernel: registered taskstats version 1 Jan 29 10:51:28.313593 kernel: Loading compiled-in X.509 certificates Jan 29 10:51:28.313602 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a' Jan 29 10:51:28.313609 kernel: Key type .fscrypt registered Jan 29 10:51:28.313615 kernel: Key type fscrypt-provisioning registered Jan 29 10:51:28.313622 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 29 10:51:28.313629 kernel: ima: Allocated hash algorithm: sha1 Jan 29 10:51:28.313636 kernel: ima: No architecture policies found Jan 29 10:51:28.313643 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 10:51:28.313650 kernel: clk: Disabling unused clocks Jan 29 10:51:28.313657 kernel: Freeing unused kernel memory: 39936K Jan 29 10:51:28.313666 kernel: Run /init as init process Jan 29 10:51:28.313673 kernel: with arguments: Jan 29 10:51:28.313680 kernel: /init Jan 29 10:51:28.313686 kernel: with environment: Jan 29 10:51:28.313693 kernel: HOME=/ Jan 29 10:51:28.313700 kernel: TERM=linux Jan 29 10:51:28.313707 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 10:51:28.313716 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 10:51:28.313727 systemd[1]: Detected virtualization microsoft. Jan 29 10:51:28.313734 systemd[1]: Detected architecture arm64. Jan 29 10:51:28.313742 systemd[1]: Running in initrd. Jan 29 10:51:28.313749 systemd[1]: No hostname configured, using default hostname. Jan 29 10:51:28.313756 systemd[1]: Hostname set to . Jan 29 10:51:28.313764 systemd[1]: Initializing machine ID from random generator. Jan 29 10:51:28.313771 systemd[1]: Queued start job for default target initrd.target. Jan 29 10:51:28.313779 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 10:51:28.313788 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 10:51:28.313796 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jan 29 10:51:28.313804 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 10:51:28.313812 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 10:51:28.313819 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 10:51:28.313829 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 10:51:28.313838 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 10:51:28.313846 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 10:51:28.313853 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 10:51:28.313861 systemd[1]: Reached target paths.target - Path Units. Jan 29 10:51:28.313869 systemd[1]: Reached target slices.target - Slice Units. Jan 29 10:51:28.313877 systemd[1]: Reached target swap.target - Swaps. Jan 29 10:51:28.313884 systemd[1]: Reached target timers.target - Timer Units. Jan 29 10:51:28.313892 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 10:51:28.313899 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 10:51:28.313908 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 10:51:28.313916 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 10:51:28.313923 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 10:51:28.313931 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 10:51:28.313939 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 10:51:28.313946 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 29 10:51:28.313961 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 10:51:28.313969 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 10:51:28.313976 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 10:51:28.313985 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 10:51:28.313993 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 10:51:28.314000 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 10:51:28.314023 systemd-journald[218]: Collecting audit messages is disabled. Jan 29 10:51:28.314054 systemd-journald[218]: Journal started Jan 29 10:51:28.314076 systemd-journald[218]: Runtime Journal (/run/log/journal/0ff03d0a4c894b94b9507cd8c1a65b02) is 8.0M, max 78.5M, 70.5M free. Jan 29 10:51:28.336516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 10:51:28.337211 systemd-modules-load[219]: Inserted module 'overlay' Jan 29 10:51:28.349056 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 10:51:28.355636 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 10:51:28.367508 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 10:51:28.396128 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 10:51:28.396150 kernel: Bridge firewalling registered Jan 29 10:51:28.391571 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 10:51:28.394531 systemd-modules-load[219]: Inserted module 'br_netfilter' Jan 29 10:51:28.399872 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 10:51:28.410321 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 10:51:28.435301 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 10:51:28.450128 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 10:51:28.464107 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 10:51:28.490215 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 10:51:28.501057 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 10:51:28.514553 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 10:51:28.520724 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 10:51:28.543211 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 10:51:28.567540 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 10:51:28.583221 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 10:51:28.599413 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 10:51:28.620660 dracut-cmdline[252]: dracut-dracut-053 Jan 29 10:51:28.620501 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 29 10:51:28.642790 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 10:51:28.681617 systemd-resolved[257]: Positive Trust Anchors:
Jan 29 10:51:28.681639 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 10:51:28.681670 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 10:51:28.683752 systemd-resolved[257]: Defaulting to hostname 'linux'.
Jan 29 10:51:28.686092 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 10:51:28.694980 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 10:51:28.809055 kernel: SCSI subsystem initialized
Jan 29 10:51:28.816049 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 10:51:28.827049 kernel: iscsi: registered transport (tcp)
Jan 29 10:51:28.844622 kernel: iscsi: registered transport (qla4xxx)
Jan 29 10:51:28.844674 kernel: QLogic iSCSI HBA Driver
Jan 29 10:51:28.877352 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
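The dracut-cmdline entry above logs the full kernel command line, including repeated keys (`console=` appears twice) and bare flags (`flatcar.autologin`). For readers who want to pull individual parameters out of such a line, here is a minimal Python sketch; `parse_cmdline` is an illustrative helper, not part of dracut (which does this parsing in shell).

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {key: [values]}.

    Repeated keys (e.g. console=) accumulate all values in order;
    bare flags (e.g. flatcar.autologin) are stored with value None.
    """
    params: dict = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params.setdefault(key, []).append(value if sep else None)
    return params

params = parse_cmdline(
    "root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 flatcar.autologin"
)
# 'root' keeps everything after the first '=', so LABEL=ROOT survives intact,
# and both console= values are preserved in order.
```

Note that `str.partition` splits only at the first `=`, which is what keeps values like `root=LABEL=ROOT` intact.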
Jan 29 10:51:28.894272 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 10:51:28.926999 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 10:51:28.927053 kernel: device-mapper: uevent: version 1.0.3
Jan 29 10:51:28.933410 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 10:51:28.981061 kernel: raid6: neonx8 gen() 15769 MB/s
Jan 29 10:51:29.001048 kernel: raid6: neonx4 gen() 15788 MB/s
Jan 29 10:51:29.021049 kernel: raid6: neonx2 gen() 13207 MB/s
Jan 29 10:51:29.042046 kernel: raid6: neonx1 gen() 10526 MB/s
Jan 29 10:51:29.062044 kernel: raid6: int64x8 gen() 6795 MB/s
Jan 29 10:51:29.082043 kernel: raid6: int64x4 gen() 7353 MB/s
Jan 29 10:51:29.103046 kernel: raid6: int64x2 gen() 6112 MB/s
Jan 29 10:51:29.128147 kernel: raid6: int64x1 gen() 5059 MB/s
Jan 29 10:51:29.128160 kernel: raid6: using algorithm neonx4 gen() 15788 MB/s
Jan 29 10:51:29.152141 kernel: raid6: .... xor() 12437 MB/s, rmw enabled
Jan 29 10:51:29.152153 kernel: raid6: using neon recovery algorithm
Jan 29 10:51:29.164579 kernel: xor: measuring software checksum speed
Jan 29 10:51:29.164595 kernel: 8regs : 21653 MB/sec
Jan 29 10:51:29.168888 kernel: 32regs : 21681 MB/sec
Jan 29 10:51:29.172437 kernel: arm64_neon : 28041 MB/sec
Jan 29 10:51:29.176675 kernel: xor: using function: arm64_neon (28041 MB/sec)
Jan 29 10:51:29.226059 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 10:51:29.235292 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 10:51:29.253163 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 10:51:29.275690 systemd-udevd[440]: Using default interface naming scheme 'v255'.
Jan 29 10:51:29.281572 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 10:51:29.300235 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 10:51:29.312673 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation
Jan 29 10:51:29.334668 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 10:51:29.350166 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 10:51:29.383250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 10:51:29.408312 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 10:51:29.430295 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 10:51:29.440505 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 10:51:29.460850 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 10:51:29.471313 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 10:51:29.507437 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 10:51:29.558930 kernel: hv_vmbus: Vmbus version:5.3
Jan 29 10:51:29.558953 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 29 10:51:29.558963 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 29 10:51:29.558972 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 29 10:51:29.558982 kernel: hv_vmbus: registering driver hid_hyperv
Jan 29 10:51:29.558990 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 29 10:51:29.534938 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 10:51:29.535112 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:51:29.632559 kernel: hv_vmbus: registering driver hv_storvsc
Jan 29 10:51:29.632583 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 29 10:51:29.632592 kernel: hv_vmbus: registering driver hv_netvsc
Jan 29 10:51:29.632602 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 29 10:51:29.632769 kernel: PTP clock support registered
Jan 29 10:51:29.632780 kernel: scsi host0: storvsc_host_t
Jan 29 10:51:29.632892 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 29 10:51:29.632990 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 29 10:51:29.633101 kernel: scsi host1: storvsc_host_t
Jan 29 10:51:29.606318 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 10:51:29.643077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 10:51:29.643389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:51:29.652167 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:51:29.680350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:51:29.709437 kernel: hv_utils: Registering HyperV Utility Driver
Jan 29 10:51:29.709459 kernel: hv_vmbus: registering driver hv_utils
Jan 29 10:51:29.687333 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 10:51:29.738455 kernel: hv_utils: Heartbeat IC version 3.0
Jan 29 10:51:29.738494 kernel: hv_utils: Shutdown IC version 3.2
Jan 29 10:51:29.738530 kernel: hv_utils: TimeSync IC version 4.0
Jan 29 10:51:29.738539 kernel: hv_netvsc 000d3ac4-e1dd-000d-3ac4-e1dd000d3ac4 eth0: VF slot 1 added
Jan 29 10:51:29.709551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 10:51:29.703081 systemd-journald[218]: Time jumped backwards, rotating.
Jan 29 10:51:29.709669 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:51:29.682900 systemd-resolved[257]: Clock change detected. Flushing caches.
Jan 29 10:51:29.688104 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:51:29.732789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:51:29.753973 kernel: hv_vmbus: registering driver hv_pci
Jan 29 10:51:29.754001 kernel: hv_pci 2b09882e-433a-4613-87e5-4e9d39899b22: PCI VMBus probing: Using version 0x10004
Jan 29 10:51:29.864926 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 29 10:51:29.865056 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 10:51:29.865065 kernel: hv_pci 2b09882e-433a-4613-87e5-4e9d39899b22: PCI host bridge to bus 433a:00
Jan 29 10:51:29.865146 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 29 10:51:29.865236 kernel: pci_bus 433a:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 29 10:51:29.865328 kernel: pci_bus 433a:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 29 10:51:29.865403 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 29 10:51:29.865491 kernel: pci 433a:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 29 10:51:29.865583 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 29 10:51:29.865665 kernel: pci 433a:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 29 10:51:29.865745 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 29 10:51:29.865825 kernel: pci 433a:00:02.0: enabling Extended Tags
Jan 29 10:51:29.865925 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 29 10:51:29.866007 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 29 10:51:29.866090 kernel: pci 433a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 433a:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 29 10:51:29.866173 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 10:51:29.866181 kernel: pci_bus 433a:00: busn_res: [bus 00-ff] end is updated to 00
Jan 29 10:51:29.866255 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 29 10:51:29.866335 kernel: pci 433a:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 29 10:51:29.762447 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 10:51:29.848079 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:51:29.924495 kernel: mlx5_core 433a:00:02.0: enabling device (0000 -> 0002)
Jan 29 10:51:30.143389 kernel: mlx5_core 433a:00:02.0: firmware version: 16.30.1284
Jan 29 10:51:30.143514 kernel: hv_netvsc 000d3ac4-e1dd-000d-3ac4-e1dd000d3ac4 eth0: VF registering: eth1
Jan 29 10:51:30.143609 kernel: mlx5_core 433a:00:02.0 eth1: joined to eth0
Jan 29 10:51:30.143700 kernel: mlx5_core 433a:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 29 10:51:30.151895 kernel: mlx5_core 433a:00:02.0 enP17210s1: renamed from eth1
Jan 29 10:51:30.464826 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 29 10:51:30.539898 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (486)
Jan 29 10:51:30.553846 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 29 10:51:30.602808 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 29 10:51:30.644887 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (505)
Jan 29 10:51:30.658173 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 29 10:51:30.665810 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 29 10:51:30.699130 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 10:51:30.724883 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 10:51:30.732878 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 10:51:31.741099 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 10:51:31.741149 disk-uuid[604]: The operation has completed successfully.
Jan 29 10:51:31.798656 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 10:51:31.798768 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 10:51:31.839069 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 10:51:31.852758 sh[690]: Success
Jan 29 10:51:31.885933 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 10:51:32.079808 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 10:51:32.098997 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 10:51:32.105708 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 10:51:32.144943 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 29 10:51:32.144997 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:51:32.152371 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 10:51:32.157867 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 10:51:32.162424 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 10:51:32.497544 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 10:51:32.503202 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 10:51:32.523101 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 10:51:32.534980 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 10:51:32.567080 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:51:32.567146 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:51:32.567164 kernel: BTRFS info (device sda6): using free space tree
Jan 29 10:51:32.596387 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 10:51:32.603634 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 10:51:32.617027 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:51:32.625282 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 10:51:32.639198 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 10:51:32.670904 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 10:51:32.688028 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 10:51:32.717130 systemd-networkd[874]: lo: Link UP
Jan 29 10:51:32.717140 systemd-networkd[874]: lo: Gained carrier
Jan 29 10:51:32.718738 systemd-networkd[874]: Enumeration completed
Jan 29 10:51:32.719347 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:51:32.719350 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 10:51:32.724438 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 10:51:32.730769 systemd[1]: Reached target network.target - Network.
Jan 29 10:51:32.782882 kernel: mlx5_core 433a:00:02.0 enP17210s1: Link up
Jan 29 10:51:32.822878 kernel: hv_netvsc 000d3ac4-e1dd-000d-3ac4-e1dd000d3ac4 eth0: Data path switched to VF: enP17210s1
Jan 29 10:51:32.823450 systemd-networkd[874]: enP17210s1: Link UP
Jan 29 10:51:32.823714 systemd-networkd[874]: eth0: Link UP
Jan 29 10:51:32.824124 systemd-networkd[874]: eth0: Gained carrier
Jan 29 10:51:32.824134 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:51:32.832372 systemd-networkd[874]: enP17210s1: Gained carrier
Jan 29 10:51:32.858916 systemd-networkd[874]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 29 10:51:33.394107 ignition[849]: Ignition 2.20.0
Jan 29 10:51:33.394120 ignition[849]: Stage: fetch-offline
Jan 29 10:51:33.398538 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 10:51:33.394155 ignition[849]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:33.413997 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 10:51:33.394164 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:33.394260 ignition[849]: parsed url from cmdline: ""
Jan 29 10:51:33.394264 ignition[849]: no config URL provided
Jan 29 10:51:33.394269 ignition[849]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 10:51:33.394276 ignition[849]: no config at "/usr/lib/ignition/user.ign"
Jan 29 10:51:33.394281 ignition[849]: failed to fetch config: resource requires networking
Jan 29 10:51:33.394457 ignition[849]: Ignition finished successfully
Jan 29 10:51:33.428155 ignition[883]: Ignition 2.20.0
Jan 29 10:51:33.428162 ignition[883]: Stage: fetch
Jan 29 10:51:33.428343 ignition[883]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:33.428353 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:33.428454 ignition[883]: parsed url from cmdline: ""
Jan 29 10:51:33.428458 ignition[883]: no config URL provided
Jan 29 10:51:33.428462 ignition[883]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 10:51:33.428470 ignition[883]: no config at "/usr/lib/ignition/user.ign"
Jan 29 10:51:33.428500 ignition[883]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 29 10:51:33.516011 ignition[883]: GET result: OK
Jan 29 10:51:33.516077 ignition[883]: config has been read from IMDS userdata
Jan 29 10:51:33.516121 ignition[883]: parsing config with SHA512: e2bf46c1ec955248931f0818a1d6fc5bb09b31dd0cd18d78f2cf39175caa315f0af335d27eaf1594a43c98a3c55c577ca17b8bea4e66144ff0df44edbe2f11c6
Jan 29 10:51:33.521084 unknown[883]: fetched base config from "system"
Jan 29 10:51:33.521686 ignition[883]: fetch: fetch complete
Jan 29 10:51:33.521102 unknown[883]: fetched base config from "system"
Jan 29 10:51:33.521691 ignition[883]: fetch: fetch passed
Jan 29 10:51:33.521117 unknown[883]: fetched user config from "azure"
Jan 29 10:51:33.521743 ignition[883]: Ignition finished successfully
Jan 29 10:51:33.524430 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 10:51:33.566611 ignition[890]: Ignition 2.20.0
Jan 29 10:51:33.548148 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 10:51:33.566618 ignition[890]: Stage: kargs
Jan 29 10:51:33.574089 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 10:51:33.566878 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:33.566905 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:33.570419 ignition[890]: kargs: kargs passed
Jan 29 10:51:33.599077 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 10:51:33.570490 ignition[890]: Ignition finished successfully
Jan 29 10:51:33.623715 ignition[896]: Ignition 2.20.0
Jan 29 10:51:33.623722 ignition[896]: Stage: disks
Jan 29 10:51:33.627126 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 10:51:33.624206 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:33.634841 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 10:51:33.624217 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:33.643970 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 10:51:33.625457 ignition[896]: disks: disks passed
Jan 29 10:51:33.656297 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 10:51:33.625513 ignition[896]: Ignition finished successfully
Jan 29 10:51:33.666940 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 10:51:33.679124 systemd[1]: Reached target basic.target - Basic System.
Jan 29 10:51:33.704136 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 10:51:33.787297 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 29 10:51:33.805647 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 10:51:33.821079 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 10:51:33.877897 kernel: EXT4-fs (sda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 29 10:51:33.878536 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 10:51:33.887520 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 10:51:33.931940 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 10:51:33.942188 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 10:51:33.954110 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 10:51:33.964385 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 10:51:33.964423 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 10:51:33.979347 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 10:51:34.018297 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 10:51:34.045499 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (916)
Jan 29 10:51:34.045522 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:51:34.045532 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:51:34.039463 systemd-networkd[874]: eth0: Gained IPv6LL
Jan 29 10:51:34.055323 kernel: BTRFS info (device sda6): using free space tree
Jan 29 10:51:34.061991 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 10:51:34.063059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 10:51:34.282976 systemd-networkd[874]: enP17210s1: Gained IPv6LL
Jan 29 10:51:34.453880 coreos-metadata[918]: Jan 29 10:51:34.453 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 29 10:51:34.462357 coreos-metadata[918]: Jan 29 10:51:34.462 INFO Fetch successful
Jan 29 10:51:34.462357 coreos-metadata[918]: Jan 29 10:51:34.462 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 29 10:51:34.481021 coreos-metadata[918]: Jan 29 10:51:34.480 INFO Fetch successful
Jan 29 10:51:34.496959 coreos-metadata[918]: Jan 29 10:51:34.496 INFO wrote hostname ci-4186.1.0-a-2e829ed2e0 to /sysroot/etc/hostname
Jan 29 10:51:34.507020 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 10:51:34.755603 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 10:51:34.812428 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory
Jan 29 10:51:34.821196 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 10:51:34.829639 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 10:51:35.781881 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 10:51:35.797039 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 10:51:35.804170 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 10:51:35.828796 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 10:51:35.837941 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:51:35.856300 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
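The coreos-metadata entries above show the hostname being resolved from the Azure Instance Metadata Service at 169.254.169.254 (after checking supported versions via the wireserver at 168.63.129.16). Azure's IMDS requires a `Metadata: true` request header. The snippet below is a Python sketch of constructing such a request (it does not touch the network, and the real metadata agent is not written in Python).

```python
import urllib.request

# Same endpoint coreos-metadata logs above for the instance name.
IMDS_NAME_URL = (
    "http://169.254.169.254/metadata/instance/compute/name"
    "?api-version=2017-08-01&format=text"
)

def imds_request(url: str = IMDS_NAME_URL) -> urllib.request.Request:
    """Build an Azure IMDS request; IMDS rejects calls lacking the
    Metadata header."""
    return urllib.request.Request(url, headers={"Metadata": "true"})

req = imds_request()
# On an Azure VM, urllib.request.urlopen(req) would return the instance
# name as plain text; 169.254.169.254 is link-local and only reachable
# from inside the VM.
```

Note that 169.254.169.254 is a link-local address, which is why these fetches only succeed after eth0 gains carrier and a DHCPv4 lease earlier in the log.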
Jan 29 10:51:35.870938 ignition[1037]: INFO : Ignition 2.20.0
Jan 29 10:51:35.870938 ignition[1037]: INFO : Stage: mount
Jan 29 10:51:35.878948 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:35.878948 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:35.878948 ignition[1037]: INFO : mount: mount passed
Jan 29 10:51:35.878948 ignition[1037]: INFO : Ignition finished successfully
Jan 29 10:51:35.879068 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 10:51:35.911068 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 10:51:35.928919 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 10:51:35.953880 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1047)
Jan 29 10:51:35.967008 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 10:51:35.967048 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 10:51:35.971282 kernel: BTRFS info (device sda6): using free space tree
Jan 29 10:51:35.977874 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 10:51:35.980095 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 10:51:36.009962 ignition[1065]: INFO : Ignition 2.20.0
Jan 29 10:51:36.015361 ignition[1065]: INFO : Stage: files
Jan 29 10:51:36.015361 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:36.015361 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:36.015361 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 10:51:36.037368 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 10:51:36.037368 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 10:51:36.119255 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 10:51:36.126610 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 10:51:36.126610 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 10:51:36.119668 unknown[1065]: wrote ssh authorized keys file for user: core
Jan 29 10:51:36.149768 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 10:51:36.160302 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 10:51:36.312273 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 10:51:37.291943 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 10:51:37.303205 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 29 10:51:37.846632 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 29 10:51:38.039105 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 29 10:51:38.039105 ignition[1065]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 29 10:51:38.090825 ignition[1065]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 10:51:38.103928 ignition[1065]: INFO : files: files passed
Jan 29 10:51:38.103928 ignition[1065]: INFO : Ignition finished successfully
Jan 29 10:51:38.104260 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 10:51:38.142640 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 10:51:38.168056 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 10:51:38.196328 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 10:51:38.196443 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 10:51:38.241547 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 10:51:38.241547 initrd-setup-root-after-ignition[1093]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 10:51:38.270518 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 10:51:38.253055 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 10:51:38.267175 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 10:51:38.303094 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 10:51:38.334566 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 10:51:38.334692 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 10:51:38.348682 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 10:51:38.361820 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 10:51:38.373129 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 10:51:38.392103 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 10:51:38.403639 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 10:51:38.418057 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 10:51:38.445273 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 10:51:38.452312 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 10:51:38.465644 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 10:51:38.477199 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 10:51:38.477318 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 10:51:38.495024 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 10:51:38.501200 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 10:51:38.512811 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 10:51:38.524882 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 10:51:38.536687 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 10:51:38.549164 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 10:51:38.560875 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 10:51:38.573587 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 10:51:38.584964 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 10:51:38.597269 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 10:51:38.607106 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 10:51:38.607226 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 10:51:38.621921 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 10:51:38.628028 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 10:51:38.639740 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 10:51:38.639806 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 10:51:38.652469 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 10:51:38.652587 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 10:51:38.670848 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 10:51:38.670991 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 10:51:38.683584 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 10:51:38.683678 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 10:51:38.697269 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 10:51:38.697365 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 10:51:38.729139 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 10:51:38.747073 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 10:51:38.793024 ignition[1117]: INFO : Ignition 2.20.0
Jan 29 10:51:38.793024 ignition[1117]: INFO : Stage: umount
Jan 29 10:51:38.793024 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 10:51:38.793024 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 29 10:51:38.793024 ignition[1117]: INFO : umount: umount passed
Jan 29 10:51:38.793024 ignition[1117]: INFO : Ignition finished successfully
Jan 29 10:51:38.754488 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 10:51:38.754638 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 10:51:38.767331 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 10:51:38.767442 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 10:51:38.793284 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 10:51:38.793393 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 10:51:38.803809 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 10:51:38.804090 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 10:51:38.813784 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 10:51:38.813840 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 10:51:38.825525 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 10:51:38.825574 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 10:51:38.837988 systemd[1]: Stopped target network.target - Network.
Jan 29 10:51:38.854515 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 10:51:38.854590 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 10:51:38.866016 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 10:51:38.878820 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 10:51:38.882885 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 10:51:38.893580 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 10:51:38.904991 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 10:51:38.916695 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 10:51:38.916743 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 10:51:38.923374 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 10:51:38.923423 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 10:51:38.935006 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 10:51:38.935062 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 10:51:38.947129 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 10:51:38.947183 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 10:51:38.958726 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 10:51:38.969757 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 10:51:39.196644 kernel: hv_netvsc 000d3ac4-e1dd-000d-3ac4-e1dd000d3ac4 eth0: Data path switched from VF: enP17210s1
Jan 29 10:51:38.974665 systemd-networkd[874]: eth0: DHCPv6 lease lost
Jan 29 10:51:38.982703 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 10:51:38.983366 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 10:51:38.983532 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 10:51:38.995186 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 10:51:38.995276 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 10:51:39.005692 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 10:51:39.005755 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 10:51:39.028276 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 10:51:39.040684 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 10:51:39.040766 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 10:51:39.053744 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 10:51:39.069568 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 10:51:39.069663 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 10:51:39.092189 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 10:51:39.092373 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 10:51:39.126289 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 10:51:39.126372 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 10:51:39.136973 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 10:51:39.137030 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 10:51:39.148621 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 10:51:39.148684 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 10:51:39.165160 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 10:51:39.165228 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 10:51:39.190715 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 10:51:39.190787 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 10:51:39.225128 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 10:51:39.240493 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 10:51:39.240568 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:51:39.256216 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 10:51:39.256272 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 10:51:39.271005 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 10:51:39.271066 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 10:51:39.284255 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 10:51:39.284313 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 10:51:39.296684 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 10:51:39.296738 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 10:51:39.316365 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 10:51:39.316423 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 10:51:39.330292 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 10:51:39.330340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:51:39.342474 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 10:51:39.570901 systemd-journald[218]: Received SIGTERM from PID 1 (systemd).
Jan 29 10:51:39.342599 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 10:51:39.353244 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 10:51:39.353346 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 10:51:39.416913 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 10:51:39.417227 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 10:51:39.427370 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 10:51:39.440226 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 10:51:39.440291 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 10:51:39.472097 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 10:51:39.491769 systemd[1]: Switching root.
Jan 29 10:51:39.623843 systemd-journald[218]: Journal stopped
Jan 29 10:51:45.414246 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 10:51:45.414273 kernel: SELinux: policy capability open_perms=1
Jan 29 10:51:45.414283 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 10:51:45.414291 kernel: SELinux: policy capability always_check_network=0
Jan 29 10:51:45.414302 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 10:51:45.414310 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 10:51:45.414319 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 10:51:45.414327 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 10:51:45.414335 kernel: audit: type=1403 audit(1738147900.786:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 10:51:45.414345 systemd[1]: Successfully loaded SELinux policy in 127.289ms.
Jan 29 10:51:45.414357 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.179ms.
Jan 29 10:51:45.414367 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 10:51:45.414376 systemd[1]: Detected virtualization microsoft.
Jan 29 10:51:45.414385 systemd[1]: Detected architecture arm64.
Jan 29 10:51:45.414396 systemd[1]: Detected first boot.
Jan 29 10:51:45.414408 systemd[1]: Hostname set to .
Jan 29 10:51:45.414417 systemd[1]: Initializing machine ID from random generator.
Jan 29 10:51:45.414433 zram_generator::config[1158]: No configuration found.
Jan 29 10:51:45.414444 systemd[1]: Populated /etc with preset unit settings.
Jan 29 10:51:45.414454 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 10:51:45.414468 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 10:51:45.414479 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 10:51:45.414491 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 10:51:45.414500 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 10:51:45.414510 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 10:51:45.414519 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 10:51:45.414528 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 10:51:45.414538 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 10:51:45.414547 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 10:51:45.414558 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 10:51:45.414567 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 10:51:45.414576 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 10:51:45.414586 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 10:51:45.414595 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 10:51:45.414604 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 10:51:45.414615 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 10:51:45.414624 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 10:51:45.414635 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 10:51:45.414644 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 10:51:45.414653 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 10:51:45.414665 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 10:51:45.414675 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 10:51:45.414685 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 10:51:45.414694 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 10:51:45.414704 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 10:51:45.414715 systemd[1]: Reached target swap.target - Swaps.
Jan 29 10:51:45.414724 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 10:51:45.414734 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 10:51:45.414747 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 10:51:45.414760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 10:51:45.414770 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 10:51:45.414786 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 10:51:45.414797 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 10:51:45.414807 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 10:51:45.414816 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 10:51:45.414826 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 10:51:45.414837 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 10:51:45.414846 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 10:51:45.414990 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 10:51:45.415003 systemd[1]: Reached target machines.target - Containers.
Jan 29 10:51:45.415018 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 10:51:45.415029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 10:51:45.415039 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 10:51:45.415053 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 10:51:45.415063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 10:51:45.415073 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 10:51:45.415085 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 10:51:45.415095 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 10:51:45.415105 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 10:51:45.415115 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 10:51:45.415124 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 10:51:45.415134 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 10:51:45.415144 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 10:51:45.415153 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 10:51:45.415164 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 10:51:45.415195 systemd-journald[1238]: Collecting audit messages is disabled.
Jan 29 10:51:45.415220 systemd-journald[1238]: Journal started
Jan 29 10:51:45.415248 systemd-journald[1238]: Runtime Journal (/run/log/journal/9273f287d30c47388ce5ca579df061c2) is 8.0M, max 78.5M, 70.5M free.
Jan 29 10:51:44.176797 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 10:51:44.301758 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 10:51:44.302150 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 10:51:44.302465 systemd[1]: systemd-journald.service: Consumed 3.301s CPU time.
Jan 29 10:51:45.582875 kernel: loop: module loaded
Jan 29 10:51:45.582924 kernel: fuse: init (API version 7.39)
Jan 29 10:51:45.739971 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 10:51:45.756005 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 10:51:45.771340 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 10:51:45.791386 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 10:51:45.801565 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 10:51:45.801637 kernel: ACPI: bus type drm_connector registered
Jan 29 10:51:45.801651 systemd[1]: Stopped verity-setup.service.
Jan 29 10:51:45.822324 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 10:51:45.823133 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 10:51:45.829199 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 10:51:45.835654 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 10:51:45.841288 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 10:51:45.847678 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 10:51:45.854081 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 10:51:45.861886 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 10:51:45.870052 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 10:51:45.870209 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 10:51:45.878016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 10:51:45.878148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 10:51:45.886366 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 10:51:45.886501 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 10:51:45.893222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 10:51:45.893344 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 10:51:45.900389 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 10:51:45.900501 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 10:51:45.907399 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 10:51:45.907531 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 10:51:45.913914 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 10:51:45.920628 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 10:51:45.942026 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 10:51:45.949565 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 10:51:45.956092 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 10:51:45.956132 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 10:51:45.962982 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 10:51:45.971730 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 10:51:45.979214 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 10:51:45.984846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 10:51:46.072135 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 10:51:46.079548 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 10:51:46.087714 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 10:51:46.089010 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 10:51:46.095412 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 10:51:46.096476 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 10:51:46.105064 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 10:51:46.114373 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 10:51:46.123192 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 10:51:46.133484 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 10:51:46.141135 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 10:51:46.148706 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 10:51:46.158535 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 10:51:46.188963 systemd-journald[1238]: Time spent on flushing to /var/log/journal/9273f287d30c47388ce5ca579df061c2 is 1.648727s for 902 entries.
Jan 29 10:51:46.188963 systemd-journald[1238]: System Journal (/var/log/journal/9273f287d30c47388ce5ca579df061c2) is 11.8M, max 2.6G, 2.6G free.
Jan 29 10:51:50.855596 systemd-journald[1238]: Received client request to flush runtime journal.
Jan 29 10:51:50.855644 kernel: loop0: detected capacity change from 0 to 113552
Jan 29 10:51:50.855665 systemd-journald[1238]: /var/log/journal/9273f287d30c47388ce5ca579df061c2/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jan 29 10:51:50.855688 systemd-journald[1238]: Rotating system journal.
Jan 29 10:51:50.855713 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 10:51:50.855728 kernel: loop1: detected capacity change from 0 to 189592
Jan 29 10:51:50.855743 kernel: loop2: detected capacity change from 0 to 28752
Jan 29 10:51:46.207964 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 10:51:46.220032 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 10:51:46.229140 udevadm[1292]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 10:51:46.868998 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 10:51:46.950765 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 10:51:46.958566 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 10:51:46.968203 systemd-tmpfiles[1278]: ACLs are not supported, ignoring.
Jan 29 10:51:46.968213 systemd-tmpfiles[1278]: ACLs are not supported, ignoring.
Jan 29 10:51:46.975593 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 10:51:46.983385 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 10:51:47.612991 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 10:51:47.624103 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 10:51:50.857163 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 10:51:50.865168 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 10:51:50.878998 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 10:51:50.893157 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Jan 29 10:51:50.893442 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Jan 29 10:51:50.897558 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 10:51:50.949661 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 10:51:50.950344 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 10:51:52.853979 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 10:51:52.867027 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 10:51:52.888003 systemd-udevd[1323]: Using default interface naming scheme 'v255'.
Jan 29 10:51:52.894888 kernel: loop3: detected capacity change from 0 to 116784
Jan 29 10:51:54.168838 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 10:51:54.186081 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 10:51:54.229601 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 10:51:54.388877 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 10:51:54.430550 kernel: hv_vmbus: registering driver hyperv_fb
Jan 29 10:51:54.430655 kernel: hv_vmbus: registering driver hv_balloon
Jan 29 10:51:54.443196 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 29 10:51:54.443270 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 29 10:51:54.448057 kernel: Console: switching to colour dummy device 80x25
Jan 29 10:51:54.448118 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 29 10:51:54.448135 kernel: hv_balloon: Memory hot add disabled on ARM64
Jan 29 10:51:54.456124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:51:54.465864 kernel: Console: switching to colour frame buffer device 128x48
Jan 29 10:51:54.483416 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 10:51:54.515935 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 10:51:54.672947 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 10:51:54.674920 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:51:54.688067 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 10:51:55.093879 kernel: loop4: detected capacity change from 0 to 113552
Jan 29 10:51:55.103870 kernel: loop5: detected capacity change from 0 to 189592
Jan 29 10:51:55.115876 kernel: loop6: detected capacity change from 0 to 28752
Jan 29 10:51:55.125877 kernel: loop7: detected capacity change from 0 to 116784
Jan 29 10:51:55.130894 (sd-merge)[1382]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 29 10:51:55.131338 (sd-merge)[1382]: Merged extensions into '/usr'.
Jan 29 10:51:55.134479 systemd[1]: Reloading requested from client PID 1276 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 10:51:55.134606 systemd[1]: Reloading...
Jan 29 10:51:55.200896 zram_generator::config[1411]: No configuration found.
Jan 29 10:51:55.232403 systemd-networkd[1334]: lo: Link UP
Jan 29 10:51:55.232411 systemd-networkd[1334]: lo: Gained carrier
Jan 29 10:51:55.234402 systemd-networkd[1334]: Enumeration completed
Jan 29 10:51:55.234996 systemd-networkd[1334]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:51:55.235009 systemd-networkd[1334]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 10:51:55.293875 kernel: mlx5_core 433a:00:02.0 enP17210s1: Link up
Jan 29 10:51:55.321319 kernel: hv_netvsc 000d3ac4-e1dd-000d-3ac4-e1dd000d3ac4 eth0: Data path switched to VF: enP17210s1
Jan 29 10:51:55.322000 systemd-networkd[1334]: enP17210s1: Link UP
Jan 29 10:51:55.322104 systemd-networkd[1334]: eth0: Link UP
Jan 29 10:51:55.322107 systemd-networkd[1334]: eth0: Gained carrier
Jan 29 10:51:55.322121 systemd-networkd[1334]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 10:51:55.327258 systemd-networkd[1334]: enP17210s1: Gained carrier
Jan 29 10:51:55.333918 systemd-networkd[1334]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 29 10:51:55.562949 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1325)
Jan 29 10:51:55.574762 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 10:51:55.651895 systemd[1]: Reloading finished in 516 ms.
Jan 29 10:51:55.688431 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 10:51:55.694908 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 10:51:55.703375 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 10:51:55.722522 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 29 10:51:55.738059 systemd[1]: Starting ensure-sysext.service...
Jan 29 10:51:55.746052 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 10:51:55.755751 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 10:51:55.763261 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 10:51:55.770707 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 10:51:55.778997 systemd[1]: Reloading requested from client PID 1522 ('systemctl') (unit ensure-sysext.service)...
Jan 29 10:51:55.779014 systemd[1]: Reloading...
Jan 29 10:51:55.790137 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 10:51:55.790636 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 10:51:55.791381 systemd-tmpfiles[1526]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 10:51:55.791685 systemd-tmpfiles[1526]: ACLs are not supported, ignoring.
Jan 29 10:51:55.791787 systemd-tmpfiles[1526]: ACLs are not supported, ignoring.
Jan 29 10:51:55.795013 systemd-tmpfiles[1526]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 10:51:55.795132 systemd-tmpfiles[1526]: Skipping /boot
Jan 29 10:51:55.806199 systemd-tmpfiles[1526]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 10:51:55.806306 systemd-tmpfiles[1526]: Skipping /boot
Jan 29 10:51:55.855903 zram_generator::config[1562]: No configuration found.
Jan 29 10:51:55.957319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 10:51:56.035769 systemd[1]: Reloading finished in 256 ms.
Jan 29 10:51:56.060312 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 10:51:56.075475 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 10:51:56.085018 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 10:51:56.093530 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 10:51:56.103162 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 10:51:56.111168 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 10:51:56.122061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 10:51:56.124171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 10:51:56.133309 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 10:51:56.146001 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 10:51:56.151681 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 10:51:56.152749 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 10:51:56.160362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 10:51:56.163101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 10:51:56.169781 lvm[1523]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 10:51:56.171691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 10:51:56.172949 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 10:51:56.181609 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 10:51:56.182924 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 10:51:56.197340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 10:51:56.201129 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 10:51:56.209163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 10:51:56.224119 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 10:51:56.230408 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 10:51:56.232535 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 10:51:56.233927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 10:51:56.241729 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 10:51:56.241905 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 10:51:56.249413 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 10:51:56.249556 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 10:51:56.258019 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 10:51:56.269745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 10:51:56.276139 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 10:51:56.282954 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 10:51:56.291207 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 10:51:56.300115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 10:51:56.305662 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 10:51:56.305843 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 10:51:56.312075 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 10:51:56.312226 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 10:51:56.319535 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 10:51:56.319669 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 10:51:56.326108 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 10:51:56.326246 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 10:51:56.333500 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 10:51:56.333627 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 10:51:56.341967 systemd[1]: Finished ensure-sysext.service.
Jan 29 10:51:56.348692 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 10:51:56.348767 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 10:51:56.474826 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 10:51:56.482181 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 10:51:56.493045 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 10:51:56.503896 lvm[1649]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 10:51:56.539564 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 10:51:56.570519 systemd-resolved[1620]: Positive Trust Anchors:
Jan 29 10:51:56.570534 systemd-resolved[1620]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 10:51:56.570566 systemd-resolved[1620]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 10:51:56.579918 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 10:51:56.746987 systemd-networkd[1334]: enP17210s1: Gained IPv6LL
Jan 29 10:51:56.869058 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 10:51:56.986836 systemd-resolved[1620]: Using system hostname 'ci-4186.1.0-a-2e829ed2e0'.
Jan 29 10:51:56.988542 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 10:51:56.995079 systemd[1]: Reached target network.target - Network.
Jan 29 10:51:57.000102 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 10:51:57.259012 systemd-networkd[1334]: eth0: Gained IPv6LL
Jan 29 10:51:57.260728 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 10:51:57.268659 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 10:51:57.343612 augenrules[1673]: No rules
Jan 29 10:51:57.344420 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 10:51:57.344605 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 10:52:01.164100 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 10:52:01.172511 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 10:52:04.169253 ldconfig[1271]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 10:52:04.177902 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 10:52:04.190087 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 10:52:04.203235 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 10:52:04.210146 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 10:52:04.215960 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 10:52:04.224717 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 10:52:04.231678 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 10:52:04.237628 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 10:52:04.244601 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 10:52:04.251608 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 10:52:04.251644 systemd[1]: Reached target paths.target - Path Units.
Jan 29 10:52:04.256712 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 10:52:04.263039 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 10:52:04.270954 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 10:52:04.283515 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 10:52:04.289973 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 10:52:04.296401 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 10:52:04.302127 systemd[1]: Reached target basic.target - Basic System.
Jan 29 10:52:04.307246 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 10:52:04.307284 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 10:52:04.317966 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 29 10:52:04.327013 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 10:52:04.340081 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 10:52:04.348277 (chronyd)[1686]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 29 10:52:04.352077 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 10:52:04.358445 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 10:52:04.365246 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 10:52:04.369877 jq[1693]: false
Jan 29 10:52:04.370842 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 10:52:04.370893 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 29 10:52:04.378964 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 29 10:52:04.380301 chronyd[1697]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 29 10:52:04.383238 KVP[1695]: KVP starting; pid is:1695
Jan 29 10:52:04.385077 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 29 10:52:04.386727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:52:04.401183 KVP[1695]: KVP LIC Version: 3.1
Jan 29 10:52:04.403871 kernel: hv_utils: KVP IC version 4.0
Jan 29 10:52:04.404508 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 10:52:04.408980 chronyd[1697]: Timezone right/UTC failed leap second check, ignoring
Jan 29 10:52:04.410029 chronyd[1697]: Loaded seccomp filter (level 2)
Jan 29 10:52:04.411988 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 10:52:04.419388 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 10:52:04.428035 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 10:52:04.437011 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 10:52:04.448621 dbus-daemon[1689]: [system] SELinux support is enabled
Jan 29 10:52:04.456214 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 10:52:04.463172 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 10:52:04.463648 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 10:52:04.466048 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 10:52:04.488035 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 10:52:04.494719 extend-filesystems[1694]: Found loop4
Jan 29 10:52:04.507137 extend-filesystems[1694]: Found loop5
Jan 29 10:52:04.507137 extend-filesystems[1694]: Found loop6
Jan 29 10:52:04.507137 extend-filesystems[1694]: Found loop7
Jan 29 10:52:04.507137 extend-filesystems[1694]: Found sda
Jan 29 10:52:04.507137 extend-filesystems[1694]: Found sda1
Jan 29 10:52:04.507137 extend-filesystems[1694]: Found sda2
Jan 29 10:52:04.507137 extend-filesystems[1694]: Found sda3
Jan 29 10:52:04.507137 extend-filesystems[1694]: Found usr
Jan 29 10:52:04.507137 extend-filesystems[1694]: Found sda4
Jan 29 10:52:04.507137 extend-filesystems[1694]: Found sda6
Jan 29 10:52:04.507137 extend-filesystems[1694]: Found sda7
Jan 29 10:52:04.507137 extend-filesystems[1694]: Found sda9
Jan 29 10:52:04.507137 extend-filesystems[1694]: Checking size of /dev/sda9
Jan 29 10:52:04.497515 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 10:52:04.505419 systemd[1]: Started chronyd.service - NTP client/server.
Jan 29 10:52:04.581889 jq[1716]: true
Jan 29 10:52:04.526327 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 10:52:04.527684 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 10:52:04.529189 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 10:52:04.529348 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 10:52:04.582415 jq[1725]: true
Jan 29 10:52:04.544053 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 10:52:04.544226 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 10:52:04.583291 (ntainerd)[1726]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 10:52:04.593220 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 10:52:04.593274 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 10:52:04.601172 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 10:52:04.601196 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 10:52:04.823460 coreos-metadata[1688]: Jan 29 10:52:04.823 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 29 10:52:04.826176 coreos-metadata[1688]: Jan 29 10:52:04.826 INFO Fetch successful
Jan 29 10:52:04.826391 coreos-metadata[1688]: Jan 29 10:52:04.826 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 29 10:52:04.831945 coreos-metadata[1688]: Jan 29 10:52:04.831 INFO Fetch successful
Jan 29 10:52:04.833420 coreos-metadata[1688]: Jan 29 10:52:04.833 INFO Fetching http://168.63.129.16/machine/a3b27e58-c142-45a9-8a1c-e300939dffb9/55d051e1%2D796a%2D47f5%2D9ed6%2Da9f3947361e8.%5Fci%2D4186.1.0%2Da%2D2e829ed2e0?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 29 10:52:04.833956 coreos-metadata[1688]: Jan 29 10:52:04.833 INFO Fetch successful
Jan 29 10:52:04.834129 coreos-metadata[1688]: Jan 29 10:52:04.834 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 29 10:52:04.848067 coreos-metadata[1688]: Jan 29 10:52:04.847 INFO Fetch successful
Jan 29 10:52:04.880576 systemd-logind[1710]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 10:52:04.881231 systemd-logind[1710]: New seat seat0.
Jan 29 10:52:04.884502 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 29 10:52:04.895362 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 10:52:04.906925 update_engine[1714]: I20250129 10:52:04.906060 1714 main.cc:92] Flatcar Update Engine starting
Jan 29 10:52:04.908156 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 10:52:04.914206 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 10:52:04.919786 update_engine[1714]: I20250129 10:52:04.919715 1714 update_check_scheduler.cc:74] Next update check in 11m35s
Jan 29 10:52:04.933499 extend-filesystems[1694]: Old size kept for /dev/sda9
Jan 29 10:52:04.943947 extend-filesystems[1694]: Found sr0
Jan 29 10:52:04.942160 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 10:52:04.953579 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 10:52:04.953744 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 10:52:05.016876 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1760)
Jan 29 10:52:05.236899 tar[1724]: linux-arm64/helm
Jan 29 10:52:05.290159 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 10:52:05.374508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:52:05.383151 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 10:52:05.862386 kubelet[1831]: E0129 10:52:05.862335 1831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 10:52:05.864845 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 10:52:05.865001 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 10:52:05.944135 tar[1724]: linux-arm64/LICENSE
Jan 29 10:52:05.944263 tar[1724]: linux-arm64/README.md
Jan 29 10:52:05.953895 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 10:52:06.376071 locksmithd[1764]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 10:52:06.485392 sshd_keygen[1711]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 10:52:06.503436 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 10:52:06.515110 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 10:52:06.524135 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 29 10:52:06.530279 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 10:52:06.530520 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 10:52:06.543185 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 10:52:06.551466 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 29 10:52:06.565003 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 10:52:06.578106 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 10:52:06.584575 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 29 10:52:06.591347 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 10:52:07.103579 containerd[1726]: time="2025-01-29T10:52:07.103481700Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 10:52:07.126460 containerd[1726]: time="2025-01-29T10:52:07.126406180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 10:52:07.127913 containerd[1726]: time="2025-01-29T10:52:07.127790660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 10:52:07.127913 containerd[1726]: time="2025-01-29T10:52:07.127826500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 10:52:07.127913 containerd[1726]: time="2025-01-29T10:52:07.127844060Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 10:52:07.403112 containerd[1726]: time="2025-01-29T10:52:07.403041940Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 10:52:07.403112 containerd[1726]: time="2025-01-29T10:52:07.403089700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 10:52:07.448152 containerd[1726]: time="2025-01-29T10:52:07.448105380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 10:52:07.448152 containerd[1726]: time="2025-01-29T10:52:07.448148780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 10:52:07.448402 containerd[1726]: time="2025-01-29T10:52:07.448376020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 10:52:07.448402 containerd[1726]: time="2025-01-29T10:52:07.448399620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 10:52:07.448460 containerd[1726]: time="2025-01-29T10:52:07.448413460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 10:52:07.448460 containerd[1726]: time="2025-01-29T10:52:07.448422700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 10:52:07.448522 containerd[1726]: time="2025-01-29T10:52:07.448502980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 10:52:07.448722 containerd[1726]: time="2025-01-29T10:52:07.448701140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 10:52:07.448825 containerd[1726]: time="2025-01-29T10:52:07.448804260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 10:52:07.448877 containerd[1726]: time="2025-01-29T10:52:07.448823780Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 10:52:07.448950 containerd[1726]: time="2025-01-29T10:52:07.448929140Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 10:52:07.448998 containerd[1726]: time="2025-01-29T10:52:07.448981900Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 10:52:07.450012 bash[1751]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 10:52:07.451453 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 10:52:07.459505 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 10:52:07.853900 containerd[1726]: time="2025-01-29T10:52:07.853719420Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 10:52:07.853900 containerd[1726]: time="2025-01-29T10:52:07.853806660Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 10:52:07.853900 containerd[1726]: time="2025-01-29T10:52:07.853832860Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 10:52:07.853900 containerd[1726]: time="2025-01-29T10:52:07.853885060Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 10:52:07.853900 containerd[1726]: time="2025-01-29T10:52:07.853911980Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 10:52:07.854231 containerd[1726]: time="2025-01-29T10:52:07.854095060Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 10:52:07.854386 containerd[1726]: time="2025-01-29T10:52:07.854358940Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 10:52:07.854495 containerd[1726]: time="2025-01-29T10:52:07.854473820Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 10:52:07.854495 containerd[1726]: time="2025-01-29T10:52:07.854498220Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 10:52:07.854549 containerd[1726]: time="2025-01-29T10:52:07.854513940Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 10:52:07.854549 containerd[1726]: time="2025-01-29T10:52:07.854526740Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 10:52:07.854549 containerd[1726]: time="2025-01-29T10:52:07.854538580Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 10:52:07.854607 containerd[1726]: time="2025-01-29T10:52:07.854550820Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 10:52:07.854607 containerd[1726]: time="2025-01-29T10:52:07.854563220Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 10:52:07.854607 containerd[1726]: time="2025-01-29T10:52:07.854577420Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 10:52:07.854607 containerd[1726]: time="2025-01-29T10:52:07.854591980Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 10:52:07.854674 containerd[1726]: time="2025-01-29T10:52:07.854611100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 10:52:07.854674 containerd[1726]: time="2025-01-29T10:52:07.854622700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 10:52:07.854674 containerd[1726]: time="2025-01-29T10:52:07.854641740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854674 containerd[1726]: time="2025-01-29T10:52:07.854655500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854674 containerd[1726]: time="2025-01-29T10:52:07.854670500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854763 containerd[1726]: time="2025-01-29T10:52:07.854682940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854763 containerd[1726]: time="2025-01-29T10:52:07.854693580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854763 containerd[1726]: time="2025-01-29T10:52:07.854709460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854763 containerd[1726]: time="2025-01-29T10:52:07.854720340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854763 containerd[1726]: time="2025-01-29T10:52:07.854731820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854763 containerd[1726]: time="2025-01-29T10:52:07.854743340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854763 containerd[1726]: time="2025-01-29T10:52:07.854757980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854898 containerd[1726]: time="2025-01-29T10:52:07.854769300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854898 containerd[1726]: time="2025-01-29T10:52:07.854780540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854898 containerd[1726]: time="2025-01-29T10:52:07.854792260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854898 containerd[1726]: time="2025-01-29T10:52:07.854806860Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 10:52:07.854898 containerd[1726]: time="2025-01-29T10:52:07.854825980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854898 containerd[1726]: time="2025-01-29T10:52:07.854840500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.854898 containerd[1726]: time="2025-01-29T10:52:07.854863980Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 10:52:07.855137 containerd[1726]: time="2025-01-29T10:52:07.854926580Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 10:52:07.855137 containerd[1726]: time="2025-01-29T10:52:07.854945820Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 10:52:07.855137 containerd[1726]: time="2025-01-29T10:52:07.854955380Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 10:52:07.855137 containerd[1726]: time="2025-01-29T10:52:07.854967740Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 10:52:07.855137 containerd[1726]: time="2025-01-29T10:52:07.854977980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 10:52:07.855137 containerd[1726]: time="2025-01-29T10:52:07.854994660Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 10:52:07.855137 containerd[1726]: time="2025-01-29T10:52:07.855005420Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 10:52:07.855137 containerd[1726]: time="2025-01-29T10:52:07.855015260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Jan 29 10:52:07.855320 containerd[1726]: time="2025-01-29T10:52:07.855281900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 10:52:07.855439 containerd[1726]: time="2025-01-29T10:52:07.855326180Z" level=info msg="Connect containerd service" Jan 29 10:52:07.855439 containerd[1726]: time="2025-01-29T10:52:07.855354300Z" level=info msg="using legacy CRI server" Jan 29 10:52:07.855439 containerd[1726]: time="2025-01-29T10:52:07.855360740Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 10:52:07.855514 containerd[1726]: time="2025-01-29T10:52:07.855467500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 10:52:07.856060 containerd[1726]: time="2025-01-29T10:52:07.856032740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 10:52:07.857875 containerd[1726]: time="2025-01-29T10:52:07.856196620Z" level=info msg="Start subscribing containerd event" Jan 29 10:52:07.857875 containerd[1726]: time="2025-01-29T10:52:07.856243020Z" level=info msg="Start recovering state" Jan 29 10:52:07.857875 containerd[1726]: time="2025-01-29T10:52:07.856306660Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 29 10:52:07.857875 containerd[1726]: time="2025-01-29T10:52:07.856311780Z" level=info msg="Start event monitor" Jan 29 10:52:07.857875 containerd[1726]: time="2025-01-29T10:52:07.856336660Z" level=info msg="Start snapshots syncer" Jan 29 10:52:07.857875 containerd[1726]: time="2025-01-29T10:52:07.856345460Z" level=info msg="Start cni network conf syncer for default" Jan 29 10:52:07.857875 containerd[1726]: time="2025-01-29T10:52:07.856353500Z" level=info msg="Start streaming server" Jan 29 10:52:07.857875 containerd[1726]: time="2025-01-29T10:52:07.856345420Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 10:52:07.857875 containerd[1726]: time="2025-01-29T10:52:07.856432780Z" level=info msg="containerd successfully booted in 0.873205s" Jan 29 10:52:07.856583 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 10:52:07.863621 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 10:52:07.880103 systemd[1]: Startup finished in 668ms (kernel) + 12.971s (initrd) + 27.219s (userspace) = 40.860s. Jan 29 10:52:08.519363 agetty[1873]: failed to open credentials directory Jan 29 10:52:08.519533 agetty[1874]: failed to open credentials directory Jan 29 10:52:09.625171 login[1873]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:09.627629 login[1874]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 29 10:52:09.638143 systemd-logind[1710]: New session 2 of user core. Jan 29 10:52:09.639952 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 10:52:09.644095 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 10:52:09.647912 systemd-logind[1710]: New session 1 of user core. Jan 29 10:52:09.656591 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 29 10:52:09.668222 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 10:52:09.871764 (systemd)[1886]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 10:52:10.690759 systemd[1886]: Queued start job for default target default.target. Jan 29 10:52:10.701789 systemd[1886]: Created slice app.slice - User Application Slice. Jan 29 10:52:10.701822 systemd[1886]: Reached target paths.target - Paths. Jan 29 10:52:10.701834 systemd[1886]: Reached target timers.target - Timers. Jan 29 10:52:10.703178 systemd[1886]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 10:52:10.712993 systemd[1886]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 10:52:10.713559 systemd[1886]: Reached target sockets.target - Sockets. Jan 29 10:52:10.713574 systemd[1886]: Reached target basic.target - Basic System. Jan 29 10:52:10.713613 systemd[1886]: Reached target default.target - Main User Target. Jan 29 10:52:10.713639 systemd[1886]: Startup finished in 836ms. Jan 29 10:52:10.713720 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 10:52:10.723095 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 10:52:10.723770 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 29 10:52:15.783207 waagent[1870]: 2025-01-29T10:52:15.783115Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 29 10:52:15.789325 waagent[1870]: 2025-01-29T10:52:15.789252Z INFO Daemon Daemon OS: flatcar 4186.1.0 Jan 29 10:52:15.794526 waagent[1870]: 2025-01-29T10:52:15.794469Z INFO Daemon Daemon Python: 3.11.10 Jan 29 10:52:15.799918 waagent[1870]: 2025-01-29T10:52:15.799837Z INFO Daemon Daemon Run daemon Jan 29 10:52:15.804694 waagent[1870]: 2025-01-29T10:52:15.804640Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4186.1.0' Jan 29 10:52:15.814690 waagent[1870]: 2025-01-29T10:52:15.814627Z INFO Daemon Daemon Using waagent for provisioning Jan 29 10:52:15.820541 waagent[1870]: 2025-01-29T10:52:15.820495Z INFO Daemon Daemon Activate resource disk Jan 29 10:52:15.825856 waagent[1870]: 2025-01-29T10:52:15.825801Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 29 10:52:15.842145 waagent[1870]: 2025-01-29T10:52:15.842082Z INFO Daemon Daemon Found device: None Jan 29 10:52:15.847389 waagent[1870]: 2025-01-29T10:52:15.847340Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 29 10:52:15.856803 waagent[1870]: 2025-01-29T10:52:15.856750Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 29 10:52:15.871181 waagent[1870]: 2025-01-29T10:52:15.871123Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 29 10:52:15.878459 waagent[1870]: 2025-01-29T10:52:15.878408Z INFO Daemon Daemon Running default provisioning handler Jan 29 10:52:15.889999 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 10:52:15.900957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 10:52:15.904899 waagent[1870]: 2025-01-29T10:52:15.904651Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 29 10:52:15.921899 waagent[1870]: 2025-01-29T10:52:15.920838Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 29 10:52:15.932192 waagent[1870]: 2025-01-29T10:52:15.932118Z INFO Daemon Daemon cloud-init is enabled: False Jan 29 10:52:15.937588 waagent[1870]: 2025-01-29T10:52:15.937528Z INFO Daemon Daemon Copying ovf-env.xml Jan 29 10:52:16.623093 waagent[1870]: 2025-01-29T10:52:16.621982Z INFO Daemon Daemon Successfully mounted dvd Jan 29 10:52:16.728407 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 29 10:52:16.730883 waagent[1870]: 2025-01-29T10:52:16.730564Z INFO Daemon Daemon Detect protocol endpoint Jan 29 10:52:16.735958 waagent[1870]: 2025-01-29T10:52:16.735900Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 29 10:52:16.742270 waagent[1870]: 2025-01-29T10:52:16.742222Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 29 10:52:16.749197 waagent[1870]: 2025-01-29T10:52:16.749151Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 29 10:52:16.755427 waagent[1870]: 2025-01-29T10:52:16.755379Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 29 10:52:16.762919 waagent[1870]: 2025-01-29T10:52:16.762867Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 29 10:52:16.922403 waagent[1870]: 2025-01-29T10:52:16.922311Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 29 10:52:16.930376 waagent[1870]: 2025-01-29T10:52:16.930339Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 29 10:52:17.010256 waagent[1870]: 2025-01-29T10:52:16.938164Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 29 10:52:17.598789 waagent[1870]: 2025-01-29T10:52:17.590826Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 29 10:52:17.599542 waagent[1870]: 2025-01-29T10:52:17.599473Z INFO Daemon Daemon Forcing an update of the goal state. Jan 29 10:52:17.609348 waagent[1870]: 2025-01-29T10:52:17.609300Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 29 10:52:17.651161 waagent[1870]: 2025-01-29T10:52:17.651112Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 29 10:52:17.658324 waagent[1870]: 2025-01-29T10:52:17.658274Z INFO Daemon Jan 29 10:52:17.662077 waagent[1870]: 2025-01-29T10:52:17.662028Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: a5a65837-4d2a-4112-b500-903a4570f992 eTag: 10843431322532838539 source: Fabric] Jan 29 10:52:17.677336 waagent[1870]: 2025-01-29T10:52:17.677284Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jan 29 10:52:17.686371 waagent[1870]: 2025-01-29T10:52:17.686321Z INFO Daemon Jan 29 10:52:17.691167 waagent[1870]: 2025-01-29T10:52:17.691102Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 29 10:52:17.708696 waagent[1870]: 2025-01-29T10:52:17.708656Z INFO Daemon Daemon Downloading artifacts profile blob Jan 29 10:52:17.791351 waagent[1870]: 2025-01-29T10:52:17.791275Z INFO Daemon Downloaded certificate {'thumbprint': 'DA2C0538A51216EC56823BF31BF6FE12E8C200E3', 'hasPrivateKey': False} Jan 29 10:52:17.804408 waagent[1870]: 2025-01-29T10:52:17.804357Z INFO Daemon Downloaded certificate {'thumbprint': 'C01D2B3B818DF566F98761B2D8E1100E5DAE3A37', 'hasPrivateKey': True} Jan 29 10:52:17.816232 waagent[1870]: 2025-01-29T10:52:17.816181Z INFO Daemon Fetch goal state completed Jan 29 10:52:17.829019 waagent[1870]: 2025-01-29T10:52:17.828973Z INFO Daemon Daemon Starting provisioning Jan 29 10:52:17.834804 waagent[1870]: 2025-01-29T10:52:17.834749Z INFO Daemon Daemon Handle ovf-env.xml. Jan 29 10:52:17.840110 waagent[1870]: 2025-01-29T10:52:17.840064Z INFO Daemon Daemon Set hostname [ci-4186.1.0-a-2e829ed2e0] Jan 29 10:52:18.775668 waagent[1870]: 2025-01-29T10:52:18.775583Z INFO Daemon Daemon Publish hostname [ci-4186.1.0-a-2e829ed2e0] Jan 29 10:52:18.783086 waagent[1870]: 2025-01-29T10:52:18.783024Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 29 10:52:18.790407 waagent[1870]: 2025-01-29T10:52:18.790354Z INFO Daemon Daemon Primary interface is [eth0] Jan 29 10:52:18.923984 systemd-networkd[1334]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 10:52:18.923993 systemd-networkd[1334]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 10:52:18.924021 systemd-networkd[1334]: eth0: DHCP lease lost Jan 29 10:52:18.924932 waagent[1870]: 2025-01-29T10:52:18.924823Z INFO Daemon Daemon Create user account if not exists Jan 29 10:52:18.931740 waagent[1870]: 2025-01-29T10:52:18.931668Z INFO Daemon Daemon User core already exists, skip useradd Jan 29 10:52:18.938708 systemd-networkd[1334]: eth0: DHCPv6 lease lost Jan 29 10:52:18.939402 waagent[1870]: 2025-01-29T10:52:18.939327Z INFO Daemon Daemon Configure sudoer Jan 29 10:52:18.944967 waagent[1870]: 2025-01-29T10:52:18.944898Z INFO Daemon Daemon Configure sshd Jan 29 10:52:18.965632 waagent[1870]: 2025-01-29T10:52:18.950171Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 29 10:52:18.966318 waagent[1870]: 2025-01-29T10:52:18.966206Z INFO Daemon Daemon Deploy ssh public key. Jan 29 10:52:18.977928 systemd-networkd[1334]: eth0: DHCPv4 address 10.200.20.37/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 29 10:52:19.127370 waagent[1870]: 2025-01-29T10:52:19.127247Z INFO Daemon Daemon Provisioning complete Jan 29 10:52:19.148379 waagent[1870]: 2025-01-29T10:52:19.148331Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 29 10:52:19.156181 waagent[1870]: 2025-01-29T10:52:19.156117Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 29 10:52:19.168878 waagent[1870]: 2025-01-29T10:52:19.168619Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 29 10:52:19.298636 waagent[1948]: 2025-01-29T10:52:19.298118Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 29 10:52:19.298636 waagent[1948]: 2025-01-29T10:52:19.298273Z INFO ExtHandler ExtHandler OS: flatcar 4186.1.0 Jan 29 10:52:19.298636 waagent[1948]: 2025-01-29T10:52:19.298326Z INFO ExtHandler ExtHandler Python: 3.11.10 Jan 29 10:52:21.535885 waagent[1948]: 2025-01-29T10:52:21.535064Z INFO ExtHandler ExtHandler Distro: flatcar-4186.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.10; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 29 10:52:21.535885 waagent[1948]: 2025-01-29T10:52:21.535316Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 10:52:21.535885 waagent[1948]: 2025-01-29T10:52:21.535380Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 10:52:21.544842 waagent[1948]: 2025-01-29T10:52:21.543921Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 29 10:52:21.550951 waagent[1948]: 2025-01-29T10:52:21.549975Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 29 10:52:21.550951 waagent[1948]: 2025-01-29T10:52:21.550506Z INFO ExtHandler Jan 29 10:52:21.550951 waagent[1948]: 2025-01-29T10:52:21.550579Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c390c4ff-4584-429e-b515-00c4363fab2e eTag: 10843431322532838539 source: Fabric] Jan 29 10:52:21.550951 waagent[1948]: 2025-01-29T10:52:21.550850Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 29 10:52:21.563666 waagent[1948]: 2025-01-29T10:52:21.562571Z INFO ExtHandler Jan 29 10:52:21.563666 waagent[1948]: 2025-01-29T10:52:21.562734Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 29 10:52:21.567889 waagent[1948]: 2025-01-29T10:52:21.566966Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 29 10:52:21.715206 waagent[1948]: 2025-01-29T10:52:21.715118Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DA2C0538A51216EC56823BF31BF6FE12E8C200E3', 'hasPrivateKey': False} Jan 29 10:52:21.715801 waagent[1948]: 2025-01-29T10:52:21.715761Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C01D2B3B818DF566F98761B2D8E1100E5DAE3A37', 'hasPrivateKey': True} Jan 29 10:52:21.716403 waagent[1948]: 2025-01-29T10:52:21.716361Z INFO ExtHandler Fetch goal state completed Jan 29 10:52:21.734723 waagent[1948]: 2025-01-29T10:52:21.734646Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1948 Jan 29 10:52:21.734913 waagent[1948]: 2025-01-29T10:52:21.734844Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 29 10:52:21.736593 waagent[1948]: 2025-01-29T10:52:21.736543Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4186.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 29 10:52:21.737004 waagent[1948]: 2025-01-29T10:52:21.736962Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 29 10:52:21.801558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 10:52:21.809217 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 10:52:21.839718 waagent[1948]: 2025-01-29T10:52:21.839669Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 29 10:52:21.840140 waagent[1948]: 2025-01-29T10:52:21.839900Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 29 10:52:21.846878 waagent[1948]: 2025-01-29T10:52:21.846814Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 29 10:52:21.853303 systemd[1]: Reloading requested from client PID 1976 ('systemctl') (unit waagent.service)... Jan 29 10:52:21.853316 systemd[1]: Reloading... Jan 29 10:52:21.900006 kubelet[1969]: E0129 10:52:21.899931 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 10:52:21.939884 zram_generator::config[2012]: No configuration found. Jan 29 10:52:22.039313 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:52:22.112965 systemd[1]: Reloading finished in 259 ms. Jan 29 10:52:22.140264 waagent[1948]: 2025-01-29T10:52:22.139837Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 29 10:52:22.141387 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 10:52:22.141509 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 10:52:22.148801 systemd[1]: Reloading requested from client PID 2065 ('systemctl') (unit waagent.service)... Jan 29 10:52:22.148816 systemd[1]: Reloading... Jan 29 10:52:22.216888 zram_generator::config[2098]: No configuration found. Jan 29 10:52:22.325593 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:52:22.399478 systemd[1]: Reloading finished in 250 ms. Jan 29 10:52:22.420063 waagent[1948]: 2025-01-29T10:52:22.419235Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 29 10:52:22.420063 waagent[1948]: 2025-01-29T10:52:22.419408Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 29 10:52:22.721727 waagent[1948]: 2025-01-29T10:52:22.721605Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 29 10:52:22.722617 waagent[1948]: 2025-01-29T10:52:22.722574Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 29 10:52:22.723417 waagent[1948]: 2025-01-29T10:52:22.723368Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 29 10:52:22.723533 waagent[1948]: 2025-01-29T10:52:22.723487Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 10:52:22.723716 waagent[1948]: 2025-01-29T10:52:22.723666Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 10:52:22.724111 waagent[1948]: 2025-01-29T10:52:22.724055Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jan 29 10:52:22.724503 waagent[1948]: 2025-01-29T10:52:22.724444Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 29 10:52:22.724824 waagent[1948]: 2025-01-29T10:52:22.724653Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 29 10:52:22.724824 waagent[1948]: 2025-01-29T10:52:22.724747Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 29 10:52:22.725287 waagent[1948]: 2025-01-29T10:52:22.725228Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 29 10:52:22.725438 waagent[1948]: 2025-01-29T10:52:22.725375Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 29 10:52:22.725512 waagent[1948]: 2025-01-29T10:52:22.725447Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 10:52:22.725698 waagent[1948]: 2025-01-29T10:52:22.725658Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 10:52:22.725843 waagent[1948]: 2025-01-29T10:52:22.725804Z INFO EnvHandler ExtHandler Configure routes Jan 29 10:52:22.725935 waagent[1948]: 2025-01-29T10:52:22.725900Z INFO EnvHandler ExtHandler Gateway:None Jan 29 10:52:22.725990 waagent[1948]: 2025-01-29T10:52:22.725962Z INFO EnvHandler ExtHandler Routes:None Jan 29 10:52:22.726664 waagent[1948]: 2025-01-29T10:52:22.726602Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 29 10:52:22.726664 waagent[1948]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 29 10:52:22.726664 waagent[1948]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 29 10:52:22.726664 waagent[1948]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 29 10:52:22.726664 waagent[1948]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 29 10:52:22.726664 waagent[1948]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 29 10:52:22.726664 
waagent[1948]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 29 10:52:22.726817 waagent[1948]: 2025-01-29T10:52:22.726733Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 29 10:52:22.747084 waagent[1948]: 2025-01-29T10:52:22.746362Z INFO ExtHandler ExtHandler Jan 29 10:52:22.747191 waagent[1948]: 2025-01-29T10:52:22.747155Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a8c743c9-7f74-4622-aec5-e1aa77c3c645 correlation 75c49c94-fb85-4bda-bb9a-86408bca07ca created: 2025-01-29T10:50:40.488167Z] Jan 29 10:52:22.747715 waagent[1948]: 2025-01-29T10:52:22.747664Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 29 10:52:22.748324 waagent[1948]: 2025-01-29T10:52:22.748283Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 29 10:52:22.771896 waagent[1948]: 2025-01-29T10:52:22.771476Z INFO MonitorHandler ExtHandler Network interfaces: Jan 29 10:52:22.771896 waagent[1948]: Executing ['ip', '-a', '-o', 'link']: Jan 29 10:52:22.771896 waagent[1948]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 29 10:52:22.771896 waagent[1948]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c4:e1:dd brd ff:ff:ff:ff:ff:ff Jan 29 10:52:22.771896 waagent[1948]: 3: enP17210s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c4:e1:dd brd ff:ff:ff:ff:ff:ff\ altname enP17210p0s2 Jan 29 10:52:22.771896 waagent[1948]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 29 10:52:22.771896 waagent[1948]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 29 10:52:22.771896 waagent[1948]: 2: eth0 inet 10.200.20.37/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft 
forever preferred_lft forever Jan 29 10:52:22.771896 waagent[1948]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 29 10:52:22.771896 waagent[1948]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 29 10:52:22.771896 waagent[1948]: 2: eth0 inet6 fe80::20d:3aff:fec4:e1dd/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 29 10:52:22.771896 waagent[1948]: 3: enP17210s1 inet6 fe80::20d:3aff:fec4:e1dd/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 29 10:52:22.813698 waagent[1948]: 2025-01-29T10:52:22.813579Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E4CF585A-F93F-40E6-9684-ACE16F6D9D77;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 29 10:52:22.834900 waagent[1948]: 2025-01-29T10:52:22.834249Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 29 10:52:22.834900 waagent[1948]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 10:52:22.834900 waagent[1948]: pkts bytes target prot opt in out source destination Jan 29 10:52:22.834900 waagent[1948]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 29 10:52:22.834900 waagent[1948]: pkts bytes target prot opt in out source destination Jan 29 10:52:22.834900 waagent[1948]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 10:52:22.834900 waagent[1948]: pkts bytes target prot opt in out source destination Jan 29 10:52:22.834900 waagent[1948]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 29 10:52:22.834900 waagent[1948]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 29 10:52:22.834900 waagent[1948]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 29 10:52:22.837736 waagent[1948]: 2025-01-29T10:52:22.837619Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 29 10:52:22.837736 waagent[1948]: Chain 
INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 10:52:22.837736 waagent[1948]: pkts bytes target prot opt in out source destination Jan 29 10:52:22.837736 waagent[1948]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 29 10:52:22.837736 waagent[1948]: pkts bytes target prot opt in out source destination Jan 29 10:52:22.837736 waagent[1948]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 10:52:22.837736 waagent[1948]: pkts bytes target prot opt in out source destination Jan 29 10:52:22.837736 waagent[1948]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 29 10:52:22.837736 waagent[1948]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 29 10:52:22.837736 waagent[1948]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 29 10:52:22.838068 waagent[1948]: 2025-01-29T10:52:22.838016Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 29 10:52:28.202535 chronyd[1697]: Selected source PHC0 Jan 29 10:52:32.246933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 10:52:32.255127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:52:32.800491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 10:52:32.804699 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 10:52:32.842394 kubelet[2194]: E0129 10:52:32.842322 2194 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 10:52:32.844560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 10:52:32.844762 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 10:52:42.546940 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jan 29 10:52:42.997077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 29 10:52:43.006052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:52:43.179000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:52:43.182524 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 10:52:43.215081 kubelet[2209]: E0129 10:52:43.215013 2209 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 10:52:43.217582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 10:52:43.217812 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 10:52:47.240747 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
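The kubelet crash loop here is expected during bootstrap: /var/lib/kubelet/config.yaml does not exist until the node is joined, so each start exits with status 1 and systemd reschedules the unit. The spacing of the "Scheduled restart job" entries suggests a roughly 10-second restart delay (presumably Restart=/RestartSec= in the unit, an assumption — the unit file is not shown in the log). A small sketch confirming the cadence from the two timestamps logged above:

```python
from datetime import datetime

# Timestamps at which systemd scheduled kubelet restarts (counters 2 and 3),
# taken verbatim from the log entries above.
scheduled = ["10:52:32.246933", "10:52:42.997077"]
t0, t1 = (datetime.strptime(s, "%H:%M:%S.%f") for s in scheduled)
gap = (t1 - t0).total_seconds()
print(f"restart gap: {gap:.1f}s")  # roughly ten seconds between attempts
```

The loop resolves itself later in the log once a session runs /home/core/install.sh and a config file is finally in place.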
Jan 29 10:52:47.242519 systemd[1]: Started sshd@0-10.200.20.37:22-10.200.16.10:58004.service - OpenSSH per-connection server daemon (10.200.16.10:58004).
Jan 29 10:52:47.857998 sshd[2217]: Accepted publickey for core from 10.200.16.10 port 58004 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:52:47.859383 sshd-session[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:52:47.863297 systemd-logind[1710]: New session 3 of user core.
Jan 29 10:52:47.872309 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 10:52:48.249413 systemd[1]: Started sshd@1-10.200.20.37:22-10.200.16.10:58006.service - OpenSSH per-connection server daemon (10.200.16.10:58006).
Jan 29 10:52:48.691768 sshd[2222]: Accepted publickey for core from 10.200.16.10 port 58006 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:52:48.693008 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:52:48.696640 systemd-logind[1710]: New session 4 of user core.
Jan 29 10:52:48.704986 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 10:52:49.029906 sshd[2224]: Connection closed by 10.200.16.10 port 58006
Jan 29 10:52:49.030402 sshd-session[2222]: pam_unix(sshd:session): session closed for user core
Jan 29 10:52:49.033789 systemd[1]: sshd@1-10.200.20.37:22-10.200.16.10:58006.service: Deactivated successfully.
Jan 29 10:52:49.035245 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 10:52:49.035833 systemd-logind[1710]: Session 4 logged out. Waiting for processes to exit.
Jan 29 10:52:49.036767 systemd-logind[1710]: Removed session 4.
Jan 29 10:52:49.119402 systemd[1]: Started sshd@2-10.200.20.37:22-10.200.16.10:58008.service - OpenSSH per-connection server daemon (10.200.16.10:58008).
Jan 29 10:52:49.545801 sshd[2229]: Accepted publickey for core from 10.200.16.10 port 58008 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:52:49.547131 sshd-session[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:52:49.550814 systemd-logind[1710]: New session 5 of user core.
Jan 29 10:52:49.558014 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 10:52:49.869188 sshd[2231]: Connection closed by 10.200.16.10 port 58008
Jan 29 10:52:49.869649 sshd-session[2229]: pam_unix(sshd:session): session closed for user core
Jan 29 10:52:49.873099 systemd[1]: sshd@2-10.200.20.37:22-10.200.16.10:58008.service: Deactivated successfully.
Jan 29 10:52:49.874738 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 10:52:49.875468 systemd-logind[1710]: Session 5 logged out. Waiting for processes to exit.
Jan 29 10:52:49.876390 systemd-logind[1710]: Removed session 5.
Jan 29 10:52:49.948334 systemd[1]: Started sshd@3-10.200.20.37:22-10.200.16.10:58024.service - OpenSSH per-connection server daemon (10.200.16.10:58024).
Jan 29 10:52:50.020641 update_engine[1714]: I20250129 10:52:50.020561 1714 update_attempter.cc:509] Updating boot flags...
Jan 29 10:52:50.071926 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2253)
Jan 29 10:52:50.183956 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2252)
Jan 29 10:52:50.401246 sshd[2236]: Accepted publickey for core from 10.200.16.10 port 58024 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:52:50.402518 sshd-session[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:52:50.406579 systemd-logind[1710]: New session 6 of user core.
Jan 29 10:52:50.412998 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 10:52:50.740455 sshd[2352]: Connection closed by 10.200.16.10 port 58024
Jan 29 10:52:50.739554 sshd-session[2236]: pam_unix(sshd:session): session closed for user core
Jan 29 10:52:50.743385 systemd[1]: sshd@3-10.200.20.37:22-10.200.16.10:58024.service: Deactivated successfully.
Jan 29 10:52:50.745007 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 10:52:50.746300 systemd-logind[1710]: Session 6 logged out. Waiting for processes to exit.
Jan 29 10:52:50.747121 systemd-logind[1710]: Removed session 6.
Jan 29 10:52:50.815969 systemd[1]: Started sshd@4-10.200.20.37:22-10.200.16.10:58032.service - OpenSSH per-connection server daemon (10.200.16.10:58032).
Jan 29 10:52:51.241110 sshd[2357]: Accepted publickey for core from 10.200.16.10 port 58032 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:52:51.242413 sshd-session[2357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:52:51.246279 systemd-logind[1710]: New session 7 of user core.
Jan 29 10:52:51.255046 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 10:52:51.596679 sudo[2360]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 10:52:51.596982 sudo[2360]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 10:52:51.627392 sudo[2360]: pam_unix(sudo:session): session closed for user root
Jan 29 10:52:51.706194 sshd[2359]: Connection closed by 10.200.16.10 port 58032
Jan 29 10:52:51.706947 sshd-session[2357]: pam_unix(sshd:session): session closed for user core
Jan 29 10:52:51.710818 systemd[1]: sshd@4-10.200.20.37:22-10.200.16.10:58032.service: Deactivated successfully.
Jan 29 10:52:51.712594 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 10:52:51.713376 systemd-logind[1710]: Session 7 logged out. Waiting for processes to exit.
Jan 29 10:52:51.714397 systemd-logind[1710]: Removed session 7.
Jan 29 10:52:51.786627 systemd[1]: Started sshd@5-10.200.20.37:22-10.200.16.10:58036.service - OpenSSH per-connection server daemon (10.200.16.10:58036).
Jan 29 10:52:52.215847 sshd[2365]: Accepted publickey for core from 10.200.16.10 port 58036 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:52:52.217220 sshd-session[2365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:52:52.220954 systemd-logind[1710]: New session 8 of user core.
Jan 29 10:52:52.229025 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 10:52:52.458589 sudo[2369]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 10:52:52.458845 sudo[2369]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 10:52:52.461797 sudo[2369]: pam_unix(sudo:session): session closed for user root
Jan 29 10:52:52.466224 sudo[2368]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 10:52:52.466470 sudo[2368]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 10:52:52.484217 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 10:52:52.506266 augenrules[2391]: No rules
Jan 29 10:52:52.507373 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 10:52:52.508921 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 10:52:52.509973 sudo[2368]: pam_unix(sudo:session): session closed for user root
Jan 29 10:52:52.580151 sshd[2367]: Connection closed by 10.200.16.10 port 58036
Jan 29 10:52:52.580693 sshd-session[2365]: pam_unix(sshd:session): session closed for user core
Jan 29 10:52:52.584209 systemd[1]: sshd@5-10.200.20.37:22-10.200.16.10:58036.service: Deactivated successfully.
Jan 29 10:52:52.585717 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 10:52:52.586427 systemd-logind[1710]: Session 8 logged out. Waiting for processes to exit.
Jan 29 10:52:52.587454 systemd-logind[1710]: Removed session 8.
Jan 29 10:52:52.673198 systemd[1]: Started sshd@6-10.200.20.37:22-10.200.16.10:58042.service - OpenSSH per-connection server daemon (10.200.16.10:58042).
Jan 29 10:52:53.100403 sshd[2399]: Accepted publickey for core from 10.200.16.10 port 58042 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:52:53.101696 sshd-session[2399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:52:53.105373 systemd-logind[1710]: New session 9 of user core.
Jan 29 10:52:53.112068 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 10:52:53.246805 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 29 10:52:53.254114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:52:53.345029 sudo[2405]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 10:52:53.345306 sudo[2405]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 10:52:53.738902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:52:53.750200 (kubelet)[2415]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 10:52:53.785665 kubelet[2415]: E0129 10:52:53.785609 2415 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 10:52:53.788165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 10:52:53.788417 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 10:52:54.857111 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 10:52:54.857264 (dockerd)[2434]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 10:52:55.546748 dockerd[2434]: time="2025-01-29T10:52:55.546193932Z" level=info msg="Starting up"
Jan 29 10:52:55.918612 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1364801496-merged.mount: Deactivated successfully.
Jan 29 10:52:56.027864 dockerd[2434]: time="2025-01-29T10:52:56.027809312Z" level=info msg="Loading containers: start."
Jan 29 10:52:56.240881 kernel: Initializing XFRM netlink socket
Jan 29 10:52:56.304804 systemd-networkd[1334]: docker0: Link UP
Jan 29 10:52:56.347652 dockerd[2434]: time="2025-01-29T10:52:56.347078220Z" level=info msg="Loading containers: done."
Jan 29 10:52:56.358490 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3543999797-merged.mount: Deactivated successfully.
Jan 29 10:52:56.366145 dockerd[2434]: time="2025-01-29T10:52:56.366103965Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 10:52:56.366428 dockerd[2434]: time="2025-01-29T10:52:56.366409605Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 29 10:52:56.366602 dockerd[2434]: time="2025-01-29T10:52:56.366587084Z" level=info msg="Daemon has completed initialization"
Jan 29 10:52:56.414257 dockerd[2434]: time="2025-01-29T10:52:56.414202967Z" level=info msg="API listen on /run/docker.sock"
Jan 29 10:52:56.414407 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 10:52:57.259027 containerd[1726]: time="2025-01-29T10:52:57.258977220Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 29 10:52:58.169892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054637289.mount: Deactivated successfully.
Jan 29 10:52:59.664061 containerd[1726]: time="2025-01-29T10:52:59.664001480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:52:59.666252 containerd[1726]: time="2025-01-29T10:52:59.666003239Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618070"
Jan 29 10:52:59.669889 containerd[1726]: time="2025-01-29T10:52:59.669825836Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:52:59.675104 containerd[1726]: time="2025-01-29T10:52:59.675027911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:52:59.676360 containerd[1726]: time="2025-01-29T10:52:59.676181430Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 2.41716037s"
Jan 29 10:52:59.676360 containerd[1726]: time="2025-01-29T10:52:59.676225550Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\""
Jan 29 10:52:59.677278 containerd[1726]: time="2025-01-29T10:52:59.677220590Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 29 10:53:02.505653 containerd[1726]: time="2025-01-29T10:53:02.505578816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:02.509257 containerd[1726]: time="2025-01-29T10:53:02.509208255Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469467"
Jan 29 10:53:02.552013 containerd[1726]: time="2025-01-29T10:53:02.551949640Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:02.559469 containerd[1726]: time="2025-01-29T10:53:02.559374958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:02.560559 containerd[1726]: time="2025-01-29T10:53:02.560431918Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 2.883118088s"
Jan 29 10:53:02.560559 containerd[1726]: time="2025-01-29T10:53:02.560465598Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\""
Jan 29 10:53:02.561572 containerd[1726]: time="2025-01-29T10:53:02.561405317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 29 10:53:03.996823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 29 10:53:04.003078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:53:04.090783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:53:04.105177 (kubelet)[2687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 10:53:04.137209 kubelet[2687]: E0129 10:53:04.137167 2687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 10:53:04.139627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 10:53:04.139772 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 10:53:14.247070 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 29 10:53:14.254244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:53:14.358596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:53:14.362830 (kubelet)[2706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 10:53:14.395010 kubelet[2706]: E0129 10:53:14.394924 2706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 10:53:14.397023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 10:53:14.397171 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 10:53:17.879950 containerd[1726]: time="2025-01-29T10:53:17.879048481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:17.882500 containerd[1726]: time="2025-01-29T10:53:17.882419160Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024217"
Jan 29 10:53:17.885903 containerd[1726]: time="2025-01-29T10:53:17.885834998Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:17.892163 containerd[1726]: time="2025-01-29T10:53:17.892101474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:17.893181 containerd[1726]: time="2025-01-29T10:53:17.893053154Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 15.331616677s"
Jan 29 10:53:17.893181 containerd[1726]: time="2025-01-29T10:53:17.893086754Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\""
Jan 29 10:53:17.893688 containerd[1726]: time="2025-01-29T10:53:17.893659874Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 29 10:53:19.062988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2036739421.mount: Deactivated successfully.
Jan 29 10:53:19.404575 containerd[1726]: time="2025-01-29T10:53:19.404458886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:19.406589 containerd[1726]: time="2025-01-29T10:53:19.406545925Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772117"
Jan 29 10:53:19.411160 containerd[1726]: time="2025-01-29T10:53:19.411121283Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:19.415888 containerd[1726]: time="2025-01-29T10:53:19.415790160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:19.416691 containerd[1726]: time="2025-01-29T10:53:19.416565400Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.522871766s"
Jan 29 10:53:19.416691 containerd[1726]: time="2025-01-29T10:53:19.416597880Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\""
Jan 29 10:53:19.417343 containerd[1726]: time="2025-01-29T10:53:19.417294959Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 10:53:20.961000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1001248880.mount: Deactivated successfully.
Jan 29 10:53:24.496925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 29 10:53:24.505127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:53:24.606436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:53:24.618113 (kubelet)[2749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 10:53:24.655231 kubelet[2749]: E0129 10:53:24.655172 2749 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 10:53:24.657125 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 10:53:24.657244 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 10:53:31.849696 containerd[1726]: time="2025-01-29T10:53:31.849646186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:31.911790 containerd[1726]: time="2025-01-29T10:53:31.911736658Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Jan 29 10:53:31.959170 containerd[1726]: time="2025-01-29T10:53:31.959121621Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:32.006880 containerd[1726]: time="2025-01-29T10:53:32.005662265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:32.006880 containerd[1726]: time="2025-01-29T10:53:32.006747944Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 12.589300425s"
Jan 29 10:53:32.006880 containerd[1726]: time="2025-01-29T10:53:32.006775224Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 29 10:53:32.007997 containerd[1726]: time="2025-01-29T10:53:32.007976823Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 29 10:53:33.308483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792411434.mount: Deactivated successfully.
Jan 29 10:53:33.507773 containerd[1726]: time="2025-01-29T10:53:33.507718738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:33.551059 containerd[1726]: time="2025-01-29T10:53:33.550986865Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jan 29 10:53:33.554975 containerd[1726]: time="2025-01-29T10:53:33.554925582Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:33.601103 containerd[1726]: time="2025-01-29T10:53:33.600945666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:33.602022 containerd[1726]: time="2025-01-29T10:53:33.601891025Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.593798002s"
Jan 29 10:53:33.602022 containerd[1726]: time="2025-01-29T10:53:33.601921945Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 29 10:53:33.602651 containerd[1726]: time="2025-01-29T10:53:33.602449185Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 29 10:53:34.746840 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 29 10:53:34.757330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:53:37.037195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:53:37.041310 (kubelet)[2792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 10:53:37.074528 kubelet[2792]: E0129 10:53:37.074469 2792 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 10:53:37.076918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 10:53:37.077170 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 10:53:37.793420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1089361983.mount: Deactivated successfully.
Jan 29 10:53:39.456417 containerd[1726]: time="2025-01-29T10:53:39.456363756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:39.461751 containerd[1726]: time="2025-01-29T10:53:39.461506512Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425"
Jan 29 10:53:39.464833 containerd[1726]: time="2025-01-29T10:53:39.464787710Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:39.470038 containerd[1726]: time="2025-01-29T10:53:39.469990746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:53:39.471964 containerd[1726]: time="2025-01-29T10:53:39.471160025Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 5.8686816s"
Jan 29 10:53:39.471964 containerd[1726]: time="2025-01-29T10:53:39.471193505Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jan 29 10:53:44.230429 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:53:44.236051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:53:44.259198 systemd[1]: Reloading requested from client PID 2878 ('systemctl') (unit session-9.scope)...
Jan 29 10:53:44.259221 systemd[1]: Reloading...
Jan 29 10:53:44.349900 zram_generator::config[2930]: No configuration found.
Jan 29 10:53:44.436315 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 10:53:44.511825 systemd[1]: Reloading finished in 252 ms.
Jan 29 10:53:45.014364 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 10:53:45.014454 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 10:53:45.015009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:53:45.021234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 10:53:45.308380 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 10:53:45.312318 (kubelet)[2982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 10:53:45.351529 kubelet[2982]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:53:45.353245 kubelet[2982]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 10:53:45.353245 kubelet[2982]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:53:45.353245 kubelet[2982]: I0129 10:53:45.351956 2982 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 10:53:46.452013 kubelet[2982]: I0129 10:53:46.451972 2982 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 10:53:46.452013 kubelet[2982]: I0129 10:53:46.452003 2982 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 10:53:46.452378 kubelet[2982]: I0129 10:53:46.452353 2982 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 10:53:46.473615 kubelet[2982]: E0129 10:53:46.473560 2982 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:46.474704 kubelet[2982]: I0129 
10:53:46.474678 2982 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 10:53:46.480464 kubelet[2982]: E0129 10:53:46.480438 2982 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 10:53:46.480615 kubelet[2982]: I0129 10:53:46.480604 2982 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 10:53:46.484276 kubelet[2982]: I0129 10:53:46.484261 2982 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 10:53:46.485159 kubelet[2982]: I0129 10:53:46.485144 2982 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 10:53:46.485389 kubelet[2982]: I0129 10:53:46.485363 2982 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 10:53:46.485604 kubelet[2982]: I0129 10:53:46.485441 2982 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4186.1.0-a-2e829ed2e0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 10:53:46.485735 kubelet[2982]: I0129 10:53:46.485723 2982 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 10:53:46.485794 kubelet[2982]: I0129 10:53:46.485785 2982 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 10:53:46.485983 kubelet[2982]: I0129 10:53:46.485971 2982 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:53:46.487508 kubelet[2982]: I0129 10:53:46.487494 2982 
kubelet.go:408] "Attempting to sync node with API server" Jan 29 10:53:46.487991 kubelet[2982]: I0129 10:53:46.487980 2982 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 10:53:46.488080 kubelet[2982]: I0129 10:53:46.488070 2982 kubelet.go:314] "Adding apiserver pod source" Jan 29 10:53:46.488135 kubelet[2982]: I0129 10:53:46.488126 2982 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 10:53:46.492795 kubelet[2982]: W0129 10:53:46.492750 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-2e829ed2e0&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:46.492886 kubelet[2982]: E0129 10:53:46.492803 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-2e829ed2e0&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:46.493424 kubelet[2982]: W0129 10:53:46.493147 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:46.493424 kubelet[2982]: E0129 10:53:46.493195 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:46.493517 kubelet[2982]: I0129 10:53:46.493493 2982 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 10:53:46.495038 kubelet[2982]: I0129 10:53:46.495013 2982 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 10:53:46.495917 kubelet[2982]: W0129 10:53:46.495895 2982 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 10:53:46.496448 kubelet[2982]: I0129 10:53:46.496427 2982 server.go:1269] "Started kubelet" Jan 29 10:53:46.497319 kubelet[2982]: I0129 10:53:46.497092 2982 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 10:53:46.498191 kubelet[2982]: I0129 10:53:46.497895 2982 server.go:460] "Adding debug handlers to kubelet server" Jan 29 10:53:46.499695 kubelet[2982]: I0129 10:53:46.499372 2982 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 10:53:46.503848 kubelet[2982]: E0129 10:53:46.501109 2982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-2e829ed2e0.181f246bc65d54be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-2e829ed2e0,UID:ci-4186.1.0-a-2e829ed2e0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-2e829ed2e0,},FirstTimestamp:2025-01-29 10:53:46.496406718 +0000 UTC m=+1.180998446,LastTimestamp:2025-01-29 10:53:46.496406718 +0000 UTC m=+1.180998446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-2e829ed2e0,}" Jan 29 10:53:46.503848 kubelet[2982]: I0129 
10:53:46.503029 2982 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 10:53:46.505874 kubelet[2982]: I0129 10:53:46.505828 2982 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 10:53:46.506186 kubelet[2982]: I0129 10:53:46.506152 2982 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 10:53:46.507406 kubelet[2982]: I0129 10:53:46.507000 2982 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 10:53:46.507406 kubelet[2982]: E0129 10:53:46.507272 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:46.508379 kubelet[2982]: I0129 10:53:46.508354 2982 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 10:53:46.508435 kubelet[2982]: I0129 10:53:46.508415 2982 reconciler.go:26] "Reconciler: start to sync state" Jan 29 10:53:46.508915 kubelet[2982]: W0129 10:53:46.508717 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:46.508915 kubelet[2982]: E0129 10:53:46.508770 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:46.508915 kubelet[2982]: E0129 10:53:46.508825 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-2e829ed2e0?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="200ms" Jan 29 10:53:46.509743 kubelet[2982]: I0129 10:53:46.509508 2982 factory.go:221] Registration of the systemd container factory successfully Jan 29 10:53:46.510239 kubelet[2982]: I0129 10:53:46.509909 2982 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 10:53:46.512260 kubelet[2982]: I0129 10:53:46.512229 2982 factory.go:221] Registration of the containerd container factory successfully Jan 29 10:53:46.535386 kubelet[2982]: E0129 10:53:46.535341 2982 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 10:53:46.608011 kubelet[2982]: E0129 10:53:46.607975 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:46.640158 kubelet[2982]: I0129 10:53:46.640133 2982 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 10:53:46.640158 kubelet[2982]: I0129 10:53:46.640151 2982 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 10:53:46.640288 kubelet[2982]: I0129 10:53:46.640169 2982 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:53:46.710094 kubelet[2982]: E0129 10:53:46.708636 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:46.710094 kubelet[2982]: E0129 10:53:46.710048 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-2e829ed2e0?timeout=10s\": dial tcp 10.200.20.37:6443: connect: 
connection refused" interval="400ms" Jan 29 10:53:46.809358 kubelet[2982]: E0129 10:53:46.809331 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:46.909889 kubelet[2982]: E0129 10:53:46.909844 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:47.010548 kubelet[2982]: E0129 10:53:47.010519 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:47.111154 kubelet[2982]: E0129 10:53:47.110950 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:47.111243 kubelet[2982]: E0129 10:53:47.111223 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-2e829ed2e0?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="800ms" Jan 29 10:53:47.211684 kubelet[2982]: E0129 10:53:47.211655 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:47.312175 kubelet[2982]: E0129 10:53:47.312097 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:47.323637 kubelet[2982]: W0129 10:53:47.323583 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:47.323689 kubelet[2982]: E0129 10:53:47.323650 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:47.413209 kubelet[2982]: E0129 10:53:47.413156 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:47.513381 kubelet[2982]: E0129 10:53:47.513350 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:47.567993 kubelet[2982]: W0129 10:53:47.567818 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-2e829ed2e0&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:47.567993 kubelet[2982]: E0129 10:53:47.567904 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-2e829ed2e0&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:47.614359 kubelet[2982]: E0129 10:53:47.614332 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:47.714654 kubelet[2982]: E0129 10:53:47.714623 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:47.814969 kubelet[2982]: E0129 10:53:47.814947 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:47.912704 kubelet[2982]: E0129 10:53:47.912579 2982 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-2e829ed2e0?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="1.6s" Jan 29 10:53:47.916020 kubelet[2982]: E0129 10:53:47.915983 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:47.925470 kubelet[2982]: W0129 10:53:47.925409 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:47.925470 kubelet[2982]: E0129 10:53:47.925443 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:48.016370 kubelet[2982]: E0129 10:53:48.016335 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:48.116645 kubelet[2982]: E0129 10:53:48.116618 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:48.217151 kubelet[2982]: E0129 10:53:48.217079 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:48.317486 kubelet[2982]: E0129 10:53:48.317440 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:48.418042 kubelet[2982]: E0129 10:53:48.418007 2982 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:48.518272 kubelet[2982]: E0129 10:53:48.518234 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:48.531926 kubelet[2982]: I0129 10:53:48.531885 2982 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 10:53:48.533008 kubelet[2982]: I0129 10:53:48.532988 2982 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 10:53:48.533247 kubelet[2982]: I0129 10:53:48.533190 2982 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 10:53:48.533247 kubelet[2982]: I0129 10:53:48.533213 2982 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 10:53:48.533449 kubelet[2982]: E0129 10:53:48.533389 2982 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 10:53:48.534447 kubelet[2982]: W0129 10:53:48.534375 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:48.534447 kubelet[2982]: E0129 10:53:48.534424 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:48.558544 kubelet[2982]: E0129 10:53:48.558509 2982 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control 
plane: cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:48.558978 kubelet[2982]: I0129 10:53:48.558873 2982 policy_none.go:49] "None policy: Start" Jan 29 10:53:48.559602 kubelet[2982]: I0129 10:53:48.559576 2982 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 10:53:48.559602 kubelet[2982]: I0129 10:53:48.559605 2982 state_mem.go:35] "Initializing new in-memory state store" Jan 29 10:53:48.612934 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 10:53:48.620320 kubelet[2982]: E0129 10:53:48.619282 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:48.623276 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 10:53:48.626191 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 10:53:48.630491 kubelet[2982]: I0129 10:53:48.630459 2982 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 10:53:48.630663 kubelet[2982]: I0129 10:53:48.630644 2982 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 10:53:48.630696 kubelet[2982]: I0129 10:53:48.630663 2982 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 10:53:48.631104 kubelet[2982]: I0129 10:53:48.631077 2982 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 10:53:48.633257 kubelet[2982]: E0129 10:53:48.633229 2982 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:48.641899 systemd[1]: Created slice kubepods-burstable-pod5c13c9c1d38867adf8284196326c3e7a.slice - libcontainer container kubepods-burstable-pod5c13c9c1d38867adf8284196326c3e7a.slice. Jan 29 10:53:48.669390 systemd[1]: Created slice kubepods-burstable-pod05fdaf09478fbff2c5087468ec0165ac.slice - libcontainer container kubepods-burstable-pod05fdaf09478fbff2c5087468ec0165ac.slice. Jan 29 10:53:48.673572 systemd[1]: Created slice kubepods-burstable-podbec0b05c71a40223d8fd4e839e90166d.slice - libcontainer container kubepods-burstable-podbec0b05c71a40223d8fd4e839e90166d.slice. 
Jan 29 10:53:48.719296 kubelet[2982]: I0129 10:53:48.719262 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c13c9c1d38867adf8284196326c3e7a-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-2e829ed2e0\" (UID: \"5c13c9c1d38867adf8284196326c3e7a\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:48.719296 kubelet[2982]: I0129 10:53:48.719298 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05fdaf09478fbff2c5087468ec0165ac-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-2e829ed2e0\" (UID: \"05fdaf09478fbff2c5087468ec0165ac\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:48.719580 kubelet[2982]: I0129 10:53:48.719316 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bec0b05c71a40223d8fd4e839e90166d-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-2e829ed2e0\" (UID: \"bec0b05c71a40223d8fd4e839e90166d\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:48.719580 kubelet[2982]: I0129 10:53:48.719330 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c13c9c1d38867adf8284196326c3e7a-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-2e829ed2e0\" (UID: \"5c13c9c1d38867adf8284196326c3e7a\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:48.719580 kubelet[2982]: I0129 10:53:48.719344 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c13c9c1d38867adf8284196326c3e7a-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4186.1.0-a-2e829ed2e0\" (UID: \"5c13c9c1d38867adf8284196326c3e7a\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:48.719580 kubelet[2982]: I0129 10:53:48.719359 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05fdaf09478fbff2c5087468ec0165ac-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-2e829ed2e0\" (UID: \"05fdaf09478fbff2c5087468ec0165ac\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:48.719580 kubelet[2982]: I0129 10:53:48.719386 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/05fdaf09478fbff2c5087468ec0165ac-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-2e829ed2e0\" (UID: \"05fdaf09478fbff2c5087468ec0165ac\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:48.719715 kubelet[2982]: I0129 10:53:48.719403 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05fdaf09478fbff2c5087468ec0165ac-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-2e829ed2e0\" (UID: \"05fdaf09478fbff2c5087468ec0165ac\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:48.719715 kubelet[2982]: I0129 10:53:48.719429 2982 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05fdaf09478fbff2c5087468ec0165ac-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-2e829ed2e0\" (UID: \"05fdaf09478fbff2c5087468ec0165ac\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:48.733197 kubelet[2982]: I0129 10:53:48.732885 2982 kubelet_node_status.go:72] 
"Attempting to register node" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:48.733297 kubelet[2982]: E0129 10:53:48.733219 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:48.935076 kubelet[2982]: I0129 10:53:48.934987 2982 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:48.935759 kubelet[2982]: E0129 10:53:48.935414 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:48.967622 containerd[1726]: time="2025-01-29T10:53:48.967578918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-2e829ed2e0,Uid:5c13c9c1d38867adf8284196326c3e7a,Namespace:kube-system,Attempt:0,}" Jan 29 10:53:48.972474 containerd[1726]: time="2025-01-29T10:53:48.972440401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-2e829ed2e0,Uid:05fdaf09478fbff2c5087468ec0165ac,Namespace:kube-system,Attempt:0,}" Jan 29 10:53:48.976219 containerd[1726]: time="2025-01-29T10:53:48.976188803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-2e829ed2e0,Uid:bec0b05c71a40223d8fd4e839e90166d,Namespace:kube-system,Attempt:0,}" Jan 29 10:53:49.212032 kubelet[2982]: W0129 10:53:49.211914 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-2e829ed2e0&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:49.212032 kubelet[2982]: E0129 10:53:49.211959 2982 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-2e829ed2e0&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:49.663177 kubelet[2982]: I0129 10:53:49.337164 2982 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:49.663177 kubelet[2982]: E0129 10:53:49.337422 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:49.663177 kubelet[2982]: E0129 10:53:49.513051 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-2e829ed2e0?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="3.2s" Jan 29 10:53:49.663177 kubelet[2982]: W0129 10:53:49.629075 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:49.663177 kubelet[2982]: E0129 10:53:49.629112 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:50.010415 kubelet[2982]: W0129 10:53:50.010350 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:50.010415 kubelet[2982]: E0129 10:53:50.010393 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:50.139588 kubelet[2982]: I0129 10:53:50.139531 2982 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:50.139882 kubelet[2982]: E0129 10:53:50.139840 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:50.249750 kubelet[2982]: W0129 10:53:50.249713 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:50.249930 kubelet[2982]: E0129 10:53:50.249759 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:51.741739 kubelet[2982]: I0129 10:53:51.741477 2982 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:51.742130 kubelet[2982]: E0129 10:53:51.741776 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:52.137115 kubelet[2982]: W0129 10:53:52.137041 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:52.137115 kubelet[2982]: E0129 10:53:52.137084 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:52.579889 kubelet[2982]: E0129 10:53:52.578502 2982 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:52.714344 kubelet[2982]: E0129 10:53:52.714297 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-2e829ed2e0?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="6.4s" Jan 29 10:53:54.773129 kubelet[2982]: W0129 10:53:54.773029 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:54.773129 kubelet[2982]: E0129 10:53:54.773096 2982 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:54.944370 kubelet[2982]: I0129 10:53:54.944337 2982 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:54.944707 kubelet[2982]: E0129 10:53:54.944679 2982 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.20.37:6443/api/v1/nodes\": dial tcp 10.200.20.37:6443: connect: connection refused" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:53:55.540156 kubelet[2982]: W0129 10:53:55.540097 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-2e829ed2e0&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:55.540295 kubelet[2982]: E0129 10:53:55.540162 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186.1.0-a-2e829ed2e0&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:55.744631 kubelet[2982]: W0129 10:53:55.744535 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:55.744631 kubelet[2982]: E0129 10:53:55.744602 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://10.200.20.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:55.963122 kubelet[2982]: W0129 10:53:55.962969 2982 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.37:6443: connect: connection refused Jan 29 10:53:55.963122 kubelet[2982]: E0129 10:53:55.963035 2982 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.37:6443: connect: connection refused" logger="UnhandledError" Jan 29 10:53:56.346971 kubelet[2982]: E0129 10:53:56.346830 2982 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.37:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186.1.0-a-2e829ed2e0.181f246bc65d54be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186.1.0-a-2e829ed2e0,UID:ci-4186.1.0-a-2e829ed2e0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186.1.0-a-2e829ed2e0,},FirstTimestamp:2025-01-29 10:53:46.496406718 +0000 UTC m=+1.180998446,LastTimestamp:2025-01-29 10:53:46.496406718 +0000 UTC m=+1.180998446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186.1.0-a-2e829ed2e0,}" Jan 29 10:53:58.633690 kubelet[2982]: E0129 10:53:58.633595 2982 eviction_manager.go:285] 
"Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:53:59.114900 kubelet[2982]: E0129 10:53:59.114816 2982 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186.1.0-a-2e829ed2e0?timeout=10s\": dial tcp 10.200.20.37:6443: connect: connection refused" interval="7s" Jan 29 10:53:59.560506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784168170.mount: Deactivated successfully. Jan 29 10:53:59.609575 containerd[1726]: time="2025-01-29T10:53:59.609519499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:53:59.622117 containerd[1726]: time="2025-01-29T10:53:59.621946331Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 29 10:53:59.626881 containerd[1726]: time="2025-01-29T10:53:59.626507568Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:53:59.631880 containerd[1726]: time="2025-01-29T10:53:59.631400565Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:53:59.641148 containerd[1726]: time="2025-01-29T10:53:59.640865199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 10:53:59.645881 containerd[1726]: time="2025-01-29T10:53:59.645158036Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:53:59.651578 containerd[1726]: time="2025-01-29T10:53:59.651534232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 10:53:59.652522 containerd[1726]: time="2025-01-29T10:53:59.652493592Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 10.684835514s" Jan 29 10:53:59.654546 containerd[1726]: time="2025-01-29T10:53:59.654461590Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 10:53:59.658566 containerd[1726]: time="2025-01-29T10:53:59.658526428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 10.686027147s" Jan 29 10:53:59.689489 containerd[1726]: time="2025-01-29T10:53:59.689440888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 10.713202165s" Jan 29 10:54:00.439931 containerd[1726]: time="2025-01-29T10:54:00.439137410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:54:00.439931 containerd[1726]: time="2025-01-29T10:54:00.439194210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:54:00.439931 containerd[1726]: time="2025-01-29T10:54:00.439210570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:00.439931 containerd[1726]: time="2025-01-29T10:54:00.439280090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:00.442321 containerd[1726]: time="2025-01-29T10:54:00.442074248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:54:00.442321 containerd[1726]: time="2025-01-29T10:54:00.442123848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:54:00.442439 containerd[1726]: time="2025-01-29T10:54:00.442136128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:00.442439 containerd[1726]: time="2025-01-29T10:54:00.442244168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:00.448847 containerd[1726]: time="2025-01-29T10:54:00.448731684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:54:00.449125 containerd[1726]: time="2025-01-29T10:54:00.448956564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:54:00.449471 containerd[1726]: time="2025-01-29T10:54:00.449408923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:00.449820 containerd[1726]: time="2025-01-29T10:54:00.449770763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:00.467027 systemd[1]: Started cri-containerd-5ce14c4c1ccd1b71eba67fea0db102f1b26211f47e362fc04f9a4bbb5dbbce46.scope - libcontainer container 5ce14c4c1ccd1b71eba67fea0db102f1b26211f47e362fc04f9a4bbb5dbbce46. Jan 29 10:54:00.468006 systemd[1]: Started cri-containerd-d9d97be30dfef3f932ee29a6b330d69ee221c11160e9f3bb4548565815fd616b.scope - libcontainer container d9d97be30dfef3f932ee29a6b330d69ee221c11160e9f3bb4548565815fd616b. Jan 29 10:54:00.472047 systemd[1]: Started cri-containerd-db7952e5e1bb156fa79e051f3aa684d10107f3f3d8283d38db533d328621ebe9.scope - libcontainer container db7952e5e1bb156fa79e051f3aa684d10107f3f3d8283d38db533d328621ebe9. 
Jan 29 10:54:00.515126 containerd[1726]: time="2025-01-29T10:54:00.515083922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186.1.0-a-2e829ed2e0,Uid:5c13c9c1d38867adf8284196326c3e7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9d97be30dfef3f932ee29a6b330d69ee221c11160e9f3bb4548565815fd616b\"" Jan 29 10:54:00.522167 containerd[1726]: time="2025-01-29T10:54:00.522133517Z" level=info msg="CreateContainer within sandbox \"d9d97be30dfef3f932ee29a6b330d69ee221c11160e9f3bb4548565815fd616b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 10:54:00.529825 containerd[1726]: time="2025-01-29T10:54:00.529794912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186.1.0-a-2e829ed2e0,Uid:05fdaf09478fbff2c5087468ec0165ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ce14c4c1ccd1b71eba67fea0db102f1b26211f47e362fc04f9a4bbb5dbbce46\"" Jan 29 10:54:00.532701 containerd[1726]: time="2025-01-29T10:54:00.532568950Z" level=info msg="CreateContainer within sandbox \"5ce14c4c1ccd1b71eba67fea0db102f1b26211f47e362fc04f9a4bbb5dbbce46\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 10:54:00.535557 containerd[1726]: time="2025-01-29T10:54:00.535525788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186.1.0-a-2e829ed2e0,Uid:bec0b05c71a40223d8fd4e839e90166d,Namespace:kube-system,Attempt:0,} returns sandbox id \"db7952e5e1bb156fa79e051f3aa684d10107f3f3d8283d38db533d328621ebe9\"" Jan 29 10:54:00.539589 containerd[1726]: time="2025-01-29T10:54:00.539403586Z" level=info msg="CreateContainer within sandbox \"db7952e5e1bb156fa79e051f3aa684d10107f3f3d8283d38db533d328621ebe9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 10:54:00.563936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320534119.mount: Deactivated successfully. 
Jan 29 10:54:00.584878 containerd[1726]: time="2025-01-29T10:54:00.584826037Z" level=info msg="CreateContainer within sandbox \"d9d97be30dfef3f932ee29a6b330d69ee221c11160e9f3bb4548565815fd616b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a406786cd766c401f20544913e87e4ed838d531def1da3732a39d6d6580ddeab\"" Jan 29 10:54:00.585499 containerd[1726]: time="2025-01-29T10:54:00.585470517Z" level=info msg="StartContainer for \"a406786cd766c401f20544913e87e4ed838d531def1da3732a39d6d6580ddeab\"" Jan 29 10:54:00.599687 containerd[1726]: time="2025-01-29T10:54:00.599648428Z" level=info msg="CreateContainer within sandbox \"5ce14c4c1ccd1b71eba67fea0db102f1b26211f47e362fc04f9a4bbb5dbbce46\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"788e25a5cb06da9c879df5a0ea7bc2c7646ab5affa330b829cf78c246d32bd5f\"" Jan 29 10:54:00.601041 containerd[1726]: time="2025-01-29T10:54:00.600903307Z" level=info msg="StartContainer for \"788e25a5cb06da9c879df5a0ea7bc2c7646ab5affa330b829cf78c246d32bd5f\"" Jan 29 10:54:00.610004 systemd[1]: Started cri-containerd-a406786cd766c401f20544913e87e4ed838d531def1da3732a39d6d6580ddeab.scope - libcontainer container a406786cd766c401f20544913e87e4ed838d531def1da3732a39d6d6580ddeab. 
Jan 29 10:54:00.610589 containerd[1726]: time="2025-01-29T10:54:00.610327341Z" level=info msg="CreateContainer within sandbox \"db7952e5e1bb156fa79e051f3aa684d10107f3f3d8283d38db533d328621ebe9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1622faca8b7f2b47897e25ab8517caa1c2f2f509b914c5ed5f511297cc4ea88d\"" Jan 29 10:54:00.611444 containerd[1726]: time="2025-01-29T10:54:00.611410100Z" level=info msg="StartContainer for \"1622faca8b7f2b47897e25ab8517caa1c2f2f509b914c5ed5f511297cc4ea88d\"" Jan 29 10:54:00.640307 systemd[1]: Started cri-containerd-788e25a5cb06da9c879df5a0ea7bc2c7646ab5affa330b829cf78c246d32bd5f.scope - libcontainer container 788e25a5cb06da9c879df5a0ea7bc2c7646ab5affa330b829cf78c246d32bd5f. Jan 29 10:54:00.655195 systemd[1]: Started cri-containerd-1622faca8b7f2b47897e25ab8517caa1c2f2f509b914c5ed5f511297cc4ea88d.scope - libcontainer container 1622faca8b7f2b47897e25ab8517caa1c2f2f509b914c5ed5f511297cc4ea88d. Jan 29 10:54:00.675061 containerd[1726]: time="2025-01-29T10:54:00.674835820Z" level=info msg="StartContainer for \"a406786cd766c401f20544913e87e4ed838d531def1da3732a39d6d6580ddeab\" returns successfully" Jan 29 10:54:00.707164 containerd[1726]: time="2025-01-29T10:54:00.707039159Z" level=info msg="StartContainer for \"788e25a5cb06da9c879df5a0ea7bc2c7646ab5affa330b829cf78c246d32bd5f\" returns successfully" Jan 29 10:54:00.734570 containerd[1726]: time="2025-01-29T10:54:00.734456342Z" level=info msg="StartContainer for \"1622faca8b7f2b47897e25ab8517caa1c2f2f509b914c5ed5f511297cc4ea88d\" returns successfully" Jan 29 10:54:01.348291 kubelet[2982]: I0129 10:54:01.348024 2982 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:02.884865 kubelet[2982]: I0129 10:54:02.882699 2982 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:02.885422 kubelet[2982]: E0129 10:54:02.885244 2982 kubelet_node_status.go:535] "Error updating 
node status, will retry" err="error getting node \"ci-4186.1.0-a-2e829ed2e0\": node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:54:03.018303 kubelet[2982]: E0129 10:54:03.018255 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:54:03.118737 kubelet[2982]: E0129 10:54:03.118677 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:54:03.219191 kubelet[2982]: E0129 10:54:03.219077 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:54:03.319711 kubelet[2982]: E0129 10:54:03.319670 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:54:03.419839 kubelet[2982]: E0129 10:54:03.419797 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:54:03.519926 kubelet[2982]: E0129 10:54:03.519890 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:54:03.620843 kubelet[2982]: E0129 10:54:03.620797 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:54:03.721234 kubelet[2982]: E0129 10:54:03.721192 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:54:03.822000 kubelet[2982]: E0129 10:54:03.821883 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:54:03.922463 kubelet[2982]: E0129 10:54:03.922411 2982 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not 
found" Jan 29 10:54:04.300337 kubelet[2982]: W0129 10:54:04.300291 2982 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 10:54:04.502685 kubelet[2982]: I0129 10:54:04.502650 2982 apiserver.go:52] "Watching apiserver" Jan 29 10:54:04.509401 kubelet[2982]: I0129 10:54:04.509354 2982 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 10:54:05.258728 systemd[1]: Reloading requested from client PID 3262 ('systemctl') (unit session-9.scope)... Jan 29 10:54:05.258742 systemd[1]: Reloading... Jan 29 10:54:05.346334 zram_generator::config[3299]: No configuration found. Jan 29 10:54:05.449789 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 10:54:05.538604 systemd[1]: Reloading finished in 279 ms. Jan 29 10:54:05.578949 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:54:05.596992 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 10:54:05.597211 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:54:05.597268 systemd[1]: kubelet.service: Consumed 1.507s CPU time, 115.0M memory peak, 0B memory swap peak. Jan 29 10:54:05.602173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 10:54:05.903993 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 10:54:05.915299 (kubelet)[3366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 10:54:05.960152 kubelet[3366]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:54:05.960152 kubelet[3366]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 10:54:05.960152 kubelet[3366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 10:54:05.960152 kubelet[3366]: I0129 10:54:05.959050 3366 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 10:54:05.966333 kubelet[3366]: I0129 10:54:05.966186 3366 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 10:54:05.966333 kubelet[3366]: I0129 10:54:05.966214 3366 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 10:54:05.966475 kubelet[3366]: I0129 10:54:05.966451 3366 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 10:54:05.967886 kubelet[3366]: I0129 10:54:05.967842 3366 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 10:54:05.970014 kubelet[3366]: I0129 10:54:05.969871 3366 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 10:54:05.974175 kubelet[3366]: E0129 10:54:05.974144 3366 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 10:54:05.974402 kubelet[3366]: I0129 10:54:05.974388 3366 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Jan 29 10:54:05.977737 kubelet[3366]: I0129 10:54:05.977650 3366 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 10:54:05.977941 kubelet[3366]: I0129 10:54:05.977884 3366 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 10:54:05.978544 kubelet[3366]: I0129 10:54:05.978084 3366 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 10:54:05.978544 kubelet[3366]: I0129 10:54:05.978110 3366 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186.1.0-a-2e829ed2e0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":100000000
00,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 10:54:05.978544 kubelet[3366]: I0129 10:54:05.978271 3366 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 10:54:05.978544 kubelet[3366]: I0129 10:54:05.978280 3366 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 10:54:05.978718 kubelet[3366]: I0129 10:54:05.978311 3366 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:54:05.978718 kubelet[3366]: I0129 10:54:05.978417 3366 kubelet.go:408] "Attempting to sync node with API server" Jan 29 10:54:05.978718 kubelet[3366]: I0129 10:54:05.978429 3366 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 10:54:05.978718 kubelet[3366]: I0129 10:54:05.978450 3366 kubelet.go:314] "Adding apiserver pod source" Jan 29 10:54:05.978718 kubelet[3366]: I0129 10:54:05.978491 3366 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 10:54:05.982792 kubelet[3366]: I0129 10:54:05.982722 3366 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 10:54:05.983316 kubelet[3366]: I0129 10:54:05.983288 3366 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 10:54:05.984057 kubelet[3366]: I0129 10:54:05.984034 3366 server.go:1269] "Started kubelet" Jan 29 10:54:05.995911 kubelet[3366]: I0129 10:54:05.995089 3366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 10:54:06.003898 kubelet[3366]: I0129 10:54:06.001625 3366 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 10:54:06.003898 kubelet[3366]: I0129 10:54:06.002541 3366 server.go:460] "Adding debug handlers to kubelet server" Jan 29 10:54:06.003898 
kubelet[3366]: I0129 10:54:06.003381 3366 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 10:54:06.003898 kubelet[3366]: I0129 10:54:06.003587 3366 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 10:54:06.003898 kubelet[3366]: I0129 10:54:06.003867 3366 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 10:54:06.004495 kubelet[3366]: I0129 10:54:06.004355 3366 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 10:54:06.004537 kubelet[3366]: E0129 10:54:06.004521 3366 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186.1.0-a-2e829ed2e0\" not found" Jan 29 10:54:06.006918 kubelet[3366]: I0129 10:54:06.006371 3366 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 10:54:06.006918 kubelet[3366]: I0129 10:54:06.006498 3366 reconciler.go:26] "Reconciler: start to sync state" Jan 29 10:54:06.008921 kubelet[3366]: I0129 10:54:06.007933 3366 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 10:54:06.008921 kubelet[3366]: I0129 10:54:06.008672 3366 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 10:54:06.008921 kubelet[3366]: I0129 10:54:06.008690 3366 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 10:54:06.008921 kubelet[3366]: I0129 10:54:06.008706 3366 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 10:54:06.008921 kubelet[3366]: E0129 10:54:06.008739 3366 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 10:54:06.024887 kubelet[3366]: I0129 10:54:06.024536 3366 factory.go:221] Registration of the containerd container factory successfully Jan 29 10:54:06.024887 kubelet[3366]: I0129 10:54:06.024554 3366 factory.go:221] Registration of the systemd container factory successfully Jan 29 10:54:06.024887 kubelet[3366]: I0129 10:54:06.024622 3366 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 10:54:06.072109 kubelet[3366]: I0129 10:54:06.072089 3366 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 10:54:06.072261 kubelet[3366]: I0129 10:54:06.072249 3366 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 10:54:06.072341 kubelet[3366]: I0129 10:54:06.072333 3366 state_mem.go:36] "Initialized new in-memory state store" Jan 29 10:54:06.072618 kubelet[3366]: I0129 10:54:06.072523 3366 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 10:54:06.072618 kubelet[3366]: I0129 10:54:06.072538 3366 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 10:54:06.072618 kubelet[3366]: I0129 10:54:06.072556 3366 policy_none.go:49] "None policy: Start" Jan 29 10:54:06.073641 kubelet[3366]: I0129 10:54:06.073372 3366 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 10:54:06.073641 kubelet[3366]: I0129 10:54:06.073397 3366 state_mem.go:35] "Initializing new in-memory state store" Jan 
29 10:54:06.073641 kubelet[3366]: I0129 10:54:06.073558 3366 state_mem.go:75] "Updated machine memory state" Jan 29 10:54:06.077481 kubelet[3366]: I0129 10:54:06.077456 3366 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 10:54:06.077625 kubelet[3366]: I0129 10:54:06.077605 3366 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 10:54:06.077667 kubelet[3366]: I0129 10:54:06.077622 3366 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 10:54:06.078232 kubelet[3366]: I0129 10:54:06.078169 3366 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 10:54:06.117201 kubelet[3366]: W0129 10:54:06.117041 3366 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 10:54:06.121051 kubelet[3366]: W0129 10:54:06.120911 3366 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 10:54:06.121522 kubelet[3366]: W0129 10:54:06.121508 3366 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 10:54:06.121693 kubelet[3366]: E0129 10:54:06.121642 3366 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186.1.0-a-2e829ed2e0\" already exists" pod="kube-system/kube-scheduler-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:06.180912 kubelet[3366]: I0129 10:54:06.180270 3366 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:06.199948 kubelet[3366]: I0129 10:54:06.199878 3366 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:06.200225 kubelet[3366]: I0129 
10:54:06.200146 3366 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:06.207970 kubelet[3366]: I0129 10:54:06.207927 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/05fdaf09478fbff2c5087468ec0165ac-flexvolume-dir\") pod \"kube-controller-manager-ci-4186.1.0-a-2e829ed2e0\" (UID: \"05fdaf09478fbff2c5087468ec0165ac\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:06.207970 kubelet[3366]: I0129 10:54:06.207968 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/05fdaf09478fbff2c5087468ec0165ac-k8s-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-2e829ed2e0\" (UID: \"05fdaf09478fbff2c5087468ec0165ac\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:06.208117 kubelet[3366]: I0129 10:54:06.207989 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/05fdaf09478fbff2c5087468ec0165ac-kubeconfig\") pod \"kube-controller-manager-ci-4186.1.0-a-2e829ed2e0\" (UID: \"05fdaf09478fbff2c5087468ec0165ac\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:06.208117 kubelet[3366]: I0129 10:54:06.208005 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c13c9c1d38867adf8284196326c3e7a-k8s-certs\") pod \"kube-apiserver-ci-4186.1.0-a-2e829ed2e0\" (UID: \"5c13c9c1d38867adf8284196326c3e7a\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:06.208117 kubelet[3366]: I0129 10:54:06.208022 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/5c13c9c1d38867adf8284196326c3e7a-ca-certs\") pod \"kube-apiserver-ci-4186.1.0-a-2e829ed2e0\" (UID: \"5c13c9c1d38867adf8284196326c3e7a\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:06.208117 kubelet[3366]: I0129 10:54:06.208037 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c13c9c1d38867adf8284196326c3e7a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186.1.0-a-2e829ed2e0\" (UID: \"5c13c9c1d38867adf8284196326c3e7a\") " pod="kube-system/kube-apiserver-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:06.208117 kubelet[3366]: I0129 10:54:06.208052 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05fdaf09478fbff2c5087468ec0165ac-ca-certs\") pod \"kube-controller-manager-ci-4186.1.0-a-2e829ed2e0\" (UID: \"05fdaf09478fbff2c5087468ec0165ac\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:06.208236 kubelet[3366]: I0129 10:54:06.208067 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05fdaf09478fbff2c5087468ec0165ac-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186.1.0-a-2e829ed2e0\" (UID: \"05fdaf09478fbff2c5087468ec0165ac\") " pod="kube-system/kube-controller-manager-ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:06.208236 kubelet[3366]: I0129 10:54:06.208084 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bec0b05c71a40223d8fd4e839e90166d-kubeconfig\") pod \"kube-scheduler-ci-4186.1.0-a-2e829ed2e0\" (UID: \"bec0b05c71a40223d8fd4e839e90166d\") " pod="kube-system/kube-scheduler-ci-4186.1.0-a-2e829ed2e0" Jan 29 
10:54:06.980801 kubelet[3366]: I0129 10:54:06.980744 3366 apiserver.go:52] "Watching apiserver" Jan 29 10:54:07.007541 kubelet[3366]: I0129 10:54:07.007499 3366 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 10:54:07.133334 kubelet[3366]: I0129 10:54:07.133062 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186.1.0-a-2e829ed2e0" podStartSLOduration=3.133044354 podStartE2EDuration="3.133044354s" podCreationTimestamp="2025-01-29 10:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:54:07.091512101 +0000 UTC m=+1.173328682" watchObservedRunningTime="2025-01-29 10:54:07.133044354 +0000 UTC m=+1.214860975" Jan 29 10:54:07.161261 kubelet[3366]: I0129 10:54:07.161117 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186.1.0-a-2e829ed2e0" podStartSLOduration=1.161100576 podStartE2EDuration="1.161100576s" podCreationTimestamp="2025-01-29 10:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:54:07.133580754 +0000 UTC m=+1.215397455" watchObservedRunningTime="2025-01-29 10:54:07.161100576 +0000 UTC m=+1.242917197" Jan 29 10:54:07.187557 kubelet[3366]: I0129 10:54:07.186275 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186.1.0-a-2e829ed2e0" podStartSLOduration=1.186257319 podStartE2EDuration="1.186257319s" podCreationTimestamp="2025-01-29 10:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:54:07.161897335 +0000 UTC m=+1.243713956" watchObservedRunningTime="2025-01-29 10:54:07.186257319 +0000 UTC m=+1.268073940" Jan 29 
10:54:10.865807 kubelet[3366]: I0129 10:54:10.865765 3366 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 10:54:10.866190 containerd[1726]: time="2025-01-29T10:54:10.866147978Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 10:54:10.866377 kubelet[3366]: I0129 10:54:10.866310 3366 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 10:54:10.883247 sudo[2405]: pam_unix(sudo:session): session closed for user root Jan 29 10:54:10.972313 sshd[2401]: Connection closed by 10.200.16.10 port 58042 Jan 29 10:54:10.972901 sshd-session[2399]: pam_unix(sshd:session): session closed for user core Jan 29 10:54:10.977000 systemd-logind[1710]: Session 9 logged out. Waiting for processes to exit. Jan 29 10:54:10.977823 systemd[1]: sshd@6-10.200.20.37:22-10.200.16.10:58042.service: Deactivated successfully. Jan 29 10:54:10.981303 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 10:54:10.981559 systemd[1]: session-9.scope: Consumed 5.866s CPU time, 154.3M memory peak, 0B memory swap peak. Jan 29 10:54:10.982279 systemd-logind[1710]: Removed session 9. Jan 29 10:54:11.703324 systemd[1]: Created slice kubepods-besteffort-pod7e85f6c9_258e_48f5_82ee_01b50002bac7.slice - libcontainer container kubepods-besteffort-pod7e85f6c9_258e_48f5_82ee_01b50002bac7.slice. 
Jan 29 10:54:11.743242 kubelet[3366]: I0129 10:54:11.743204 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e85f6c9-258e-48f5-82ee-01b50002bac7-xtables-lock\") pod \"kube-proxy-p9jpl\" (UID: \"7e85f6c9-258e-48f5-82ee-01b50002bac7\") " pod="kube-system/kube-proxy-p9jpl" Jan 29 10:54:11.743387 kubelet[3366]: I0129 10:54:11.743288 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e85f6c9-258e-48f5-82ee-01b50002bac7-lib-modules\") pod \"kube-proxy-p9jpl\" (UID: \"7e85f6c9-258e-48f5-82ee-01b50002bac7\") " pod="kube-system/kube-proxy-p9jpl" Jan 29 10:54:11.743387 kubelet[3366]: I0129 10:54:11.743310 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn5nt\" (UniqueName: \"kubernetes.io/projected/7e85f6c9-258e-48f5-82ee-01b50002bac7-kube-api-access-rn5nt\") pod \"kube-proxy-p9jpl\" (UID: \"7e85f6c9-258e-48f5-82ee-01b50002bac7\") " pod="kube-system/kube-proxy-p9jpl" Jan 29 10:54:11.743387 kubelet[3366]: I0129 10:54:11.743343 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e85f6c9-258e-48f5-82ee-01b50002bac7-kube-proxy\") pod \"kube-proxy-p9jpl\" (UID: \"7e85f6c9-258e-48f5-82ee-01b50002bac7\") " pod="kube-system/kube-proxy-p9jpl" Jan 29 10:54:11.924410 systemd[1]: Created slice kubepods-besteffort-pod2cd15321_79f5_4bee_a2b8_9bca1e54826e.slice - libcontainer container kubepods-besteffort-pod2cd15321_79f5_4bee_a2b8_9bca1e54826e.slice. 
Jan 29 10:54:11.944143 kubelet[3366]: I0129 10:54:11.944089 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbwj\" (UniqueName: \"kubernetes.io/projected/2cd15321-79f5-4bee-a2b8-9bca1e54826e-kube-api-access-qhbwj\") pod \"tigera-operator-76c4976dd7-lqt7t\" (UID: \"2cd15321-79f5-4bee-a2b8-9bca1e54826e\") " pod="tigera-operator/tigera-operator-76c4976dd7-lqt7t" Jan 29 10:54:11.944143 kubelet[3366]: I0129 10:54:11.944139 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2cd15321-79f5-4bee-a2b8-9bca1e54826e-var-lib-calico\") pod \"tigera-operator-76c4976dd7-lqt7t\" (UID: \"2cd15321-79f5-4bee-a2b8-9bca1e54826e\") " pod="tigera-operator/tigera-operator-76c4976dd7-lqt7t" Jan 29 10:54:12.015472 containerd[1726]: time="2025-01-29T10:54:12.015423954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p9jpl,Uid:7e85f6c9-258e-48f5-82ee-01b50002bac7,Namespace:kube-system,Attempt:0,}" Jan 29 10:54:12.068939 containerd[1726]: time="2025-01-29T10:54:12.068724239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:54:12.068939 containerd[1726]: time="2025-01-29T10:54:12.068779759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:54:12.068939 containerd[1726]: time="2025-01-29T10:54:12.068795119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:12.069280 containerd[1726]: time="2025-01-29T10:54:12.068911879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:12.088099 systemd[1]: Started cri-containerd-86a9e0dfae0fbed1ca73c191cc7714b31d4fd588aa1b428f1db19d11ffc8581f.scope - libcontainer container 86a9e0dfae0fbed1ca73c191cc7714b31d4fd588aa1b428f1db19d11ffc8581f. Jan 29 10:54:12.109601 containerd[1726]: time="2025-01-29T10:54:12.109435613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p9jpl,Uid:7e85f6c9-258e-48f5-82ee-01b50002bac7,Namespace:kube-system,Attempt:0,} returns sandbox id \"86a9e0dfae0fbed1ca73c191cc7714b31d4fd588aa1b428f1db19d11ffc8581f\"" Jan 29 10:54:12.112557 containerd[1726]: time="2025-01-29T10:54:12.112428691Z" level=info msg="CreateContainer within sandbox \"86a9e0dfae0fbed1ca73c191cc7714b31d4fd588aa1b428f1db19d11ffc8581f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 10:54:12.195884 containerd[1726]: time="2025-01-29T10:54:12.195813437Z" level=info msg="CreateContainer within sandbox \"86a9e0dfae0fbed1ca73c191cc7714b31d4fd588aa1b428f1db19d11ffc8581f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"41088051fcc3b2c4db22c763f8be3cc2d89263dee31ab846419c1a9a5408eb8a\"" Jan 29 10:54:12.197166 containerd[1726]: time="2025-01-29T10:54:12.197126156Z" level=info msg="StartContainer for \"41088051fcc3b2c4db22c763f8be3cc2d89263dee31ab846419c1a9a5408eb8a\"" Jan 29 10:54:12.224004 systemd[1]: Started cri-containerd-41088051fcc3b2c4db22c763f8be3cc2d89263dee31ab846419c1a9a5408eb8a.scope - libcontainer container 41088051fcc3b2c4db22c763f8be3cc2d89263dee31ab846419c1a9a5408eb8a. 
Jan 29 10:54:12.231422 containerd[1726]: time="2025-01-29T10:54:12.231175374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-lqt7t,Uid:2cd15321-79f5-4bee-a2b8-9bca1e54826e,Namespace:tigera-operator,Attempt:0,}" Jan 29 10:54:12.251981 containerd[1726]: time="2025-01-29T10:54:12.251896281Z" level=info msg="StartContainer for \"41088051fcc3b2c4db22c763f8be3cc2d89263dee31ab846419c1a9a5408eb8a\" returns successfully" Jan 29 10:54:12.279850 containerd[1726]: time="2025-01-29T10:54:12.279481023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:54:12.279850 containerd[1726]: time="2025-01-29T10:54:12.279570743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:54:12.279850 containerd[1726]: time="2025-01-29T10:54:12.279588583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:12.279850 containerd[1726]: time="2025-01-29T10:54:12.279698663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:12.293081 systemd[1]: Started cri-containerd-2b4b552cbc963974996b78b7e21a374c3d12f6d59762eb10d35d5bc91ab94a31.scope - libcontainer container 2b4b552cbc963974996b78b7e21a374c3d12f6d59762eb10d35d5bc91ab94a31. 
Jan 29 10:54:12.327593 containerd[1726]: time="2025-01-29T10:54:12.327538392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-lqt7t,Uid:2cd15321-79f5-4bee-a2b8-9bca1e54826e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2b4b552cbc963974996b78b7e21a374c3d12f6d59762eb10d35d5bc91ab94a31\"" Jan 29 10:54:12.329522 containerd[1726]: time="2025-01-29T10:54:12.329355151Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 10:54:13.074958 kubelet[3366]: I0129 10:54:13.074508 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p9jpl" podStartSLOduration=2.074491444 podStartE2EDuration="2.074491444s" podCreationTimestamp="2025-01-29 10:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:54:13.074154284 +0000 UTC m=+7.155970905" watchObservedRunningTime="2025-01-29 10:54:13.074491444 +0000 UTC m=+7.156308065" Jan 29 10:54:13.610402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820854033.mount: Deactivated successfully. 
Jan 29 10:54:13.966240 containerd[1726]: time="2025-01-29T10:54:13.966123184Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:13.972520 containerd[1726]: time="2025-01-29T10:54:13.972478062Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 29 10:54:13.975778 containerd[1726]: time="2025-01-29T10:54:13.975713941Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:13.979840 containerd[1726]: time="2025-01-29T10:54:13.979776220Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:13.981118 containerd[1726]: time="2025-01-29T10:54:13.980621979Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.651236308s" Jan 29 10:54:13.981118 containerd[1726]: time="2025-01-29T10:54:13.980654899Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 29 10:54:13.983561 containerd[1726]: time="2025-01-29T10:54:13.983531059Z" level=info msg="CreateContainer within sandbox \"2b4b552cbc963974996b78b7e21a374c3d12f6d59762eb10d35d5bc91ab94a31\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 10:54:14.010286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927477303.mount: Deactivated successfully. 
Jan 29 10:54:14.020087 containerd[1726]: time="2025-01-29T10:54:14.020028928Z" level=info msg="CreateContainer within sandbox \"2b4b552cbc963974996b78b7e21a374c3d12f6d59762eb10d35d5bc91ab94a31\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6cd22d8d5a15a239502745f51f6d878da25f06159c28d07975fed7ae94f7cc4e\"" Jan 29 10:54:14.020770 containerd[1726]: time="2025-01-29T10:54:14.020729048Z" level=info msg="StartContainer for \"6cd22d8d5a15a239502745f51f6d878da25f06159c28d07975fed7ae94f7cc4e\"" Jan 29 10:54:14.048028 systemd[1]: Started cri-containerd-6cd22d8d5a15a239502745f51f6d878da25f06159c28d07975fed7ae94f7cc4e.scope - libcontainer container 6cd22d8d5a15a239502745f51f6d878da25f06159c28d07975fed7ae94f7cc4e. Jan 29 10:54:14.075815 containerd[1726]: time="2025-01-29T10:54:14.075754592Z" level=info msg="StartContainer for \"6cd22d8d5a15a239502745f51f6d878da25f06159c28d07975fed7ae94f7cc4e\" returns successfully" Jan 29 10:54:15.080733 kubelet[3366]: I0129 10:54:15.080271 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-lqt7t" podStartSLOduration=2.42760355 podStartE2EDuration="4.080255938s" podCreationTimestamp="2025-01-29 10:54:11 +0000 UTC" firstStartedPulling="2025-01-29 10:54:12.328777711 +0000 UTC m=+6.410594332" lastFinishedPulling="2025-01-29 10:54:13.981430139 +0000 UTC m=+8.063246720" observedRunningTime="2025-01-29 10:54:15.080092338 +0000 UTC m=+9.161908959" watchObservedRunningTime="2025-01-29 10:54:15.080255938 +0000 UTC m=+9.162072559" Jan 29 10:54:18.669371 systemd[1]: Created slice kubepods-besteffort-podb0f49bb1_d1bc_40a4_a800_1e982896c834.slice - libcontainer container kubepods-besteffort-podb0f49bb1_d1bc_40a4_a800_1e982896c834.slice. 
Jan 29 10:54:18.687133 kubelet[3366]: I0129 10:54:18.686690 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nzw6\" (UniqueName: \"kubernetes.io/projected/b0f49bb1-d1bc-40a4-a800-1e982896c834-kube-api-access-8nzw6\") pod \"calico-typha-6df65b8b5-jkktp\" (UID: \"b0f49bb1-d1bc-40a4-a800-1e982896c834\") " pod="calico-system/calico-typha-6df65b8b5-jkktp" Jan 29 10:54:18.687133 kubelet[3366]: I0129 10:54:18.686731 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b0f49bb1-d1bc-40a4-a800-1e982896c834-typha-certs\") pod \"calico-typha-6df65b8b5-jkktp\" (UID: \"b0f49bb1-d1bc-40a4-a800-1e982896c834\") " pod="calico-system/calico-typha-6df65b8b5-jkktp" Jan 29 10:54:18.687133 kubelet[3366]: I0129 10:54:18.686751 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0f49bb1-d1bc-40a4-a800-1e982896c834-tigera-ca-bundle\") pod \"calico-typha-6df65b8b5-jkktp\" (UID: \"b0f49bb1-d1bc-40a4-a800-1e982896c834\") " pod="calico-system/calico-typha-6df65b8b5-jkktp" Jan 29 10:54:18.839694 systemd[1]: Created slice kubepods-besteffort-pode7f32801_8aa8_4610_8a5d_35d0b182610b.slice - libcontainer container kubepods-besteffort-pode7f32801_8aa8_4610_8a5d_35d0b182610b.slice. 
Jan 29 10:54:18.888239 kubelet[3366]: I0129 10:54:18.888197 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7f32801-8aa8-4610-8a5d-35d0b182610b-xtables-lock\") pod \"calico-node-grvgj\" (UID: \"e7f32801-8aa8-4610-8a5d-35d0b182610b\") " pod="calico-system/calico-node-grvgj" Jan 29 10:54:18.888239 kubelet[3366]: I0129 10:54:18.888239 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e7f32801-8aa8-4610-8a5d-35d0b182610b-cni-bin-dir\") pod \"calico-node-grvgj\" (UID: \"e7f32801-8aa8-4610-8a5d-35d0b182610b\") " pod="calico-system/calico-node-grvgj" Jan 29 10:54:18.888401 kubelet[3366]: I0129 10:54:18.888258 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e7f32801-8aa8-4610-8a5d-35d0b182610b-flexvol-driver-host\") pod \"calico-node-grvgj\" (UID: \"e7f32801-8aa8-4610-8a5d-35d0b182610b\") " pod="calico-system/calico-node-grvgj" Jan 29 10:54:18.888401 kubelet[3366]: I0129 10:54:18.888279 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84x94\" (UniqueName: \"kubernetes.io/projected/e7f32801-8aa8-4610-8a5d-35d0b182610b-kube-api-access-84x94\") pod \"calico-node-grvgj\" (UID: \"e7f32801-8aa8-4610-8a5d-35d0b182610b\") " pod="calico-system/calico-node-grvgj" Jan 29 10:54:18.888401 kubelet[3366]: I0129 10:54:18.888296 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7f32801-8aa8-4610-8a5d-35d0b182610b-tigera-ca-bundle\") pod \"calico-node-grvgj\" (UID: \"e7f32801-8aa8-4610-8a5d-35d0b182610b\") " pod="calico-system/calico-node-grvgj" Jan 29 10:54:18.888401 kubelet[3366]: I0129 
10:54:18.888326 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e7f32801-8aa8-4610-8a5d-35d0b182610b-var-run-calico\") pod \"calico-node-grvgj\" (UID: \"e7f32801-8aa8-4610-8a5d-35d0b182610b\") " pod="calico-system/calico-node-grvgj" Jan 29 10:54:18.888401 kubelet[3366]: I0129 10:54:18.888343 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e7f32801-8aa8-4610-8a5d-35d0b182610b-cni-net-dir\") pod \"calico-node-grvgj\" (UID: \"e7f32801-8aa8-4610-8a5d-35d0b182610b\") " pod="calico-system/calico-node-grvgj" Jan 29 10:54:18.888515 kubelet[3366]: I0129 10:54:18.888361 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7f32801-8aa8-4610-8a5d-35d0b182610b-lib-modules\") pod \"calico-node-grvgj\" (UID: \"e7f32801-8aa8-4610-8a5d-35d0b182610b\") " pod="calico-system/calico-node-grvgj" Jan 29 10:54:18.888515 kubelet[3366]: I0129 10:54:18.888376 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e7f32801-8aa8-4610-8a5d-35d0b182610b-node-certs\") pod \"calico-node-grvgj\" (UID: \"e7f32801-8aa8-4610-8a5d-35d0b182610b\") " pod="calico-system/calico-node-grvgj" Jan 29 10:54:18.888515 kubelet[3366]: I0129 10:54:18.888391 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e7f32801-8aa8-4610-8a5d-35d0b182610b-cni-log-dir\") pod \"calico-node-grvgj\" (UID: \"e7f32801-8aa8-4610-8a5d-35d0b182610b\") " pod="calico-system/calico-node-grvgj" Jan 29 10:54:18.888515 kubelet[3366]: I0129 10:54:18.888406 3366 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7f32801-8aa8-4610-8a5d-35d0b182610b-var-lib-calico\") pod \"calico-node-grvgj\" (UID: \"e7f32801-8aa8-4610-8a5d-35d0b182610b\") " pod="calico-system/calico-node-grvgj" Jan 29 10:54:18.888515 kubelet[3366]: I0129 10:54:18.888420 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e7f32801-8aa8-4610-8a5d-35d0b182610b-policysync\") pod \"calico-node-grvgj\" (UID: \"e7f32801-8aa8-4610-8a5d-35d0b182610b\") " pod="calico-system/calico-node-grvgj" Jan 29 10:54:18.974652 containerd[1726]: time="2025-01-29T10:54:18.974090303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6df65b8b5-jkktp,Uid:b0f49bb1-d1bc-40a4-a800-1e982896c834,Namespace:calico-system,Attempt:0,}" Jan 29 10:54:18.999845 kubelet[3366]: E0129 10:54:18.999517 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:18.999845 kubelet[3366]: W0129 10:54:18.999538 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:18.999845 kubelet[3366]: E0129 10:54:18.999558 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.025519 kubelet[3366]: E0129 10:54:19.024989 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.025519 kubelet[3366]: W0129 10:54:19.025010 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.025519 kubelet[3366]: E0129 10:54:19.025030 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.035585 containerd[1726]: time="2025-01-29T10:54:19.035313984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:54:19.035585 containerd[1726]: time="2025-01-29T10:54:19.035369024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:54:19.035585 containerd[1726]: time="2025-01-29T10:54:19.035380344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:19.035585 containerd[1726]: time="2025-01-29T10:54:19.035483464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:19.040551 kubelet[3366]: E0129 10:54:19.039309 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct4dr" podUID="85d938fd-edbc-4618-8500-89676d3770ef" Jan 29 10:54:19.066070 systemd[1]: Started cri-containerd-0098a34369b56363d980ef6d04bd4fbdba790e59be91c70dea1a33aeadb7092f.scope - libcontainer container 0098a34369b56363d980ef6d04bd4fbdba790e59be91c70dea1a33aeadb7092f. Jan 29 10:54:19.080045 kubelet[3366]: E0129 10:54:19.080007 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.080425 kubelet[3366]: W0129 10:54:19.080216 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.080425 kubelet[3366]: E0129 10:54:19.080270 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.080721 kubelet[3366]: E0129 10:54:19.080591 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.080721 kubelet[3366]: W0129 10:54:19.080605 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.080721 kubelet[3366]: E0129 10:54:19.080617 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.081212 kubelet[3366]: E0129 10:54:19.081073 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.081212 kubelet[3366]: W0129 10:54:19.081088 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.081212 kubelet[3366]: E0129 10:54:19.081109 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.081882 kubelet[3366]: E0129 10:54:19.081675 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.081882 kubelet[3366]: W0129 10:54:19.081689 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.081882 kubelet[3366]: E0129 10:54:19.081702 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.082308 kubelet[3366]: E0129 10:54:19.082166 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.082308 kubelet[3366]: W0129 10:54:19.082213 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.082308 kubelet[3366]: E0129 10:54:19.082228 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.082680 kubelet[3366]: E0129 10:54:19.082575 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.082680 kubelet[3366]: W0129 10:54:19.082603 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.082680 kubelet[3366]: E0129 10:54:19.082615 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.083008 kubelet[3366]: E0129 10:54:19.082905 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.083008 kubelet[3366]: W0129 10:54:19.082916 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.083008 kubelet[3366]: E0129 10:54:19.082927 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.083306 kubelet[3366]: E0129 10:54:19.083232 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.083306 kubelet[3366]: W0129 10:54:19.083241 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.083306 kubelet[3366]: E0129 10:54:19.083251 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.083705 kubelet[3366]: E0129 10:54:19.083609 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.083705 kubelet[3366]: W0129 10:54:19.083621 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.083705 kubelet[3366]: E0129 10:54:19.083632 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.084060 kubelet[3366]: E0129 10:54:19.083943 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.084060 kubelet[3366]: W0129 10:54:19.083955 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.084060 kubelet[3366]: E0129 10:54:19.083977 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.084757 kubelet[3366]: E0129 10:54:19.084634 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.084757 kubelet[3366]: W0129 10:54:19.084673 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.084757 kubelet[3366]: E0129 10:54:19.084686 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.085271 kubelet[3366]: E0129 10:54:19.085257 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.085432 kubelet[3366]: W0129 10:54:19.085333 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.085432 kubelet[3366]: E0129 10:54:19.085347 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.085700 kubelet[3366]: E0129 10:54:19.085688 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.085929 kubelet[3366]: W0129 10:54:19.085759 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.085929 kubelet[3366]: E0129 10:54:19.085772 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.086241 kubelet[3366]: E0129 10:54:19.086139 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.086241 kubelet[3366]: W0129 10:54:19.086151 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.086241 kubelet[3366]: E0129 10:54:19.086161 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.086586 kubelet[3366]: E0129 10:54:19.086455 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.086586 kubelet[3366]: W0129 10:54:19.086466 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.086586 kubelet[3366]: E0129 10:54:19.086478 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.086790 kubelet[3366]: E0129 10:54:19.086778 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.086929 kubelet[3366]: W0129 10:54:19.086844 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.086929 kubelet[3366]: E0129 10:54:19.086881 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.087308 kubelet[3366]: E0129 10:54:19.087202 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.087308 kubelet[3366]: W0129 10:54:19.087214 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.087308 kubelet[3366]: E0129 10:54:19.087225 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.087651 kubelet[3366]: E0129 10:54:19.087536 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.087651 kubelet[3366]: W0129 10:54:19.087548 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.087651 kubelet[3366]: E0129 10:54:19.087557 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.087927 kubelet[3366]: E0129 10:54:19.087880 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.088104 kubelet[3366]: W0129 10:54:19.087991 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.088104 kubelet[3366]: E0129 10:54:19.088010 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.088266 kubelet[3366]: E0129 10:54:19.088240 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.088343 kubelet[3366]: W0129 10:54:19.088317 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.088404 kubelet[3366]: E0129 10:54:19.088389 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.090566 kubelet[3366]: E0129 10:54:19.090544 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.090566 kubelet[3366]: W0129 10:54:19.090560 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.090747 kubelet[3366]: E0129 10:54:19.090573 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.090747 kubelet[3366]: I0129 10:54:19.090602 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/85d938fd-edbc-4618-8500-89676d3770ef-socket-dir\") pod \"csi-node-driver-ct4dr\" (UID: \"85d938fd-edbc-4618-8500-89676d3770ef\") " pod="calico-system/csi-node-driver-ct4dr" Jan 29 10:54:19.091097 kubelet[3366]: E0129 10:54:19.091071 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.091097 kubelet[3366]: W0129 10:54:19.091091 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.091590 kubelet[3366]: E0129 10:54:19.091109 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.091590 kubelet[3366]: I0129 10:54:19.091127 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/85d938fd-edbc-4618-8500-89676d3770ef-varrun\") pod \"csi-node-driver-ct4dr\" (UID: \"85d938fd-edbc-4618-8500-89676d3770ef\") " pod="calico-system/csi-node-driver-ct4dr" Jan 29 10:54:19.091590 kubelet[3366]: E0129 10:54:19.091464 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.091590 kubelet[3366]: W0129 10:54:19.091480 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.091590 kubelet[3366]: E0129 10:54:19.091498 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.091590 kubelet[3366]: I0129 10:54:19.091514 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85d938fd-edbc-4618-8500-89676d3770ef-kubelet-dir\") pod \"csi-node-driver-ct4dr\" (UID: \"85d938fd-edbc-4618-8500-89676d3770ef\") " pod="calico-system/csi-node-driver-ct4dr" Jan 29 10:54:19.091879 kubelet[3366]: E0129 10:54:19.091810 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.091879 kubelet[3366]: W0129 10:54:19.091823 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.091879 kubelet[3366]: E0129 10:54:19.091843 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.092236 kubelet[3366]: E0129 10:54:19.092128 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.092236 kubelet[3366]: W0129 10:54:19.092142 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.092236 kubelet[3366]: E0129 10:54:19.092154 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.092514 kubelet[3366]: E0129 10:54:19.092432 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.092514 kubelet[3366]: W0129 10:54:19.092442 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.092514 kubelet[3366]: E0129 10:54:19.092453 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.093002 kubelet[3366]: E0129 10:54:19.092968 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.093002 kubelet[3366]: W0129 10:54:19.092986 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.093191 kubelet[3366]: E0129 10:54:19.093147 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.093320 kubelet[3366]: E0129 10:54:19.093309 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.093489 kubelet[3366]: W0129 10:54:19.093395 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.093489 kubelet[3366]: E0129 10:54:19.093434 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.093489 kubelet[3366]: I0129 10:54:19.093459 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwqml\" (UniqueName: \"kubernetes.io/projected/85d938fd-edbc-4618-8500-89676d3770ef-kube-api-access-wwqml\") pod \"csi-node-driver-ct4dr\" (UID: \"85d938fd-edbc-4618-8500-89676d3770ef\") " pod="calico-system/csi-node-driver-ct4dr" Jan 29 10:54:19.093732 kubelet[3366]: E0129 10:54:19.093647 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.093732 kubelet[3366]: W0129 10:54:19.093659 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.093732 kubelet[3366]: E0129 10:54:19.093689 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.094598 kubelet[3366]: E0129 10:54:19.094457 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.094598 kubelet[3366]: W0129 10:54:19.094486 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.094598 kubelet[3366]: E0129 10:54:19.094508 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.095066 kubelet[3366]: E0129 10:54:19.094942 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.095066 kubelet[3366]: W0129 10:54:19.094954 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.095066 kubelet[3366]: E0129 10:54:19.094968 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.095066 kubelet[3366]: I0129 10:54:19.094985 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/85d938fd-edbc-4618-8500-89676d3770ef-registration-dir\") pod \"csi-node-driver-ct4dr\" (UID: \"85d938fd-edbc-4618-8500-89676d3770ef\") " pod="calico-system/csi-node-driver-ct4dr" Jan 29 10:54:19.095365 kubelet[3366]: E0129 10:54:19.095293 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.095365 kubelet[3366]: W0129 10:54:19.095306 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.095365 kubelet[3366]: E0129 10:54:19.095346 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.095722 kubelet[3366]: E0129 10:54:19.095646 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.095722 kubelet[3366]: W0129 10:54:19.095659 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.095722 kubelet[3366]: E0129 10:54:19.095681 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.096188 kubelet[3366]: E0129 10:54:19.096021 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.096188 kubelet[3366]: W0129 10:54:19.096035 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.096188 kubelet[3366]: E0129 10:54:19.096046 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.096500 kubelet[3366]: E0129 10:54:19.096440 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.096500 kubelet[3366]: W0129 10:54:19.096453 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.096500 kubelet[3366]: E0129 10:54:19.096476 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.111702 containerd[1726]: time="2025-01-29T10:54:19.111651097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6df65b8b5-jkktp,Uid:b0f49bb1-d1bc-40a4-a800-1e982896c834,Namespace:calico-system,Attempt:0,} returns sandbox id \"0098a34369b56363d980ef6d04bd4fbdba790e59be91c70dea1a33aeadb7092f\"" Jan 29 10:54:19.113876 containerd[1726]: time="2025-01-29T10:54:19.113825015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 10:54:19.142478 containerd[1726]: time="2025-01-29T10:54:19.142438517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-grvgj,Uid:e7f32801-8aa8-4610-8a5d-35d0b182610b,Namespace:calico-system,Attempt:0,}" Jan 29 10:54:19.191232 containerd[1726]: time="2025-01-29T10:54:19.190923167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:54:19.191539 containerd[1726]: time="2025-01-29T10:54:19.190984767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:54:19.191539 containerd[1726]: time="2025-01-29T10:54:19.191073887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:19.191539 containerd[1726]: time="2025-01-29T10:54:19.191171447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:19.196502 kubelet[3366]: E0129 10:54:19.196372 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.196502 kubelet[3366]: W0129 10:54:19.196403 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.196502 kubelet[3366]: E0129 10:54:19.196422 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.197614 kubelet[3366]: E0129 10:54:19.196967 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.197614 kubelet[3366]: W0129 10:54:19.197204 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.197614 kubelet[3366]: E0129 10:54:19.197235 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.197967 kubelet[3366]: E0129 10:54:19.197947 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.198309 kubelet[3366]: W0129 10:54:19.198278 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.199146 kubelet[3366]: E0129 10:54:19.198928 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.199146 kubelet[3366]: E0129 10:54:19.198388 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.199628 kubelet[3366]: W0129 10:54:19.198949 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.199628 kubelet[3366]: E0129 10:54:19.199215 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.199628 kubelet[3366]: E0129 10:54:19.199405 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.199628 kubelet[3366]: W0129 10:54:19.199417 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.199628 kubelet[3366]: E0129 10:54:19.199434 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.200156 kubelet[3366]: E0129 10:54:19.199818 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.200156 kubelet[3366]: W0129 10:54:19.199835 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.200156 kubelet[3366]: E0129 10:54:19.199870 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.200986 kubelet[3366]: E0129 10:54:19.200967 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.201519 kubelet[3366]: W0129 10:54:19.201061 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.201519 kubelet[3366]: E0129 10:54:19.201090 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.201519 kubelet[3366]: E0129 10:54:19.201306 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.201519 kubelet[3366]: W0129 10:54:19.201315 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.201519 kubelet[3366]: E0129 10:54:19.201341 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.201519 kubelet[3366]: E0129 10:54:19.201453 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.201519 kubelet[3366]: W0129 10:54:19.201461 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.201519 kubelet[3366]: E0129 10:54:19.201488 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.201833 kubelet[3366]: E0129 10:54:19.201818 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.202247 kubelet[3366]: W0129 10:54:19.202002 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.202247 kubelet[3366]: E0129 10:54:19.202063 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.202390 kubelet[3366]: E0129 10:54:19.202373 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.202482 kubelet[3366]: W0129 10:54:19.202469 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.202799 kubelet[3366]: E0129 10:54:19.202658 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.203085 kubelet[3366]: E0129 10:54:19.202995 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.203085 kubelet[3366]: W0129 10:54:19.203009 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.203085 kubelet[3366]: E0129 10:54:19.203035 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.203894 kubelet[3366]: E0129 10:54:19.203583 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.203894 kubelet[3366]: W0129 10:54:19.203601 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.204068 kubelet[3366]: E0129 10:54:19.204052 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.204471 kubelet[3366]: E0129 10:54:19.204346 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.204471 kubelet[3366]: W0129 10:54:19.204359 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.204799 kubelet[3366]: E0129 10:54:19.204663 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.206015 kubelet[3366]: E0129 10:54:19.204923 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.206015 kubelet[3366]: W0129 10:54:19.204936 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.206015 kubelet[3366]: E0129 10:54:19.205732 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.206015 kubelet[3366]: E0129 10:54:19.205822 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.206015 kubelet[3366]: W0129 10:54:19.205829 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.206015 kubelet[3366]: E0129 10:54:19.205926 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.206264 kubelet[3366]: E0129 10:54:19.206235 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.206264 kubelet[3366]: W0129 10:54:19.206250 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.206416 kubelet[3366]: E0129 10:54:19.206403 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.206636 kubelet[3366]: E0129 10:54:19.206624 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.206710 kubelet[3366]: W0129 10:54:19.206699 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.206808 kubelet[3366]: E0129 10:54:19.206783 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.207794 kubelet[3366]: E0129 10:54:19.207766 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.207891 kubelet[3366]: W0129 10:54:19.207877 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.207958 kubelet[3366]: E0129 10:54:19.207946 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.208351 kubelet[3366]: E0129 10:54:19.208323 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.208351 kubelet[3366]: W0129 10:54:19.208336 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.209182 kubelet[3366]: E0129 10:54:19.208575 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.209389 kubelet[3366]: E0129 10:54:19.209300 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.209389 kubelet[3366]: W0129 10:54:19.209316 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.209514 kubelet[3366]: E0129 10:54:19.209503 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.209582 kubelet[3366]: W0129 10:54:19.209569 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.209828 kubelet[3366]: E0129 10:54:19.209816 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.210717 kubelet[3366]: W0129 10:54:19.209886 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.210717 kubelet[3366]: E0129 10:54:19.209900 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.210888 kubelet[3366]: E0129 10:54:19.210845 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:19.211307 kubelet[3366]: E0129 10:54:19.211238 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.211557 kubelet[3366]: W0129 10:54:19.211531 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.211776 kubelet[3366]: E0129 10:54:19.211712 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.212265 kubelet[3366]: E0129 10:54:19.211903 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.212265 kubelet[3366]: E0129 10:54:19.212113 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.212265 kubelet[3366]: W0129 10:54:19.212123 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.212265 kubelet[3366]: E0129 10:54:19.212133 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.213072 systemd[1]: Started cri-containerd-99f36701f52cb5c1deee9adde3d141f982d76aac76eb8b83a97409705fb45dc4.scope - libcontainer container 99f36701f52cb5c1deee9adde3d141f982d76aac76eb8b83a97409705fb45dc4. 
Jan 29 10:54:19.227190 kubelet[3366]: E0129 10:54:19.227071 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:19.227190 kubelet[3366]: W0129 10:54:19.227096 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:19.231165 kubelet[3366]: E0129 10:54:19.228966 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:19.248905 containerd[1726]: time="2025-01-29T10:54:19.248818811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-grvgj,Uid:e7f32801-8aa8-4610-8a5d-35d0b182610b,Namespace:calico-system,Attempt:0,} returns sandbox id \"99f36701f52cb5c1deee9adde3d141f982d76aac76eb8b83a97409705fb45dc4\"" Jan 29 10:54:20.392782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834063419.mount: Deactivated successfully. 
Jan 29 10:54:20.714533 containerd[1726]: time="2025-01-29T10:54:20.714408256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:20.719346 containerd[1726]: time="2025-01-29T10:54:20.719278775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 29 10:54:20.721808 containerd[1726]: time="2025-01-29T10:54:20.721749174Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:20.725803 containerd[1726]: time="2025-01-29T10:54:20.725738853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:20.726883 containerd[1726]: time="2025-01-29T10:54:20.726388573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.612529158s" Jan 29 10:54:20.726883 containerd[1726]: time="2025-01-29T10:54:20.726422413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 29 10:54:20.727669 containerd[1726]: time="2025-01-29T10:54:20.727630733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 10:54:20.741377 containerd[1726]: time="2025-01-29T10:54:20.741343409Z" level=info msg="CreateContainer within sandbox \"0098a34369b56363d980ef6d04bd4fbdba790e59be91c70dea1a33aeadb7092f\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 10:54:20.778651 containerd[1726]: time="2025-01-29T10:54:20.778601759Z" level=info msg="CreateContainer within sandbox \"0098a34369b56363d980ef6d04bd4fbdba790e59be91c70dea1a33aeadb7092f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"377f52ba8e82844719930bab2725440535908ae8d499b5f506708205fc8f9499\"" Jan 29 10:54:20.779389 containerd[1726]: time="2025-01-29T10:54:20.779336199Z" level=info msg="StartContainer for \"377f52ba8e82844719930bab2725440535908ae8d499b5f506708205fc8f9499\"" Jan 29 10:54:20.808019 systemd[1]: Started cri-containerd-377f52ba8e82844719930bab2725440535908ae8d499b5f506708205fc8f9499.scope - libcontainer container 377f52ba8e82844719930bab2725440535908ae8d499b5f506708205fc8f9499. Jan 29 10:54:20.841043 containerd[1726]: time="2025-01-29T10:54:20.840963422Z" level=info msg="StartContainer for \"377f52ba8e82844719930bab2725440535908ae8d499b5f506708205fc8f9499\" returns successfully" Jan 29 10:54:21.009041 kubelet[3366]: E0129 10:54:21.008989 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct4dr" podUID="85d938fd-edbc-4618-8500-89676d3770ef" Jan 29 10:54:21.101161 kubelet[3366]: E0129 10:54:21.101055 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.101161 kubelet[3366]: W0129 10:54:21.101080 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.101161 kubelet[3366]: E0129 10:54:21.101101 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from 
directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.101381 kubelet[3366]: E0129 10:54:21.101291 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.101381 kubelet[3366]: W0129 10:54:21.101299 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.101381 kubelet[3366]: E0129 10:54:21.101309 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.101491 kubelet[3366]: E0129 10:54:21.101475 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.101491 kubelet[3366]: W0129 10:54:21.101484 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.101541 kubelet[3366]: E0129 10:54:21.101509 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.101875 kubelet[3366]: E0129 10:54:21.101716 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.101875 kubelet[3366]: W0129 10:54:21.101762 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.101875 kubelet[3366]: E0129 10:54:21.101773 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.102264 kubelet[3366]: E0129 10:54:21.102240 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.102264 kubelet[3366]: W0129 10:54:21.102257 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.102361 kubelet[3366]: E0129 10:54:21.102280 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.103611 kubelet[3366]: E0129 10:54:21.102796 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.103611 kubelet[3366]: W0129 10:54:21.102815 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.103611 kubelet[3366]: E0129 10:54:21.102933 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.103611 kubelet[3366]: E0129 10:54:21.103276 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.103611 kubelet[3366]: W0129 10:54:21.103287 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.103611 kubelet[3366]: E0129 10:54:21.103300 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.103862 kubelet[3366]: E0129 10:54:21.103789 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.103862 kubelet[3366]: W0129 10:54:21.103808 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.103977 kubelet[3366]: E0129 10:54:21.103821 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.104441 kubelet[3366]: E0129 10:54:21.104340 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.104441 kubelet[3366]: W0129 10:54:21.104359 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.104527 kubelet[3366]: E0129 10:54:21.104371 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.104878 kubelet[3366]: E0129 10:54:21.104786 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.104878 kubelet[3366]: W0129 10:54:21.104803 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.104878 kubelet[3366]: E0129 10:54:21.104828 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.105349 kubelet[3366]: E0129 10:54:21.105328 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.105349 kubelet[3366]: W0129 10:54:21.105346 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.105638 kubelet[3366]: E0129 10:54:21.105359 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.105793 kubelet[3366]: E0129 10:54:21.105770 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.105793 kubelet[3366]: W0129 10:54:21.105786 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.106162 kubelet[3366]: E0129 10:54:21.105797 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.106581 kubelet[3366]: E0129 10:54:21.106560 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.106581 kubelet[3366]: W0129 10:54:21.106577 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.106676 kubelet[3366]: E0129 10:54:21.106590 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.107096 kubelet[3366]: E0129 10:54:21.106937 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.107183 kubelet[3366]: W0129 10:54:21.107133 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.107183 kubelet[3366]: E0129 10:54:21.107151 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.107676 kubelet[3366]: E0129 10:54:21.107654 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.107726 kubelet[3366]: W0129 10:54:21.107696 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.107726 kubelet[3366]: E0129 10:54:21.107711 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.113885 kubelet[3366]: E0129 10:54:21.113765 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.113885 kubelet[3366]: W0129 10:54:21.113783 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.113885 kubelet[3366]: E0129 10:54:21.113797 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.114073 kubelet[3366]: E0129 10:54:21.114018 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.114073 kubelet[3366]: W0129 10:54:21.114027 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.114073 kubelet[3366]: E0129 10:54:21.114043 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.114301 kubelet[3366]: E0129 10:54:21.114191 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.114301 kubelet[3366]: W0129 10:54:21.114199 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.114301 kubelet[3366]: E0129 10:54:21.114213 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.114483 kubelet[3366]: E0129 10:54:21.114469 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.114615 kubelet[3366]: W0129 10:54:21.114536 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.114615 kubelet[3366]: E0129 10:54:21.114567 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.114842 kubelet[3366]: E0129 10:54:21.114830 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.115000 kubelet[3366]: W0129 10:54:21.114935 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.115000 kubelet[3366]: E0129 10:54:21.114961 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.115454 kubelet[3366]: E0129 10:54:21.115317 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.115454 kubelet[3366]: W0129 10:54:21.115329 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.115454 kubelet[3366]: E0129 10:54:21.115349 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.115644 kubelet[3366]: E0129 10:54:21.115633 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.115696 kubelet[3366]: W0129 10:54:21.115686 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.115835 kubelet[3366]: E0129 10:54:21.115762 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.116108 kubelet[3366]: E0129 10:54:21.116020 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.116108 kubelet[3366]: W0129 10:54:21.116032 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.116108 kubelet[3366]: E0129 10:54:21.116051 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.116385 kubelet[3366]: E0129 10:54:21.116291 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.116385 kubelet[3366]: W0129 10:54:21.116303 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.116385 kubelet[3366]: E0129 10:54:21.116328 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.116702 kubelet[3366]: E0129 10:54:21.116629 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.116702 kubelet[3366]: W0129 10:54:21.116641 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.116702 kubelet[3366]: E0129 10:54:21.116660 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.117110 kubelet[3366]: E0129 10:54:21.116995 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.117110 kubelet[3366]: W0129 10:54:21.117020 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.117110 kubelet[3366]: E0129 10:54:21.117042 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.117448 kubelet[3366]: E0129 10:54:21.117389 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.117448 kubelet[3366]: W0129 10:54:21.117401 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.117448 kubelet[3366]: E0129 10:54:21.117431 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.117843 kubelet[3366]: E0129 10:54:21.117711 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.117843 kubelet[3366]: W0129 10:54:21.117725 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.117843 kubelet[3366]: E0129 10:54:21.117810 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.118111 kubelet[3366]: E0129 10:54:21.118053 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.118111 kubelet[3366]: W0129 10:54:21.118064 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.118262 kubelet[3366]: E0129 10:54:21.118200 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.118572 kubelet[3366]: E0129 10:54:21.118453 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.118572 kubelet[3366]: W0129 10:54:21.118466 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.118572 kubelet[3366]: E0129 10:54:21.118486 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.118824 kubelet[3366]: E0129 10:54:21.118743 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.118824 kubelet[3366]: W0129 10:54:21.118756 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.118824 kubelet[3366]: E0129 10:54:21.118767 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.119318 kubelet[3366]: E0129 10:54:21.119130 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.119318 kubelet[3366]: W0129 10:54:21.119143 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.119318 kubelet[3366]: E0129 10:54:21.119154 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 10:54:21.119596 kubelet[3366]: E0129 10:54:21.119582 3366 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 10:54:21.119661 kubelet[3366]: W0129 10:54:21.119650 3366 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 10:54:21.119774 kubelet[3366]: E0129 10:54:21.119761 3366 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 10:54:21.193404 kubelet[3366]: I0129 10:54:21.193346 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6df65b8b5-jkktp" podStartSLOduration=1.579491647 podStartE2EDuration="3.193329125s" podCreationTimestamp="2025-01-29 10:54:18 +0000 UTC" firstStartedPulling="2025-01-29 10:54:19.113329735 +0000 UTC m=+13.195146356" lastFinishedPulling="2025-01-29 10:54:20.727167133 +0000 UTC m=+14.808983834" observedRunningTime="2025-01-29 10:54:21.104658909 +0000 UTC m=+15.186475530" watchObservedRunningTime="2025-01-29 10:54:21.193329125 +0000 UTC m=+15.275145746" Jan 29 10:54:21.906118 containerd[1726]: time="2025-01-29T10:54:21.906067050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:21.909208 containerd[1726]: time="2025-01-29T10:54:21.909130889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 29 10:54:21.911441 containerd[1726]: time="2025-01-29T10:54:21.911383168Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:21.917008 containerd[1726]: time="2025-01-29T10:54:21.916937087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:21.917746 containerd[1726]: time="2025-01-29T10:54:21.917597367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.189835514s" Jan 29 10:54:21.917746 containerd[1726]: time="2025-01-29T10:54:21.917631967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 29 10:54:21.920681 containerd[1726]: time="2025-01-29T10:54:21.920644406Z" level=info msg="CreateContainer within sandbox \"99f36701f52cb5c1deee9adde3d141f982d76aac76eb8b83a97409705fb45dc4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 10:54:21.962777 containerd[1726]: time="2025-01-29T10:54:21.962733674Z" level=info msg="CreateContainer within sandbox \"99f36701f52cb5c1deee9adde3d141f982d76aac76eb8b83a97409705fb45dc4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4e8eb172876221f3977491e450ba826de89d0c14f6814d5812d5361da9f441dd\"" Jan 29 10:54:21.963538 containerd[1726]: time="2025-01-29T10:54:21.963449114Z" level=info msg="StartContainer for \"4e8eb172876221f3977491e450ba826de89d0c14f6814d5812d5361da9f441dd\"" Jan 29 10:54:21.996028 systemd[1]: Started cri-containerd-4e8eb172876221f3977491e450ba826de89d0c14f6814d5812d5361da9f441dd.scope - libcontainer container 4e8eb172876221f3977491e450ba826de89d0c14f6814d5812d5361da9f441dd. Jan 29 10:54:22.029453 containerd[1726]: time="2025-01-29T10:54:22.029358256Z" level=info msg="StartContainer for \"4e8eb172876221f3977491e450ba826de89d0c14f6814d5812d5361da9f441dd\" returns successfully" Jan 29 10:54:22.034182 systemd[1]: cri-containerd-4e8eb172876221f3977491e450ba826de89d0c14f6814d5812d5361da9f441dd.scope: Deactivated successfully. Jan 29 10:54:22.061948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e8eb172876221f3977491e450ba826de89d0c14f6814d5812d5361da9f441dd-rootfs.mount: Deactivated successfully. 
Jan 29 10:54:22.973971 containerd[1726]: time="2025-01-29T10:54:22.973799037Z" level=info msg="shim disconnected" id=4e8eb172876221f3977491e450ba826de89d0c14f6814d5812d5361da9f441dd namespace=k8s.io Jan 29 10:54:22.974573 containerd[1726]: time="2025-01-29T10:54:22.974400077Z" level=warning msg="cleaning up after shim disconnected" id=4e8eb172876221f3977491e450ba826de89d0c14f6814d5812d5361da9f441dd namespace=k8s.io Jan 29 10:54:22.974573 containerd[1726]: time="2025-01-29T10:54:22.974440477Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:54:23.009899 kubelet[3366]: E0129 10:54:23.009825 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct4dr" podUID="85d938fd-edbc-4618-8500-89676d3770ef" Jan 29 10:54:23.097468 containerd[1726]: time="2025-01-29T10:54:23.097235883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 10:54:25.009427 kubelet[3366]: E0129 10:54:25.009243 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct4dr" podUID="85d938fd-edbc-4618-8500-89676d3770ef" Jan 29 10:54:25.850162 containerd[1726]: time="2025-01-29T10:54:25.850118168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:25.852164 containerd[1726]: time="2025-01-29T10:54:25.852122848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 29 10:54:25.854526 containerd[1726]: time="2025-01-29T10:54:25.854486687Z" level=info msg="ImageCreate event 
name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:25.858236 containerd[1726]: time="2025-01-29T10:54:25.858180126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:25.858892 containerd[1726]: time="2025-01-29T10:54:25.858749806Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.761470403s" Jan 29 10:54:25.858892 containerd[1726]: time="2025-01-29T10:54:25.858779926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 29 10:54:25.861529 containerd[1726]: time="2025-01-29T10:54:25.861366245Z" level=info msg="CreateContainer within sandbox \"99f36701f52cb5c1deee9adde3d141f982d76aac76eb8b83a97409705fb45dc4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 10:54:25.893797 containerd[1726]: time="2025-01-29T10:54:25.893719956Z" level=info msg="CreateContainer within sandbox \"99f36701f52cb5c1deee9adde3d141f982d76aac76eb8b83a97409705fb45dc4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c05fdb8e276af3ab253da9d76b1759675f78589c2457f9399f2b12c5b113d251\"" Jan 29 10:54:25.894905 containerd[1726]: time="2025-01-29T10:54:25.894416276Z" level=info msg="StartContainer for \"c05fdb8e276af3ab253da9d76b1759675f78589c2457f9399f2b12c5b113d251\"" Jan 29 10:54:25.921013 systemd[1]: Started 
cri-containerd-c05fdb8e276af3ab253da9d76b1759675f78589c2457f9399f2b12c5b113d251.scope - libcontainer container c05fdb8e276af3ab253da9d76b1759675f78589c2457f9399f2b12c5b113d251. Jan 29 10:54:25.950886 containerd[1726]: time="2025-01-29T10:54:25.950384101Z" level=info msg="StartContainer for \"c05fdb8e276af3ab253da9d76b1759675f78589c2457f9399f2b12c5b113d251\" returns successfully" Jan 29 10:54:27.009309 kubelet[3366]: E0129 10:54:27.009246 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ct4dr" podUID="85d938fd-edbc-4618-8500-89676d3770ef" Jan 29 10:54:27.050429 systemd[1]: cri-containerd-c05fdb8e276af3ab253da9d76b1759675f78589c2457f9399f2b12c5b113d251.scope: Deactivated successfully. Jan 29 10:54:27.071647 kubelet[3366]: I0129 10:54:27.071601 3366 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 10:54:27.072585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c05fdb8e276af3ab253da9d76b1759675f78589c2457f9399f2b12c5b113d251-rootfs.mount: Deactivated successfully. Jan 29 10:54:27.118107 systemd[1]: Created slice kubepods-burstable-pod7648d1df_3533_48c0_904f_b5099718a0e0.slice - libcontainer container kubepods-burstable-pod7648d1df_3533_48c0_904f_b5099718a0e0.slice. Jan 29 10:54:27.128498 systemd[1]: Created slice kubepods-besteffort-pod24a732d5_74a3_4958_bf3c_24c35316b990.slice - libcontainer container kubepods-besteffort-pod24a732d5_74a3_4958_bf3c_24c35316b990.slice. 
Jan 29 10:54:27.407334 kubelet[3366]: I0129 10:54:27.154566 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prm7q\" (UniqueName: \"kubernetes.io/projected/8d76515f-5553-4e43-85a9-9b91a1e79d22-kube-api-access-prm7q\") pod \"calico-apiserver-785d46db66-x8gcd\" (UID: \"8d76515f-5553-4e43-85a9-9b91a1e79d22\") " pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" Jan 29 10:54:27.407334 kubelet[3366]: I0129 10:54:27.154617 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjs8m\" (UniqueName: \"kubernetes.io/projected/24a732d5-74a3-4958-bf3c-24c35316b990-kube-api-access-mjs8m\") pod \"calico-kube-controllers-fdbfb578c-sx269\" (UID: \"24a732d5-74a3-4958-bf3c-24c35316b990\") " pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" Jan 29 10:54:27.407334 kubelet[3366]: I0129 10:54:27.154638 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njc8q\" (UniqueName: \"kubernetes.io/projected/ff0d24de-055d-4e71-bae6-576826e88d95-kube-api-access-njc8q\") pod \"calico-apiserver-785d46db66-nfw9l\" (UID: \"ff0d24de-055d-4e71-bae6-576826e88d95\") " pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" Jan 29 10:54:27.407334 kubelet[3366]: I0129 10:54:27.154658 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dchfp\" (UniqueName: \"kubernetes.io/projected/2b04ba18-4078-4572-b181-dabad7c530d3-kube-api-access-dchfp\") pod \"coredns-6f6b679f8f-v9l7s\" (UID: \"2b04ba18-4078-4572-b181-dabad7c530d3\") " pod="kube-system/coredns-6f6b679f8f-v9l7s" Jan 29 10:54:27.407334 kubelet[3366]: I0129 10:54:27.154677 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/ff0d24de-055d-4e71-bae6-576826e88d95-calico-apiserver-certs\") pod \"calico-apiserver-785d46db66-nfw9l\" (UID: \"ff0d24de-055d-4e71-bae6-576826e88d95\") " pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" Jan 29 10:54:27.139634 systemd[1]: Created slice kubepods-burstable-pod2b04ba18_4078_4572_b181_dabad7c530d3.slice - libcontainer container kubepods-burstable-pod2b04ba18_4078_4572_b181_dabad7c530d3.slice. Jan 29 10:54:27.407548 kubelet[3366]: I0129 10:54:27.154692 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7648d1df-3533-48c0-904f-b5099718a0e0-config-volume\") pod \"coredns-6f6b679f8f-lmknf\" (UID: \"7648d1df-3533-48c0-904f-b5099718a0e0\") " pod="kube-system/coredns-6f6b679f8f-lmknf" Jan 29 10:54:27.407548 kubelet[3366]: I0129 10:54:27.154708 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24a732d5-74a3-4958-bf3c-24c35316b990-tigera-ca-bundle\") pod \"calico-kube-controllers-fdbfb578c-sx269\" (UID: \"24a732d5-74a3-4958-bf3c-24c35316b990\") " pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" Jan 29 10:54:27.407548 kubelet[3366]: I0129 10:54:27.154737 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5pf\" (UniqueName: \"kubernetes.io/projected/7648d1df-3533-48c0-904f-b5099718a0e0-kube-api-access-bd5pf\") pod \"coredns-6f6b679f8f-lmknf\" (UID: \"7648d1df-3533-48c0-904f-b5099718a0e0\") " pod="kube-system/coredns-6f6b679f8f-lmknf" Jan 29 10:54:27.407548 kubelet[3366]: I0129 10:54:27.154752 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b04ba18-4078-4572-b181-dabad7c530d3-config-volume\") pod \"coredns-6f6b679f8f-v9l7s\" (UID: 
\"2b04ba18-4078-4572-b181-dabad7c530d3\") " pod="kube-system/coredns-6f6b679f8f-v9l7s" Jan 29 10:54:27.407548 kubelet[3366]: I0129 10:54:27.154776 3366 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8d76515f-5553-4e43-85a9-9b91a1e79d22-calico-apiserver-certs\") pod \"calico-apiserver-785d46db66-x8gcd\" (UID: \"8d76515f-5553-4e43-85a9-9b91a1e79d22\") " pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" Jan 29 10:54:27.148642 systemd[1]: Created slice kubepods-besteffort-pod8d76515f_5553_4e43_85a9_9b91a1e79d22.slice - libcontainer container kubepods-besteffort-pod8d76515f_5553_4e43_85a9_9b91a1e79d22.slice. Jan 29 10:54:27.157434 systemd[1]: Created slice kubepods-besteffort-podff0d24de_055d_4e71_bae6_576826e88d95.slice - libcontainer container kubepods-besteffort-podff0d24de_055d_4e71_bae6_576826e88d95.slice. Jan 29 10:54:27.709042 containerd[1726]: time="2025-01-29T10:54:27.708922791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lmknf,Uid:7648d1df-3533-48c0-904f-b5099718a0e0,Namespace:kube-system,Attempt:0,}" Jan 29 10:54:27.724154 containerd[1726]: time="2025-01-29T10:54:27.723776662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-nfw9l,Uid:ff0d24de-055d-4e71-bae6-576826e88d95,Namespace:calico-apiserver,Attempt:0,}" Jan 29 10:54:27.724636 containerd[1726]: time="2025-01-29T10:54:27.724478741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:0,}" Jan 29 10:54:27.724636 containerd[1726]: time="2025-01-29T10:54:27.724506621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdbfb578c-sx269,Uid:24a732d5-74a3-4958-bf3c-24c35316b990,Namespace:calico-system,Attempt:0,}" Jan 29 10:54:27.727374 containerd[1726]: 
time="2025-01-29T10:54:27.727315300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-x8gcd,Uid:8d76515f-5553-4e43-85a9-9b91a1e79d22,Namespace:calico-apiserver,Attempt:0,}" Jan 29 10:54:28.259024 containerd[1726]: time="2025-01-29T10:54:28.258934662Z" level=info msg="shim disconnected" id=c05fdb8e276af3ab253da9d76b1759675f78589c2457f9399f2b12c5b113d251 namespace=k8s.io Jan 29 10:54:28.259358 containerd[1726]: time="2025-01-29T10:54:28.259183622Z" level=warning msg="cleaning up after shim disconnected" id=c05fdb8e276af3ab253da9d76b1759675f78589c2457f9399f2b12c5b113d251 namespace=k8s.io Jan 29 10:54:28.259358 containerd[1726]: time="2025-01-29T10:54:28.259212462Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 10:54:28.507743 containerd[1726]: time="2025-01-29T10:54:28.507693061Z" level=error msg="Failed to destroy network for sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.508191 containerd[1726]: time="2025-01-29T10:54:28.508123021Z" level=error msg="encountered an error cleaning up failed sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.508246 containerd[1726]: time="2025-01-29T10:54:28.508212341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lmknf,Uid:7648d1df-3533-48c0-904f-b5099718a0e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.508649 kubelet[3366]: E0129 10:54:28.508502 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.508649 kubelet[3366]: E0129 10:54:28.508576 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lmknf" Jan 29 10:54:28.508649 kubelet[3366]: E0129 10:54:28.508604 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lmknf" Jan 29 10:54:28.509719 kubelet[3366]: E0129 10:54:28.508650 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lmknf_kube-system(7648d1df-3533-48c0-904f-b5099718a0e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lmknf_kube-system(7648d1df-3533-48c0-904f-b5099718a0e0)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lmknf" podUID="7648d1df-3533-48c0-904f-b5099718a0e0" Jan 29 10:54:28.528076 containerd[1726]: time="2025-01-29T10:54:28.528025020Z" level=error msg="Failed to destroy network for sandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.528376 containerd[1726]: time="2025-01-29T10:54:28.528347340Z" level=error msg="encountered an error cleaning up failed sandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.528432 containerd[1726]: time="2025-01-29T10:54:28.528416100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdbfb578c-sx269,Uid:24a732d5-74a3-4958-bf3c-24c35316b990,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.528654 kubelet[3366]: E0129 10:54:28.528620 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.530947 kubelet[3366]: E0129 10:54:28.528765 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" Jan 29 10:54:28.530947 kubelet[3366]: E0129 10:54:28.528789 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" Jan 29 10:54:28.530947 kubelet[3366]: E0129 10:54:28.529958 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fdbfb578c-sx269_calico-system(24a732d5-74a3-4958-bf3c-24c35316b990)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fdbfb578c-sx269_calico-system(24a732d5-74a3-4958-bf3c-24c35316b990)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" podUID="24a732d5-74a3-4958-bf3c-24c35316b990" Jan 29 10:54:28.545106 containerd[1726]: time="2025-01-29T10:54:28.545057178Z" level=error msg="Failed to destroy network for sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.545280 containerd[1726]: time="2025-01-29T10:54:28.545257978Z" level=error msg="Failed to destroy network for sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.545409 containerd[1726]: time="2025-01-29T10:54:28.545374258Z" level=error msg="encountered an error cleaning up failed sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.545466 containerd[1726]: time="2025-01-29T10:54:28.545443058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.545681 kubelet[3366]: E0129 10:54:28.545647 3366 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.545790 kubelet[3366]: E0129 10:54:28.545776 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9l7s" Jan 29 10:54:28.545888 kubelet[3366]: E0129 10:54:28.545842 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9l7s" Jan 29 10:54:28.546018 kubelet[3366]: E0129 10:54:28.545994 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-v9l7s_kube-system(2b04ba18-4078-4572-b181-dabad7c530d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-v9l7s_kube-system(2b04ba18-4078-4572-b181-dabad7c530d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v9l7s" podUID="2b04ba18-4078-4572-b181-dabad7c530d3" Jan 29 10:54:28.546544 containerd[1726]: time="2025-01-29T10:54:28.546502498Z" level=error msg="encountered an error cleaning up failed sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.546660 containerd[1726]: time="2025-01-29T10:54:28.546640018Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-nfw9l,Uid:ff0d24de-055d-4e71-bae6-576826e88d95,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.547009 kubelet[3366]: E0129 10:54:28.546974 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.547188 kubelet[3366]: E0129 10:54:28.547149 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" Jan 29 10:54:28.547284 kubelet[3366]: E0129 10:54:28.547268 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" Jan 29 10:54:28.547394 kubelet[3366]: E0129 10:54:28.547372 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785d46db66-nfw9l_calico-apiserver(ff0d24de-055d-4e71-bae6-576826e88d95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785d46db66-nfw9l_calico-apiserver(ff0d24de-055d-4e71-bae6-576826e88d95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" podUID="ff0d24de-055d-4e71-bae6-576826e88d95" Jan 29 10:54:28.549151 containerd[1726]: time="2025-01-29T10:54:28.549119098Z" level=error msg="Failed to destroy network for sandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.549500 containerd[1726]: time="2025-01-29T10:54:28.549479538Z" level=error msg="encountered an error cleaning up failed sandbox 
\"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.549622 containerd[1726]: time="2025-01-29T10:54:28.549601058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-x8gcd,Uid:8d76515f-5553-4e43-85a9-9b91a1e79d22,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.550620 kubelet[3366]: E0129 10:54:28.550598 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:28.550764 kubelet[3366]: E0129 10:54:28.550747 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" Jan 29 10:54:28.550876 kubelet[3366]: E0129 10:54:28.550841 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" Jan 29 10:54:28.551007 kubelet[3366]: E0129 10:54:28.550985 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785d46db66-x8gcd_calico-apiserver(8d76515f-5553-4e43-85a9-9b91a1e79d22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785d46db66-x8gcd_calico-apiserver(8d76515f-5553-4e43-85a9-9b91a1e79d22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" podUID="8d76515f-5553-4e43-85a9-9b91a1e79d22" Jan 29 10:54:29.014872 systemd[1]: Created slice kubepods-besteffort-pod85d938fd_edbc_4618_8500_89676d3770ef.slice - libcontainer container kubepods-besteffort-pod85d938fd_edbc_4618_8500_89676d3770ef.slice. 
Jan 29 10:54:29.016841 containerd[1726]: time="2025-01-29T10:54:29.016800219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct4dr,Uid:85d938fd-edbc-4618-8500-89676d3770ef,Namespace:calico-system,Attempt:0,}" Jan 29 10:54:29.087965 containerd[1726]: time="2025-01-29T10:54:29.087905813Z" level=error msg="Failed to destroy network for sandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:29.088335 containerd[1726]: time="2025-01-29T10:54:29.088300173Z" level=error msg="encountered an error cleaning up failed sandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:29.088389 containerd[1726]: time="2025-01-29T10:54:29.088362933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct4dr,Uid:85d938fd-edbc-4618-8500-89676d3770ef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:29.088936 kubelet[3366]: E0129 10:54:29.088556 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:29.088936 kubelet[3366]: E0129 10:54:29.088609 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct4dr" Jan 29 10:54:29.088936 kubelet[3366]: E0129 10:54:29.088627 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct4dr" Jan 29 10:54:29.089068 kubelet[3366]: E0129 10:54:29.088667 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ct4dr_calico-system(85d938fd-edbc-4618-8500-89676d3770ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ct4dr_calico-system(85d938fd-edbc-4618-8500-89676d3770ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ct4dr" podUID="85d938fd-edbc-4618-8500-89676d3770ef" Jan 29 10:54:29.110135 kubelet[3366]: I0129 10:54:29.110107 3366 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d" Jan 29 10:54:29.111706 containerd[1726]: time="2025-01-29T10:54:29.111250651Z" level=info msg="StopPodSandbox for \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\"" Jan 29 10:54:29.111706 containerd[1726]: time="2025-01-29T10:54:29.111415371Z" level=info msg="Ensure that sandbox 00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d in task-service has been cleanup successfully" Jan 29 10:54:29.111813 kubelet[3366]: I0129 10:54:29.111701 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e" Jan 29 10:54:29.112442 containerd[1726]: time="2025-01-29T10:54:29.112101691Z" level=info msg="TearDown network for sandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" successfully" Jan 29 10:54:29.112442 containerd[1726]: time="2025-01-29T10:54:29.112119771Z" level=info msg="StopPodSandbox for \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" returns successfully" Jan 29 10:54:29.112442 containerd[1726]: time="2025-01-29T10:54:29.112185691Z" level=info msg="StopPodSandbox for \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\"" Jan 29 10:54:29.112442 containerd[1726]: time="2025-01-29T10:54:29.112339731Z" level=info msg="Ensure that sandbox 3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e in task-service has been cleanup successfully" Jan 29 10:54:29.112776 containerd[1726]: time="2025-01-29T10:54:29.112723291Z" level=info msg="TearDown network for sandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" successfully" Jan 29 10:54:29.112776 containerd[1726]: time="2025-01-29T10:54:29.112758531Z" level=info msg="StopPodSandbox for \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" returns successfully" Jan 29 10:54:29.114416 containerd[1726]: 
time="2025-01-29T10:54:29.113609731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdbfb578c-sx269,Uid:24a732d5-74a3-4958-bf3c-24c35316b990,Namespace:calico-system,Attempt:1,}" Jan 29 10:54:29.114416 containerd[1726]: time="2025-01-29T10:54:29.113755011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-x8gcd,Uid:8d76515f-5553-4e43-85a9-9b91a1e79d22,Namespace:calico-apiserver,Attempt:1,}" Jan 29 10:54:29.115981 kubelet[3366]: I0129 10:54:29.115963 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f" Jan 29 10:54:29.117631 containerd[1726]: time="2025-01-29T10:54:29.117518051Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\"" Jan 29 10:54:29.118199 containerd[1726]: time="2025-01-29T10:54:29.118079371Z" level=info msg="Ensure that sandbox a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f in task-service has been cleanup successfully" Jan 29 10:54:29.118878 containerd[1726]: time="2025-01-29T10:54:29.118528851Z" level=info msg="TearDown network for sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" successfully" Jan 29 10:54:29.118878 containerd[1726]: time="2025-01-29T10:54:29.118550251Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" returns successfully" Jan 29 10:54:29.118981 kubelet[3366]: I0129 10:54:29.118678 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122" Jan 29 10:54:29.120654 containerd[1726]: time="2025-01-29T10:54:29.120146451Z" level=info msg="StopPodSandbox for \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\"" Jan 29 10:54:29.120654 containerd[1726]: time="2025-01-29T10:54:29.120276051Z" level=info 
msg="Ensure that sandbox ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122 in task-service has been cleanup successfully" Jan 29 10:54:29.121102 containerd[1726]: time="2025-01-29T10:54:29.121027850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:1,}" Jan 29 10:54:29.121511 containerd[1726]: time="2025-01-29T10:54:29.121446210Z" level=info msg="TearDown network for sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" successfully" Jan 29 10:54:29.121511 containerd[1726]: time="2025-01-29T10:54:29.121463090Z" level=info msg="StopPodSandbox for \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" returns successfully" Jan 29 10:54:29.122693 containerd[1726]: time="2025-01-29T10:54:29.122547970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lmknf,Uid:7648d1df-3533-48c0-904f-b5099718a0e0,Namespace:kube-system,Attempt:1,}" Jan 29 10:54:29.123924 kubelet[3366]: I0129 10:54:29.123661 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2" Jan 29 10:54:29.125212 containerd[1726]: time="2025-01-29T10:54:29.125045410Z" level=info msg="StopPodSandbox for \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\"" Jan 29 10:54:29.125481 containerd[1726]: time="2025-01-29T10:54:29.125448090Z" level=info msg="Ensure that sandbox 6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2 in task-service has been cleanup successfully" Jan 29 10:54:29.126340 containerd[1726]: time="2025-01-29T10:54:29.126030010Z" level=info msg="TearDown network for sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" successfully" Jan 29 10:54:29.126340 containerd[1726]: time="2025-01-29T10:54:29.126066210Z" level=info msg="StopPodSandbox for 
\"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" returns successfully" Jan 29 10:54:29.126822 containerd[1726]: time="2025-01-29T10:54:29.126793450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-nfw9l,Uid:ff0d24de-055d-4e71-bae6-576826e88d95,Namespace:calico-apiserver,Attempt:1,}" Jan 29 10:54:29.130472 containerd[1726]: time="2025-01-29T10:54:29.130268250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 10:54:29.130785 kubelet[3366]: I0129 10:54:29.130678 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604" Jan 29 10:54:29.131410 containerd[1726]: time="2025-01-29T10:54:29.131380770Z" level=info msg="StopPodSandbox for \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\"" Jan 29 10:54:29.131547 containerd[1726]: time="2025-01-29T10:54:29.131523650Z" level=info msg="Ensure that sandbox 7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604 in task-service has been cleanup successfully" Jan 29 10:54:29.132298 containerd[1726]: time="2025-01-29T10:54:29.132197490Z" level=info msg="TearDown network for sandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" successfully" Jan 29 10:54:29.132298 containerd[1726]: time="2025-01-29T10:54:29.132224610Z" level=info msg="StopPodSandbox for \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" returns successfully" Jan 29 10:54:29.132870 containerd[1726]: time="2025-01-29T10:54:29.132764489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct4dr,Uid:85d938fd-edbc-4618-8500-89676d3770ef,Namespace:calico-system,Attempt:1,}" Jan 29 10:54:29.358972 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f-shm.mount: Deactivated successfully. 
Jan 29 10:54:29.359692 systemd[1]: run-netns-cni\x2d268b0875\x2ddb27\x2dda6f\x2dff92\x2d14b0ba866348.mount: Deactivated successfully. Jan 29 10:54:29.359792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2-shm.mount: Deactivated successfully. Jan 29 10:54:29.359843 systemd[1]: run-netns-cni\x2d5446d333\x2de3d7\x2dcb70\x2d3864\x2dd1f7fbaeb832.mount: Deactivated successfully. Jan 29 10:54:29.359932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122-shm.mount: Deactivated successfully. Jan 29 10:54:31.093504 containerd[1726]: time="2025-01-29T10:54:31.093390167Z" level=error msg="Failed to destroy network for sandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.095132 containerd[1726]: time="2025-01-29T10:54:31.095088567Z" level=error msg="encountered an error cleaning up failed sandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.095219 containerd[1726]: time="2025-01-29T10:54:31.095166127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdbfb578c-sx269,Uid:24a732d5-74a3-4958-bf3c-24c35316b990,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 29 10:54:31.095418 kubelet[3366]: E0129 10:54:31.095380 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.095650 kubelet[3366]: E0129 10:54:31.095448 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" Jan 29 10:54:31.095650 kubelet[3366]: E0129 10:54:31.095484 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" Jan 29 10:54:31.095650 kubelet[3366]: E0129 10:54:31.095532 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fdbfb578c-sx269_calico-system(24a732d5-74a3-4958-bf3c-24c35316b990)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fdbfb578c-sx269_calico-system(24a732d5-74a3-4958-bf3c-24c35316b990)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" podUID="24a732d5-74a3-4958-bf3c-24c35316b990" Jan 29 10:54:31.136258 kubelet[3366]: I0129 10:54:31.136184 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94" Jan 29 10:54:31.137169 containerd[1726]: time="2025-01-29T10:54:31.136951644Z" level=info msg="StopPodSandbox for \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\"" Jan 29 10:54:31.137169 containerd[1726]: time="2025-01-29T10:54:31.137123884Z" level=info msg="Ensure that sandbox 733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94 in task-service has been cleanup successfully" Jan 29 10:54:31.137893 containerd[1726]: time="2025-01-29T10:54:31.137843883Z" level=info msg="TearDown network for sandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\" successfully" Jan 29 10:54:31.138059 containerd[1726]: time="2025-01-29T10:54:31.137985043Z" level=info msg="StopPodSandbox for \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\" returns successfully" Jan 29 10:54:31.138924 containerd[1726]: time="2025-01-29T10:54:31.138809123Z" level=info msg="StopPodSandbox for \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\"" Jan 29 10:54:31.139073 containerd[1726]: time="2025-01-29T10:54:31.139005723Z" level=info msg="TearDown network for sandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" successfully" Jan 29 10:54:31.139073 containerd[1726]: time="2025-01-29T10:54:31.139022043Z" level=info msg="StopPodSandbox for \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" returns successfully" 
Jan 29 10:54:31.139780 containerd[1726]: time="2025-01-29T10:54:31.139757163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdbfb578c-sx269,Uid:24a732d5-74a3-4958-bf3c-24c35316b990,Namespace:calico-system,Attempt:2,}" Jan 29 10:54:31.294698 containerd[1726]: time="2025-01-29T10:54:31.294642831Z" level=error msg="Failed to destroy network for sandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.295072 containerd[1726]: time="2025-01-29T10:54:31.295043990Z" level=error msg="encountered an error cleaning up failed sandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.295132 containerd[1726]: time="2025-01-29T10:54:31.295105070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-nfw9l,Uid:ff0d24de-055d-4e71-bae6-576826e88d95,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.296974 kubelet[3366]: E0129 10:54:31.295362 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.296974 kubelet[3366]: E0129 10:54:31.295420 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" Jan 29 10:54:31.296974 kubelet[3366]: E0129 10:54:31.295439 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" Jan 29 10:54:31.297116 kubelet[3366]: E0129 10:54:31.295477 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785d46db66-nfw9l_calico-apiserver(ff0d24de-055d-4e71-bae6-576826e88d95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785d46db66-nfw9l_calico-apiserver(ff0d24de-055d-4e71-bae6-576826e88d95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" podUID="ff0d24de-055d-4e71-bae6-576826e88d95" Jan 29 10:54:31.343264 containerd[1726]: 
time="2025-01-29T10:54:31.343208426Z" level=error msg="Failed to destroy network for sandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.345213 containerd[1726]: time="2025-01-29T10:54:31.345092586Z" level=error msg="encountered an error cleaning up failed sandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.345213 containerd[1726]: time="2025-01-29T10:54:31.345170506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.345559 kubelet[3366]: E0129 10:54:31.345412 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.345559 kubelet[3366]: E0129 10:54:31.345462 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9l7s" Jan 29 10:54:31.345559 kubelet[3366]: E0129 10:54:31.345480 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9l7s" Jan 29 10:54:31.345660 kubelet[3366]: E0129 10:54:31.345533 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-v9l7s_kube-system(2b04ba18-4078-4572-b181-dabad7c530d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-v9l7s_kube-system(2b04ba18-4078-4572-b181-dabad7c530d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v9l7s" podUID="2b04ba18-4078-4572-b181-dabad7c530d3" Jan 29 10:54:31.450472 containerd[1726]: time="2025-01-29T10:54:31.450430618Z" level=error msg="Failed to destroy network for sandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.451062 
containerd[1726]: time="2025-01-29T10:54:31.450914778Z" level=error msg="encountered an error cleaning up failed sandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.451062 containerd[1726]: time="2025-01-29T10:54:31.450971298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-x8gcd,Uid:8d76515f-5553-4e43-85a9-9b91a1e79d22,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.451264 kubelet[3366]: E0129 10:54:31.451202 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.451264 kubelet[3366]: E0129 10:54:31.451259 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" Jan 29 10:54:31.451352 kubelet[3366]: E0129 10:54:31.451277 3366 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" Jan 29 10:54:31.451352 kubelet[3366]: E0129 10:54:31.451318 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785d46db66-x8gcd_calico-apiserver(8d76515f-5553-4e43-85a9-9b91a1e79d22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785d46db66-x8gcd_calico-apiserver(8d76515f-5553-4e43-85a9-9b91a1e79d22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" podUID="8d76515f-5553-4e43-85a9-9b91a1e79d22" Jan 29 10:54:31.505096 containerd[1726]: time="2025-01-29T10:54:31.505046493Z" level=error msg="Failed to destroy network for sandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.505376 containerd[1726]: time="2025-01-29T10:54:31.505351133Z" level=error msg="encountered an error cleaning up failed sandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.505438 containerd[1726]: time="2025-01-29T10:54:31.505414533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lmknf,Uid:7648d1df-3533-48c0-904f-b5099718a0e0,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.505965 kubelet[3366]: E0129 10:54:31.505631 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.505965 kubelet[3366]: E0129 10:54:31.505687 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lmknf" Jan 29 10:54:31.505965 kubelet[3366]: E0129 10:54:31.505707 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lmknf" Jan 29 10:54:31.506094 kubelet[3366]: E0129 10:54:31.505754 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lmknf_kube-system(7648d1df-3533-48c0-904f-b5099718a0e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lmknf_kube-system(7648d1df-3533-48c0-904f-b5099718a0e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lmknf" podUID="7648d1df-3533-48c0-904f-b5099718a0e0" Jan 29 10:54:31.612018 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d-shm.mount: Deactivated successfully. Jan 29 10:54:31.612108 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203-shm.mount: Deactivated successfully. Jan 29 10:54:31.612158 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6-shm.mount: Deactivated successfully. Jan 29 10:54:31.612213 systemd[1]: run-netns-cni\x2d6b0767a4\x2d11e0\x2d9edb\x2d9a5e\x2d59dab21348ee.mount: Deactivated successfully. Jan 29 10:54:31.612257 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94-shm.mount: Deactivated successfully. 
Jan 29 10:54:31.691554 containerd[1726]: time="2025-01-29T10:54:31.691494438Z" level=error msg="Failed to destroy network for sandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.691868 containerd[1726]: time="2025-01-29T10:54:31.691825518Z" level=error msg="encountered an error cleaning up failed sandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.691932 containerd[1726]: time="2025-01-29T10:54:31.691905718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct4dr,Uid:85d938fd-edbc-4618-8500-89676d3770ef,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.692276 kubelet[3366]: E0129 10:54:31.692120 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:31.692276 kubelet[3366]: E0129 10:54:31.692175 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct4dr" Jan 29 10:54:31.692276 kubelet[3366]: E0129 10:54:31.692193 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct4dr" Jan 29 10:54:31.692403 kubelet[3366]: E0129 10:54:31.692240 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ct4dr_calico-system(85d938fd-edbc-4618-8500-89676d3770ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ct4dr_calico-system(85d938fd-edbc-4618-8500-89676d3770ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ct4dr" podUID="85d938fd-edbc-4618-8500-89676d3770ef" Jan 29 10:54:31.694619 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223-shm.mount: Deactivated successfully. 
Jan 29 10:54:32.139178 kubelet[3366]: I0129 10:54:32.139138 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203" Jan 29 10:54:32.143872 containerd[1726]: time="2025-01-29T10:54:32.139880041Z" level=info msg="StopPodSandbox for \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\"" Jan 29 10:54:32.143872 containerd[1726]: time="2025-01-29T10:54:32.140045081Z" level=info msg="Ensure that sandbox 96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203 in task-service has been cleanup successfully" Jan 29 10:54:32.143422 systemd[1]: run-netns-cni\x2dd4674a87\x2df7a2\x2df5f6\x2d9219\x2de88f0dfbdd42.mount: Deactivated successfully. Jan 29 10:54:32.144404 containerd[1726]: time="2025-01-29T10:54:32.144372760Z" level=info msg="TearDown network for sandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" successfully" Jan 29 10:54:32.144404 containerd[1726]: time="2025-01-29T10:54:32.144397680Z" level=info msg="StopPodSandbox for \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" returns successfully" Jan 29 10:54:32.144813 containerd[1726]: time="2025-01-29T10:54:32.144681760Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\"" Jan 29 10:54:32.144813 containerd[1726]: time="2025-01-29T10:54:32.144755200Z" level=info msg="TearDown network for sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" successfully" Jan 29 10:54:32.144813 containerd[1726]: time="2025-01-29T10:54:32.144765720Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" returns successfully" Jan 29 10:54:32.145205 containerd[1726]: time="2025-01-29T10:54:32.145167520Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:2,}" Jan 29 10:54:32.147000 kubelet[3366]: I0129 10:54:32.146928 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9" Jan 29 10:54:32.147944 containerd[1726]: time="2025-01-29T10:54:32.147788960Z" level=info msg="StopPodSandbox for \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\"" Jan 29 10:54:32.148926 containerd[1726]: time="2025-01-29T10:54:32.148775080Z" level=info msg="Ensure that sandbox 0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9 in task-service has been cleanup successfully" Jan 29 10:54:32.150946 kubelet[3366]: I0129 10:54:32.150394 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6" Jan 29 10:54:32.151002 containerd[1726]: time="2025-01-29T10:54:32.150726320Z" level=info msg="TearDown network for sandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\" successfully" Jan 29 10:54:32.151002 containerd[1726]: time="2025-01-29T10:54:32.150756280Z" level=info msg="StopPodSandbox for \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\" returns successfully" Jan 29 10:54:32.151002 containerd[1726]: time="2025-01-29T10:54:32.150899680Z" level=info msg="StopPodSandbox for \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\"" Jan 29 10:54:32.151353 containerd[1726]: time="2025-01-29T10:54:32.151233480Z" level=info msg="Ensure that sandbox cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6 in task-service has been cleanup successfully" Jan 29 10:54:32.151737 containerd[1726]: time="2025-01-29T10:54:32.151716760Z" level=info msg="TearDown network for sandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\" successfully" 
Jan 29 10:54:32.152096 containerd[1726]: time="2025-01-29T10:54:32.151813200Z" level=info msg="StopPodSandbox for \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\" returns successfully" Jan 29 10:54:32.152096 containerd[1726]: time="2025-01-29T10:54:32.151936000Z" level=info msg="StopPodSandbox for \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\"" Jan 29 10:54:32.152096 containerd[1726]: time="2025-01-29T10:54:32.152041120Z" level=info msg="TearDown network for sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" successfully" Jan 29 10:54:32.152096 containerd[1726]: time="2025-01-29T10:54:32.152052000Z" level=info msg="StopPodSandbox for \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" returns successfully" Jan 29 10:54:32.153255 containerd[1726]: time="2025-01-29T10:54:32.153055199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lmknf,Uid:7648d1df-3533-48c0-904f-b5099718a0e0,Namespace:kube-system,Attempt:2,}" Jan 29 10:54:32.153255 containerd[1726]: time="2025-01-29T10:54:32.153141199Z" level=info msg="StopPodSandbox for \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\"" Jan 29 10:54:32.153255 containerd[1726]: time="2025-01-29T10:54:32.153205399Z" level=info msg="TearDown network for sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" successfully" Jan 29 10:54:32.153255 containerd[1726]: time="2025-01-29T10:54:32.153213999Z" level=info msg="StopPodSandbox for \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" returns successfully" Jan 29 10:54:32.155420 kubelet[3366]: I0129 10:54:32.154644 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223" Jan 29 10:54:32.155540 containerd[1726]: time="2025-01-29T10:54:32.155504879Z" level=info msg="StopPodSandbox for 
\"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\"" Jan 29 10:54:32.155700 containerd[1726]: time="2025-01-29T10:54:32.155628799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-nfw9l,Uid:ff0d24de-055d-4e71-bae6-576826e88d95,Namespace:calico-apiserver,Attempt:2,}" Jan 29 10:54:32.155700 containerd[1726]: time="2025-01-29T10:54:32.155662759Z" level=info msg="Ensure that sandbox 6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223 in task-service has been cleanup successfully" Jan 29 10:54:32.156731 containerd[1726]: time="2025-01-29T10:54:32.156619119Z" level=info msg="TearDown network for sandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\" successfully" Jan 29 10:54:32.156731 containerd[1726]: time="2025-01-29T10:54:32.156701119Z" level=info msg="StopPodSandbox for \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\" returns successfully" Jan 29 10:54:32.163523 containerd[1726]: time="2025-01-29T10:54:32.162715679Z" level=info msg="StopPodSandbox for \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\"" Jan 29 10:54:32.163523 containerd[1726]: time="2025-01-29T10:54:32.163127759Z" level=info msg="TearDown network for sandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" successfully" Jan 29 10:54:32.163523 containerd[1726]: time="2025-01-29T10:54:32.163142639Z" level=info msg="StopPodSandbox for \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" returns successfully" Jan 29 10:54:32.164370 containerd[1726]: time="2025-01-29T10:54:32.164057439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct4dr,Uid:85d938fd-edbc-4618-8500-89676d3770ef,Namespace:calico-system,Attempt:2,}" Jan 29 10:54:32.165615 kubelet[3366]: I0129 10:54:32.165303 3366 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d" Jan 29 10:54:32.165792 containerd[1726]: time="2025-01-29T10:54:32.165770998Z" level=info msg="StopPodSandbox for \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\"" Jan 29 10:54:32.166165 containerd[1726]: time="2025-01-29T10:54:32.166146318Z" level=info msg="Ensure that sandbox 5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d in task-service has been cleanup successfully" Jan 29 10:54:32.166549 containerd[1726]: time="2025-01-29T10:54:32.166512038Z" level=info msg="TearDown network for sandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\" successfully" Jan 29 10:54:32.166652 containerd[1726]: time="2025-01-29T10:54:32.166529398Z" level=info msg="StopPodSandbox for \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\" returns successfully" Jan 29 10:54:32.167690 containerd[1726]: time="2025-01-29T10:54:32.167012278Z" level=info msg="StopPodSandbox for \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\"" Jan 29 10:54:32.168035 containerd[1726]: time="2025-01-29T10:54:32.167972278Z" level=info msg="TearDown network for sandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" successfully" Jan 29 10:54:32.168035 containerd[1726]: time="2025-01-29T10:54:32.167988958Z" level=info msg="StopPodSandbox for \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" returns successfully" Jan 29 10:54:32.170016 containerd[1726]: time="2025-01-29T10:54:32.169990238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-x8gcd,Uid:8d76515f-5553-4e43-85a9-9b91a1e79d22,Namespace:calico-apiserver,Attempt:2,}" Jan 29 10:54:32.209026 containerd[1726]: time="2025-01-29T10:54:32.208975035Z" level=error msg="Failed to destroy network for sandbox \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:32.209312 containerd[1726]: time="2025-01-29T10:54:32.209283435Z" level=error msg="encountered an error cleaning up failed sandbox \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:32.209361 containerd[1726]: time="2025-01-29T10:54:32.209345195Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdbfb578c-sx269,Uid:24a732d5-74a3-4958-bf3c-24c35316b990,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:32.209661 kubelet[3366]: E0129 10:54:32.209620 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:32.209726 kubelet[3366]: E0129 10:54:32.209681 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" Jan 29 10:54:32.209726 kubelet[3366]: E0129 10:54:32.209706 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" Jan 29 10:54:32.209784 kubelet[3366]: E0129 10:54:32.209747 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fdbfb578c-sx269_calico-system(24a732d5-74a3-4958-bf3c-24c35316b990)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fdbfb578c-sx269_calico-system(24a732d5-74a3-4958-bf3c-24c35316b990)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" podUID="24a732d5-74a3-4958-bf3c-24c35316b990" Jan 29 10:54:32.610562 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c-shm.mount: Deactivated successfully. Jan 29 10:54:32.610675 systemd[1]: run-netns-cni\x2dd45d6ab0\x2d7875\x2dbdc2\x2d9413\x2db897987d0fbf.mount: Deactivated successfully. Jan 29 10:54:32.610725 systemd[1]: run-netns-cni\x2d811a2a86\x2d3cbc\x2d1b67\x2ddafd\x2d4f6f312d4724.mount: Deactivated successfully. 
Jan 29 10:54:32.610769 systemd[1]: run-netns-cni\x2d8bd5b10e\x2d1ac6\x2d4cb9\x2dcc6f\x2d427f49a1fe13.mount: Deactivated successfully. Jan 29 10:54:32.610813 systemd[1]: run-netns-cni\x2da2f0f2ea\x2d9b84\x2d49f2\x2d5b5a\x2d7d91e56b8e20.mount: Deactivated successfully. Jan 29 10:54:33.168722 kubelet[3366]: I0129 10:54:33.168696 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c" Jan 29 10:54:33.169901 containerd[1726]: time="2025-01-29T10:54:33.169768595Z" level=info msg="StopPodSandbox for \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\"" Jan 29 10:54:33.170432 containerd[1726]: time="2025-01-29T10:54:33.169954955Z" level=info msg="Ensure that sandbox 717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c in task-service has been cleanup successfully" Jan 29 10:54:33.171332 containerd[1726]: time="2025-01-29T10:54:33.171297875Z" level=info msg="TearDown network for sandbox \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\" successfully" Jan 29 10:54:33.171332 containerd[1726]: time="2025-01-29T10:54:33.171325755Z" level=info msg="StopPodSandbox for \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\" returns successfully" Jan 29 10:54:33.174144 containerd[1726]: time="2025-01-29T10:54:33.171831835Z" level=info msg="StopPodSandbox for \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\"" Jan 29 10:54:33.174144 containerd[1726]: time="2025-01-29T10:54:33.171939195Z" level=info msg="TearDown network for sandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\" successfully" Jan 29 10:54:33.174144 containerd[1726]: time="2025-01-29T10:54:33.171949275Z" level=info msg="StopPodSandbox for \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\" returns successfully" Jan 29 10:54:33.174144 containerd[1726]: time="2025-01-29T10:54:33.173627635Z" level=info 
msg="StopPodSandbox for \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\"" Jan 29 10:54:33.174144 containerd[1726]: time="2025-01-29T10:54:33.173742315Z" level=info msg="TearDown network for sandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" successfully" Jan 29 10:54:33.174144 containerd[1726]: time="2025-01-29T10:54:33.173753715Z" level=info msg="StopPodSandbox for \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" returns successfully" Jan 29 10:54:33.174144 containerd[1726]: time="2025-01-29T10:54:33.174285075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdbfb578c-sx269,Uid:24a732d5-74a3-4958-bf3c-24c35316b990,Namespace:calico-system,Attempt:3,}" Jan 29 10:54:33.172865 systemd[1]: run-netns-cni\x2de7c1e644\x2d0722\x2d1344\x2d34cd\x2df747d9498e83.mount: Deactivated successfully. Jan 29 10:54:36.389895 containerd[1726]: time="2025-01-29T10:54:36.389074959Z" level=error msg="Failed to destroy network for sandbox \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.391105 containerd[1726]: time="2025-01-29T10:54:36.391067958Z" level=error msg="encountered an error cleaning up failed sandbox \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.391230 containerd[1726]: time="2025-01-29T10:54:36.391136678Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-nfw9l,Uid:ff0d24de-055d-4e71-bae6-576826e88d95,Namespace:calico-apiserver,Attempt:2,} failed, error" 
error="failed to setup network for sandbox \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.391407 kubelet[3366]: E0129 10:54:36.391330 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.391407 kubelet[3366]: E0129 10:54:36.391380 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" Jan 29 10:54:36.393335 kubelet[3366]: E0129 10:54:36.391405 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" Jan 29 10:54:36.393335 kubelet[3366]: E0129 10:54:36.391441 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785d46db66-nfw9l_calico-apiserver(ff0d24de-055d-4e71-bae6-576826e88d95)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785d46db66-nfw9l_calico-apiserver(ff0d24de-055d-4e71-bae6-576826e88d95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" podUID="ff0d24de-055d-4e71-bae6-576826e88d95" Jan 29 10:54:36.473271 containerd[1726]: time="2025-01-29T10:54:36.472964794Z" level=error msg="Failed to destroy network for sandbox \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.477057 containerd[1726]: time="2025-01-29T10:54:36.476936312Z" level=error msg="encountered an error cleaning up failed sandbox \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.477138 containerd[1726]: time="2025-01-29T10:54:36.477082072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-x8gcd,Uid:8d76515f-5553-4e43-85a9-9b91a1e79d22,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.478962 
kubelet[3366]: E0129 10:54:36.477345 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.478962 kubelet[3366]: E0129 10:54:36.477392 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" Jan 29 10:54:36.478962 kubelet[3366]: E0129 10:54:36.477411 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" Jan 29 10:54:36.479438 kubelet[3366]: E0129 10:54:36.477453 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785d46db66-x8gcd_calico-apiserver(8d76515f-5553-4e43-85a9-9b91a1e79d22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785d46db66-x8gcd_calico-apiserver(8d76515f-5553-4e43-85a9-9b91a1e79d22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" podUID="8d76515f-5553-4e43-85a9-9b91a1e79d22" Jan 29 10:54:36.527990 containerd[1726]: time="2025-01-29T10:54:36.527798845Z" level=error msg="Failed to destroy network for sandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.528487 containerd[1726]: time="2025-01-29T10:54:36.528326125Z" level=error msg="encountered an error cleaning up failed sandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.528487 containerd[1726]: time="2025-01-29T10:54:36.528391685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.528783 kubelet[3366]: E0129 10:54:36.528596 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.528783 kubelet[3366]: E0129 10:54:36.528651 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9l7s" Jan 29 10:54:36.528783 kubelet[3366]: E0129 10:54:36.528670 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9l7s" Jan 29 10:54:36.529783 kubelet[3366]: E0129 10:54:36.528725 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-v9l7s_kube-system(2b04ba18-4078-4572-b181-dabad7c530d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-v9l7s_kube-system(2b04ba18-4078-4572-b181-dabad7c530d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v9l7s" podUID="2b04ba18-4078-4572-b181-dabad7c530d3" Jan 29 10:54:36.707047 containerd[1726]: time="2025-01-29T10:54:36.706894950Z" level=error msg="Failed to destroy 
network for sandbox \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.707836 containerd[1726]: time="2025-01-29T10:54:36.707802229Z" level=error msg="encountered an error cleaning up failed sandbox \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.707926 containerd[1726]: time="2025-01-29T10:54:36.707902509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lmknf,Uid:7648d1df-3533-48c0-904f-b5099718a0e0,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.708151 kubelet[3366]: E0129 10:54:36.708101 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.708249 kubelet[3366]: E0129 10:54:36.708165 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lmknf" Jan 29 10:54:36.708249 kubelet[3366]: E0129 10:54:36.708184 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lmknf" Jan 29 10:54:36.708249 kubelet[3366]: E0129 10:54:36.708227 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lmknf_kube-system(7648d1df-3533-48c0-904f-b5099718a0e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lmknf_kube-system(7648d1df-3533-48c0-904f-b5099718a0e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lmknf" podUID="7648d1df-3533-48c0-904f-b5099718a0e0" Jan 29 10:54:36.759226 containerd[1726]: time="2025-01-29T10:54:36.759015042Z" level=error msg="Failed to destroy network for sandbox \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.759950 containerd[1726]: time="2025-01-29T10:54:36.759737562Z" level=error msg="encountered an error 
cleaning up failed sandbox \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.759950 containerd[1726]: time="2025-01-29T10:54:36.759884521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct4dr,Uid:85d938fd-edbc-4618-8500-89676d3770ef,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.760302 kubelet[3366]: E0129 10:54:36.760258 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:36.760363 kubelet[3366]: E0129 10:54:36.760323 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct4dr" Jan 29 10:54:36.760363 kubelet[3366]: E0129 10:54:36.760346 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct4dr" Jan 29 10:54:36.760418 kubelet[3366]: E0129 10:54:36.760394 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ct4dr_calico-system(85d938fd-edbc-4618-8500-89676d3770ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ct4dr_calico-system(85d938fd-edbc-4618-8500-89676d3770ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ct4dr" podUID="85d938fd-edbc-4618-8500-89676d3770ef" Jan 29 10:54:36.913387 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7-shm.mount: Deactivated successfully. Jan 29 10:54:36.913475 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e-shm.mount: Deactivated successfully. Jan 29 10:54:36.913524 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac-shm.mount: Deactivated successfully. Jan 29 10:54:36.913579 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b-shm.mount: Deactivated successfully. 
Jan 29 10:54:37.177576 kubelet[3366]: I0129 10:54:37.177530 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7" Jan 29 10:54:37.178781 containerd[1726]: time="2025-01-29T10:54:37.178614738Z" level=info msg="StopPodSandbox for \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\"" Jan 29 10:54:37.180939 containerd[1726]: time="2025-01-29T10:54:37.179028738Z" level=info msg="Ensure that sandbox d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7 in task-service has been cleanup successfully" Jan 29 10:54:37.182633 containerd[1726]: time="2025-01-29T10:54:37.181009537Z" level=info msg="TearDown network for sandbox \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\" successfully" Jan 29 10:54:37.182633 containerd[1726]: time="2025-01-29T10:54:37.181027737Z" level=info msg="StopPodSandbox for \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\" returns successfully" Jan 29 10:54:37.182633 containerd[1726]: time="2025-01-29T10:54:37.182197297Z" level=info msg="StopPodSandbox for \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\"" Jan 29 10:54:37.182633 containerd[1726]: time="2025-01-29T10:54:37.182310896Z" level=info msg="TearDown network for sandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\" successfully" Jan 29 10:54:37.182633 containerd[1726]: time="2025-01-29T10:54:37.182323816Z" level=info msg="StopPodSandbox for \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\" returns successfully" Jan 29 10:54:37.182633 containerd[1726]: time="2025-01-29T10:54:37.182556096Z" level=info msg="StopPodSandbox for \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\"" Jan 29 10:54:37.182633 containerd[1726]: time="2025-01-29T10:54:37.182631416Z" level=info msg="TearDown network for sandbox 
\"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" successfully" Jan 29 10:54:37.181185 systemd[1]: run-netns-cni\x2dcfcc37c9\x2d6f22\x2d3e25\x2d3825\x2def0456e6ea8a.mount: Deactivated successfully. Jan 29 10:54:37.182905 containerd[1726]: time="2025-01-29T10:54:37.182639936Z" level=info msg="StopPodSandbox for \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" returns successfully" Jan 29 10:54:37.183652 containerd[1726]: time="2025-01-29T10:54:37.183349976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct4dr,Uid:85d938fd-edbc-4618-8500-89676d3770ef,Namespace:calico-system,Attempt:3,}" Jan 29 10:54:37.184992 kubelet[3366]: I0129 10:54:37.184625 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2" Jan 29 10:54:37.185505 containerd[1726]: time="2025-01-29T10:54:37.185243575Z" level=info msg="StopPodSandbox for \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\"" Jan 29 10:54:37.185505 containerd[1726]: time="2025-01-29T10:54:37.185380695Z" level=info msg="Ensure that sandbox 8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2 in task-service has been cleanup successfully" Jan 29 10:54:37.185807 containerd[1726]: time="2025-01-29T10:54:37.185786055Z" level=info msg="TearDown network for sandbox \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\" successfully" Jan 29 10:54:37.185920 containerd[1726]: time="2025-01-29T10:54:37.185904575Z" level=info msg="StopPodSandbox for \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\" returns successfully" Jan 29 10:54:37.187629 containerd[1726]: time="2025-01-29T10:54:37.187602974Z" level=info msg="StopPodSandbox for \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\"" Jan 29 10:54:37.187705 containerd[1726]: time="2025-01-29T10:54:37.187681854Z" level=info msg="TearDown network for 
sandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\" successfully" Jan 29 10:54:37.187705 containerd[1726]: time="2025-01-29T10:54:37.187691294Z" level=info msg="StopPodSandbox for \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\" returns successfully" Jan 29 10:54:37.188099 containerd[1726]: time="2025-01-29T10:54:37.187961213Z" level=info msg="StopPodSandbox for \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\"" Jan 29 10:54:37.188099 containerd[1726]: time="2025-01-29T10:54:37.188041013Z" level=info msg="TearDown network for sandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" successfully" Jan 29 10:54:37.188099 containerd[1726]: time="2025-01-29T10:54:37.188050573Z" level=info msg="StopPodSandbox for \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" returns successfully" Jan 29 10:54:37.189097 systemd[1]: run-netns-cni\x2de7a01599\x2df780\x2d2089\x2d1872\x2d51f754e6d127.mount: Deactivated successfully. 
Jan 29 10:54:37.190155 containerd[1726]: time="2025-01-29T10:54:37.189968492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-x8gcd,Uid:8d76515f-5553-4e43-85a9-9b91a1e79d22,Namespace:calico-apiserver,Attempt:3,}" Jan 29 10:54:37.192282 kubelet[3366]: I0129 10:54:37.192244 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac" Jan 29 10:54:37.192931 containerd[1726]: time="2025-01-29T10:54:37.192910011Z" level=info msg="StopPodSandbox for \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\"" Jan 29 10:54:37.193158 containerd[1726]: time="2025-01-29T10:54:37.193140331Z" level=info msg="Ensure that sandbox 1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac in task-service has been cleanup successfully" Jan 29 10:54:37.195178 systemd[1]: run-netns-cni\x2d8a29dc17\x2dcbde\x2db768\x2db26c\x2da144d273d9f0.mount: Deactivated successfully. 
Jan 29 10:54:37.195395 containerd[1726]: time="2025-01-29T10:54:37.195318450Z" level=info msg="TearDown network for sandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\" successfully" Jan 29 10:54:37.195395 containerd[1726]: time="2025-01-29T10:54:37.195336370Z" level=info msg="StopPodSandbox for \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\" returns successfully" Jan 29 10:54:37.196545 containerd[1726]: time="2025-01-29T10:54:37.196398569Z" level=info msg="StopPodSandbox for \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\"" Jan 29 10:54:37.196545 containerd[1726]: time="2025-01-29T10:54:37.196478329Z" level=info msg="TearDown network for sandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" successfully" Jan 29 10:54:37.196545 containerd[1726]: time="2025-01-29T10:54:37.196488009Z" level=info msg="StopPodSandbox for \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" returns successfully" Jan 29 10:54:37.197100 kubelet[3366]: I0129 10:54:37.196813 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e" Jan 29 10:54:37.197622 containerd[1726]: time="2025-01-29T10:54:37.197405008Z" level=info msg="StopPodSandbox for \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\"" Jan 29 10:54:37.198222 containerd[1726]: time="2025-01-29T10:54:37.198172848Z" level=info msg="Ensure that sandbox 269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e in task-service has been cleanup successfully" Jan 29 10:54:37.200015 containerd[1726]: time="2025-01-29T10:54:37.199969087Z" level=info msg="TearDown network for sandbox \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\" successfully" Jan 29 10:54:37.200015 containerd[1726]: time="2025-01-29T10:54:37.199992167Z" level=info msg="StopPodSandbox for 
\"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\" returns successfully" Jan 29 10:54:37.200398 containerd[1726]: time="2025-01-29T10:54:37.200177127Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\"" Jan 29 10:54:37.200398 containerd[1726]: time="2025-01-29T10:54:37.200260367Z" level=info msg="TearDown network for sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" successfully" Jan 29 10:54:37.200398 containerd[1726]: time="2025-01-29T10:54:37.200313247Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" returns successfully" Jan 29 10:54:37.202130 containerd[1726]: time="2025-01-29T10:54:37.201422126Z" level=info msg="StopPodSandbox for \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\"" Jan 29 10:54:37.202553 containerd[1726]: time="2025-01-29T10:54:37.201762326Z" level=info msg="TearDown network for sandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\" successfully" Jan 29 10:54:37.202758 containerd[1726]: time="2025-01-29T10:54:37.202320606Z" level=info msg="StopPodSandbox for \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\" returns successfully" Jan 29 10:54:37.203171 containerd[1726]: time="2025-01-29T10:54:37.202843246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:3,}" Jan 29 10:54:37.202967 systemd[1]: run-netns-cni\x2d4fec3277\x2d6fd5\x2d0b0d\x2db7ab\x2d1f52fa260887.mount: Deactivated successfully. 
Jan 29 10:54:37.204905 containerd[1726]: time="2025-01-29T10:54:37.204789084Z" level=info msg="StopPodSandbox for \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\"" Jan 29 10:54:37.205039 kubelet[3366]: I0129 10:54:37.204753 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b" Jan 29 10:54:37.205286 containerd[1726]: time="2025-01-29T10:54:37.205110684Z" level=info msg="TearDown network for sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" successfully" Jan 29 10:54:37.205286 containerd[1726]: time="2025-01-29T10:54:37.205125804Z" level=info msg="StopPodSandbox for \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" returns successfully" Jan 29 10:54:37.206951 containerd[1726]: time="2025-01-29T10:54:37.206630404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lmknf,Uid:7648d1df-3533-48c0-904f-b5099718a0e0,Namespace:kube-system,Attempt:3,}" Jan 29 10:54:37.208496 containerd[1726]: time="2025-01-29T10:54:37.208170523Z" level=info msg="StopPodSandbox for \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\"" Jan 29 10:54:37.208496 containerd[1726]: time="2025-01-29T10:54:37.208309483Z" level=info msg="Ensure that sandbox 2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b in task-service has been cleanup successfully" Jan 29 10:54:37.209721 containerd[1726]: time="2025-01-29T10:54:37.209685362Z" level=info msg="TearDown network for sandbox \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\" successfully" Jan 29 10:54:37.209877 containerd[1726]: time="2025-01-29T10:54:37.209711882Z" level=info msg="StopPodSandbox for \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\" returns successfully" Jan 29 10:54:37.211867 containerd[1726]: time="2025-01-29T10:54:37.211697921Z" level=info msg="StopPodSandbox for 
\"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\"" Jan 29 10:54:37.212147 containerd[1726]: time="2025-01-29T10:54:37.212094441Z" level=info msg="TearDown network for sandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\" successfully" Jan 29 10:54:37.212352 containerd[1726]: time="2025-01-29T10:54:37.212222641Z" level=info msg="StopPodSandbox for \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\" returns successfully" Jan 29 10:54:37.213447 containerd[1726]: time="2025-01-29T10:54:37.213338880Z" level=info msg="StopPodSandbox for \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\"" Jan 29 10:54:37.213447 containerd[1726]: time="2025-01-29T10:54:37.213415520Z" level=info msg="TearDown network for sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" successfully" Jan 29 10:54:37.213447 containerd[1726]: time="2025-01-29T10:54:37.213425200Z" level=info msg="StopPodSandbox for \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" returns successfully" Jan 29 10:54:37.214572 containerd[1726]: time="2025-01-29T10:54:37.214539719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-nfw9l,Uid:ff0d24de-055d-4e71-bae6-576826e88d95,Namespace:calico-apiserver,Attempt:3,}" Jan 29 10:54:37.911803 systemd[1]: run-netns-cni\x2da3529e26\x2d7637\x2d364b\x2d799b\x2da45bc02e5d6a.mount: Deactivated successfully. 
Jan 29 10:54:42.292258 containerd[1726]: time="2025-01-29T10:54:42.292195534Z" level=error msg="Failed to destroy network for sandbox \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:42.294417 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28-shm.mount: Deactivated successfully. Jan 29 10:54:42.295521 containerd[1726]: time="2025-01-29T10:54:42.295452573Z" level=error msg="encountered an error cleaning up failed sandbox \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:42.296106 containerd[1726]: time="2025-01-29T10:54:42.295925532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdbfb578c-sx269,Uid:24a732d5-74a3-4958-bf3c-24c35316b990,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:42.296943 kubelet[3366]: E0129 10:54:42.296496 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 29 10:54:42.296943 kubelet[3366]: E0129 10:54:42.296558 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" Jan 29 10:54:42.296943 kubelet[3366]: E0129 10:54:42.296578 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" Jan 29 10:54:42.297271 kubelet[3366]: E0129 10:54:42.296625 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fdbfb578c-sx269_calico-system(24a732d5-74a3-4958-bf3c-24c35316b990)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fdbfb578c-sx269_calico-system(24a732d5-74a3-4958-bf3c-24c35316b990)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" podUID="24a732d5-74a3-4958-bf3c-24c35316b990" Jan 29 10:54:43.219747 kubelet[3366]: I0129 10:54:43.219705 3366 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28" Jan 29 10:54:43.221018 containerd[1726]: time="2025-01-29T10:54:43.220643760Z" level=info msg="StopPodSandbox for \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\"" Jan 29 10:54:43.221757 containerd[1726]: time="2025-01-29T10:54:43.221611799Z" level=info msg="Ensure that sandbox 70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28 in task-service has been cleanup successfully" Jan 29 10:54:43.222071 containerd[1726]: time="2025-01-29T10:54:43.222050959Z" level=info msg="TearDown network for sandbox \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\" successfully" Jan 29 10:54:43.222202 containerd[1726]: time="2025-01-29T10:54:43.222136839Z" level=info msg="StopPodSandbox for \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\" returns successfully" Jan 29 10:54:43.223812 systemd[1]: run-netns-cni\x2d190483c3\x2dd55c\x2dbf63\x2d980d\x2de2219e28522a.mount: Deactivated successfully. 
Jan 29 10:54:43.224320 containerd[1726]: time="2025-01-29T10:54:43.224287838Z" level=info msg="StopPodSandbox for \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\"" Jan 29 10:54:43.224395 containerd[1726]: time="2025-01-29T10:54:43.224383678Z" level=info msg="TearDown network for sandbox \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\" successfully" Jan 29 10:54:43.224437 containerd[1726]: time="2025-01-29T10:54:43.224395838Z" level=info msg="StopPodSandbox for \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\" returns successfully" Jan 29 10:54:43.226678 containerd[1726]: time="2025-01-29T10:54:43.225691197Z" level=info msg="StopPodSandbox for \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\"" Jan 29 10:54:43.226678 containerd[1726]: time="2025-01-29T10:54:43.225771397Z" level=info msg="TearDown network for sandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\" successfully" Jan 29 10:54:43.226678 containerd[1726]: time="2025-01-29T10:54:43.225780957Z" level=info msg="StopPodSandbox for \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\" returns successfully" Jan 29 10:54:43.226980 containerd[1726]: time="2025-01-29T10:54:43.226949396Z" level=info msg="StopPodSandbox for \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\"" Jan 29 10:54:43.227333 containerd[1726]: time="2025-01-29T10:54:43.227313076Z" level=info msg="TearDown network for sandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" successfully" Jan 29 10:54:43.227424 containerd[1726]: time="2025-01-29T10:54:43.227409996Z" level=info msg="StopPodSandbox for \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" returns successfully" Jan 29 10:54:43.229177 containerd[1726]: time="2025-01-29T10:54:43.228663676Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-fdbfb578c-sx269,Uid:24a732d5-74a3-4958-bf3c-24c35316b990,Namespace:calico-system,Attempt:4,}" Jan 29 10:54:43.867994 containerd[1726]: time="2025-01-29T10:54:43.867942215Z" level=error msg="Failed to destroy network for sandbox \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:43.868822 containerd[1726]: time="2025-01-29T10:54:43.868661935Z" level=error msg="encountered an error cleaning up failed sandbox \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:43.868822 containerd[1726]: time="2025-01-29T10:54:43.868738575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct4dr,Uid:85d938fd-edbc-4618-8500-89676d3770ef,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:43.869219 kubelet[3366]: E0129 10:54:43.869180 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:43.869665 kubelet[3366]: 
E0129 10:54:43.869238 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct4dr" Jan 29 10:54:43.869665 kubelet[3366]: E0129 10:54:43.869258 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct4dr" Jan 29 10:54:43.869665 kubelet[3366]: E0129 10:54:43.869300 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ct4dr_calico-system(85d938fd-edbc-4618-8500-89676d3770ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ct4dr_calico-system(85d938fd-edbc-4618-8500-89676d3770ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ct4dr" podUID="85d938fd-edbc-4618-8500-89676d3770ef" Jan 29 10:54:44.164061 containerd[1726]: time="2025-01-29T10:54:44.163773777Z" level=error msg="Failed to destroy network for sandbox \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.165024 containerd[1726]: time="2025-01-29T10:54:44.164987977Z" level=error msg="encountered an error cleaning up failed sandbox \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.165260 containerd[1726]: time="2025-01-29T10:54:44.165230817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-nfw9l,Uid:ff0d24de-055d-4e71-bae6-576826e88d95,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.166120 kubelet[3366]: E0129 10:54:44.166080 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.166197 kubelet[3366]: E0129 10:54:44.166135 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" Jan 29 10:54:44.166197 kubelet[3366]: E0129 10:54:44.166157 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" Jan 29 10:54:44.166808 kubelet[3366]: E0129 10:54:44.166709 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785d46db66-nfw9l_calico-apiserver(ff0d24de-055d-4e71-bae6-576826e88d95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785d46db66-nfw9l_calico-apiserver(ff0d24de-055d-4e71-bae6-576826e88d95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" podUID="ff0d24de-055d-4e71-bae6-576826e88d95" Jan 29 10:54:44.213254 containerd[1726]: time="2025-01-29T10:54:44.213204951Z" level=error msg="Failed to destroy network for sandbox \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.213928 containerd[1726]: time="2025-01-29T10:54:44.213898351Z" level=error msg="encountered an error cleaning up failed sandbox 
\"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.214213 containerd[1726]: time="2025-01-29T10:54:44.214049231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.214412 kubelet[3366]: E0129 10:54:44.214385 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.214629 kubelet[3366]: E0129 10:54:44.214508 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9l7s" Jan 29 10:54:44.214629 kubelet[3366]: E0129 10:54:44.214545 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9l7s" Jan 29 10:54:44.214629 kubelet[3366]: E0129 10:54:44.214592 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-v9l7s_kube-system(2b04ba18-4078-4572-b181-dabad7c530d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-v9l7s_kube-system(2b04ba18-4078-4572-b181-dabad7c530d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v9l7s" podUID="2b04ba18-4078-4572-b181-dabad7c530d3" Jan 29 10:54:44.228848 containerd[1726]: time="2025-01-29T10:54:44.228701743Z" level=info msg="StopPodSandbox for \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\"" Jan 29 10:54:44.228848 containerd[1726]: time="2025-01-29T10:54:44.228900783Z" level=info msg="Ensure that sandbox c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0 in task-service has been cleanup successfully" Jan 29 10:54:44.229160 kubelet[3366]: I0129 10:54:44.227983 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0" Jan 29 10:54:44.229513 containerd[1726]: time="2025-01-29T10:54:44.229213503Z" level=info msg="TearDown network for sandbox \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\" successfully" Jan 29 10:54:44.229513 containerd[1726]: time="2025-01-29T10:54:44.229234143Z" 
level=info msg="StopPodSandbox for \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\" returns successfully" Jan 29 10:54:44.230197 containerd[1726]: time="2025-01-29T10:54:44.229841382Z" level=info msg="StopPodSandbox for \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\"" Jan 29 10:54:44.230536 containerd[1726]: time="2025-01-29T10:54:44.230448662Z" level=info msg="TearDown network for sandbox \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\" successfully" Jan 29 10:54:44.230536 containerd[1726]: time="2025-01-29T10:54:44.230469702Z" level=info msg="StopPodSandbox for \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\" returns successfully" Jan 29 10:54:44.231985 containerd[1726]: time="2025-01-29T10:54:44.231847501Z" level=info msg="StopPodSandbox for \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\"" Jan 29 10:54:44.232455 containerd[1726]: time="2025-01-29T10:54:44.232209501Z" level=info msg="TearDown network for sandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\" successfully" Jan 29 10:54:44.232455 containerd[1726]: time="2025-01-29T10:54:44.232230661Z" level=info msg="StopPodSandbox for \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\" returns successfully" Jan 29 10:54:44.233831 containerd[1726]: time="2025-01-29T10:54:44.233798260Z" level=info msg="StopPodSandbox for \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\"" Jan 29 10:54:44.233831 containerd[1726]: time="2025-01-29T10:54:44.234216420Z" level=info msg="TearDown network for sandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" successfully" Jan 29 10:54:44.233831 containerd[1726]: time="2025-01-29T10:54:44.234244260Z" level=info msg="StopPodSandbox for \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" returns successfully" Jan 29 10:54:44.235634 containerd[1726]: time="2025-01-29T10:54:44.235609779Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct4dr,Uid:85d938fd-edbc-4618-8500-89676d3770ef,Namespace:calico-system,Attempt:4,}" Jan 29 10:54:44.235944 kubelet[3366]: I0129 10:54:44.235924 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522" Jan 29 10:54:44.239823 containerd[1726]: time="2025-01-29T10:54:44.238505978Z" level=info msg="StopPodSandbox for \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\"" Jan 29 10:54:44.240489 containerd[1726]: time="2025-01-29T10:54:44.240448337Z" level=info msg="Ensure that sandbox 9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522 in task-service has been cleanup successfully" Jan 29 10:54:44.241721 containerd[1726]: time="2025-01-29T10:54:44.241688456Z" level=info msg="TearDown network for sandbox \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\" successfully" Jan 29 10:54:44.241721 containerd[1726]: time="2025-01-29T10:54:44.241715816Z" level=info msg="StopPodSandbox for \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\" returns successfully" Jan 29 10:54:44.242346 containerd[1726]: time="2025-01-29T10:54:44.242216496Z" level=info msg="StopPodSandbox for \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\"" Jan 29 10:54:44.242873 containerd[1726]: time="2025-01-29T10:54:44.242833175Z" level=info msg="TearDown network for sandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\" successfully" Jan 29 10:54:44.243184 containerd[1726]: time="2025-01-29T10:54:44.243115575Z" level=info msg="StopPodSandbox for \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\" returns successfully" Jan 29 10:54:44.243831 containerd[1726]: time="2025-01-29T10:54:44.243709695Z" level=info msg="StopPodSandbox for \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\"" Jan 29 10:54:44.244031 
containerd[1726]: time="2025-01-29T10:54:44.243997775Z" level=info msg="TearDown network for sandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" successfully" Jan 29 10:54:44.244153 containerd[1726]: time="2025-01-29T10:54:44.244128175Z" level=info msg="StopPodSandbox for \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" returns successfully" Jan 29 10:54:44.244465 containerd[1726]: time="2025-01-29T10:54:44.244443894Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\"" Jan 29 10:54:44.245341 kubelet[3366]: I0129 10:54:44.245314 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1" Jan 29 10:54:44.246570 containerd[1726]: time="2025-01-29T10:54:44.246535293Z" level=info msg="TearDown network for sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" successfully" Jan 29 10:54:44.246570 containerd[1726]: time="2025-01-29T10:54:44.246563013Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" returns successfully" Jan 29 10:54:44.248425 containerd[1726]: time="2025-01-29T10:54:44.248314852Z" level=info msg="StopPodSandbox for \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\"" Jan 29 10:54:44.249113 containerd[1726]: time="2025-01-29T10:54:44.249074292Z" level=info msg="Ensure that sandbox 59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1 in task-service has been cleanup successfully" Jan 29 10:54:44.250312 containerd[1726]: time="2025-01-29T10:54:44.250187291Z" level=info msg="TearDown network for sandbox \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\" successfully" Jan 29 10:54:44.250312 containerd[1726]: time="2025-01-29T10:54:44.250207931Z" level=info msg="StopPodSandbox for 
\"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\" returns successfully" Jan 29 10:54:44.251033 containerd[1726]: time="2025-01-29T10:54:44.250952251Z" level=info msg="StopPodSandbox for \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\"" Jan 29 10:54:44.251033 containerd[1726]: time="2025-01-29T10:54:44.251032851Z" level=info msg="TearDown network for sandbox \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\" successfully" Jan 29 10:54:44.251130 containerd[1726]: time="2025-01-29T10:54:44.251042571Z" level=info msg="StopPodSandbox for \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\" returns successfully" Jan 29 10:54:44.251456 containerd[1726]: time="2025-01-29T10:54:44.251198371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:4,}" Jan 29 10:54:44.251958 containerd[1726]: time="2025-01-29T10:54:44.251736291Z" level=info msg="StopPodSandbox for \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\"" Jan 29 10:54:44.252346 containerd[1726]: time="2025-01-29T10:54:44.252219170Z" level=info msg="TearDown network for sandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\" successfully" Jan 29 10:54:44.252346 containerd[1726]: time="2025-01-29T10:54:44.252243610Z" level=info msg="StopPodSandbox for \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\" returns successfully" Jan 29 10:54:44.252745 containerd[1726]: time="2025-01-29T10:54:44.252715290Z" level=info msg="StopPodSandbox for \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\"" Jan 29 10:54:44.253010 containerd[1726]: time="2025-01-29T10:54:44.252984810Z" level=info msg="TearDown network for sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" successfully" Jan 29 10:54:44.253010 containerd[1726]: time="2025-01-29T10:54:44.253005970Z" 
level=info msg="StopPodSandbox for \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" returns successfully" Jan 29 10:54:44.253976 containerd[1726]: time="2025-01-29T10:54:44.253841929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-nfw9l,Uid:ff0d24de-055d-4e71-bae6-576826e88d95,Namespace:calico-apiserver,Attempt:4,}" Jan 29 10:54:44.279714 containerd[1726]: time="2025-01-29T10:54:44.279650756Z" level=error msg="Failed to destroy network for sandbox \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.280040 containerd[1726]: time="2025-01-29T10:54:44.279980556Z" level=error msg="encountered an error cleaning up failed sandbox \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.280162 containerd[1726]: time="2025-01-29T10:54:44.280038316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-x8gcd,Uid:8d76515f-5553-4e43-85a9-9b91a1e79d22,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.280300 kubelet[3366]: E0129 10:54:44.280231 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.280351 kubelet[3366]: E0129 10:54:44.280308 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" Jan 29 10:54:44.280351 kubelet[3366]: E0129 10:54:44.280328 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" Jan 29 10:54:44.281054 kubelet[3366]: E0129 10:54:44.280391 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785d46db66-x8gcd_calico-apiserver(8d76515f-5553-4e43-85a9-9b91a1e79d22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785d46db66-x8gcd_calico-apiserver(8d76515f-5553-4e43-85a9-9b91a1e79d22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" podUID="8d76515f-5553-4e43-85a9-9b91a1e79d22" Jan 29 10:54:44.373422 containerd[1726]: time="2025-01-29T10:54:44.372978546Z" level=error msg="Failed to destroy network for sandbox \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.374279 containerd[1726]: time="2025-01-29T10:54:44.374004745Z" level=error msg="encountered an error cleaning up failed sandbox \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.374403 containerd[1726]: time="2025-01-29T10:54:44.374371145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lmknf,Uid:7648d1df-3533-48c0-904f-b5099718a0e0,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.374622 kubelet[3366]: E0129 10:54:44.374593 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:44.375745 kubelet[3366]: E0129 10:54:44.374926 3366 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lmknf" Jan 29 10:54:44.375745 kubelet[3366]: E0129 10:54:44.374957 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lmknf" Jan 29 10:54:44.375745 kubelet[3366]: E0129 10:54:44.375001 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lmknf_kube-system(7648d1df-3533-48c0-904f-b5099718a0e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lmknf_kube-system(7648d1df-3533-48c0-904f-b5099718a0e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lmknf" podUID="7648d1df-3533-48c0-904f-b5099718a0e0" Jan 29 10:54:44.515338 systemd[1]: run-netns-cni\x2d06f64796\x2d3602\x2d410d\x2d6b4c\x2dc90c3797b843.mount: Deactivated successfully. 
Jan 29 10:54:44.515699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522-shm.mount: Deactivated successfully. Jan 29 10:54:44.515751 systemd[1]: run-netns-cni\x2d6093bd9b\x2d8d23\x2df21f\x2d3f0d\x2d59be7095dc43.mount: Deactivated successfully. Jan 29 10:54:44.515796 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1-shm.mount: Deactivated successfully. Jan 29 10:54:44.515842 systemd[1]: run-netns-cni\x2dac4e1628\x2dfd0f\x2ddc0e\x2d085a\x2dd83dac56d928.mount: Deactivated successfully. Jan 29 10:54:44.515906 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0-shm.mount: Deactivated successfully. Jan 29 10:54:45.177560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount135916305.mount: Deactivated successfully. Jan 29 10:54:45.248596 kubelet[3366]: I0129 10:54:45.248563 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f" Jan 29 10:54:45.249549 containerd[1726]: time="2025-01-29T10:54:45.249287934Z" level=info msg="StopPodSandbox for \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\"" Jan 29 10:54:45.249549 containerd[1726]: time="2025-01-29T10:54:45.249470174Z" level=info msg="Ensure that sandbox 69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f in task-service has been cleanup successfully" Jan 29 10:54:45.252408 systemd[1]: run-netns-cni\x2da0adaa7a\x2d8319\x2d2f2b\x2de570\x2dcf58ad8572a2.mount: Deactivated successfully. 
Jan 29 10:54:45.252805 containerd[1726]: time="2025-01-29T10:54:45.252606572Z" level=info msg="TearDown network for sandbox \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\" successfully" Jan 29 10:54:45.252805 containerd[1726]: time="2025-01-29T10:54:45.252644372Z" level=info msg="StopPodSandbox for \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\" returns successfully" Jan 29 10:54:45.253348 containerd[1726]: time="2025-01-29T10:54:45.253267372Z" level=info msg="StopPodSandbox for \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\"" Jan 29 10:54:45.253573 containerd[1726]: time="2025-01-29T10:54:45.253554932Z" level=info msg="TearDown network for sandbox \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\" successfully" Jan 29 10:54:45.253716 containerd[1726]: time="2025-01-29T10:54:45.253603972Z" level=info msg="StopPodSandbox for \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\" returns successfully" Jan 29 10:54:45.254434 containerd[1726]: time="2025-01-29T10:54:45.254331411Z" level=info msg="StopPodSandbox for \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\"" Jan 29 10:54:45.255251 containerd[1726]: time="2025-01-29T10:54:45.255171171Z" level=info msg="TearDown network for sandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\" successfully" Jan 29 10:54:45.255251 containerd[1726]: time="2025-01-29T10:54:45.255207491Z" level=info msg="StopPodSandbox for \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\" returns successfully" Jan 29 10:54:45.256263 kubelet[3366]: I0129 10:54:45.255725 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc" Jan 29 10:54:45.256499 containerd[1726]: time="2025-01-29T10:54:45.256394370Z" level=info msg="StopPodSandbox for 
\"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\"" Jan 29 10:54:45.256752 containerd[1726]: time="2025-01-29T10:54:45.256544170Z" level=info msg="Ensure that sandbox e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc in task-service has been cleanup successfully" Jan 29 10:54:45.256752 containerd[1726]: time="2025-01-29T10:54:45.256607290Z" level=info msg="StopPodSandbox for \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\"" Jan 29 10:54:45.256752 containerd[1726]: time="2025-01-29T10:54:45.256705930Z" level=info msg="TearDown network for sandbox \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\" successfully" Jan 29 10:54:45.256752 containerd[1726]: time="2025-01-29T10:54:45.256719850Z" level=info msg="StopPodSandbox for \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\" returns successfully" Jan 29 10:54:45.257129 containerd[1726]: time="2025-01-29T10:54:45.257014010Z" level=info msg="StopPodSandbox for \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\"" Jan 29 10:54:45.257129 containerd[1726]: time="2025-01-29T10:54:45.257053490Z" level=info msg="TearDown network for sandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" successfully" Jan 29 10:54:45.257129 containerd[1726]: time="2025-01-29T10:54:45.257068770Z" level=info msg="StopPodSandbox for \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" returns successfully" Jan 29 10:54:45.257129 containerd[1726]: time="2025-01-29T10:54:45.257092250Z" level=info msg="TearDown network for sandbox \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\" successfully" Jan 29 10:54:45.257129 containerd[1726]: time="2025-01-29T10:54:45.257102010Z" level=info msg="StopPodSandbox for \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\" returns successfully" Jan 29 10:54:45.259804 containerd[1726]: time="2025-01-29T10:54:45.257766850Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-x8gcd,Uid:8d76515f-5553-4e43-85a9-9b91a1e79d22,Namespace:calico-apiserver,Attempt:4,}"
Jan 29 10:54:45.259804 containerd[1726]: time="2025-01-29T10:54:45.258091329Z" level=info msg="StopPodSandbox for \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\""
Jan 29 10:54:45.259804 containerd[1726]: time="2025-01-29T10:54:45.258165529Z" level=info msg="TearDown network for sandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\" successfully"
Jan 29 10:54:45.259804 containerd[1726]: time="2025-01-29T10:54:45.258175009Z" level=info msg="StopPodSandbox for \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\" returns successfully"
Jan 29 10:54:45.259071 systemd[1]: run-netns-cni\x2da03acd79\x2deb8c\x2d6215\x2dc28c\x2dbda5e47a1f63.mount: Deactivated successfully.
Jan 29 10:54:45.260673 containerd[1726]: time="2025-01-29T10:54:45.260530488Z" level=info msg="StopPodSandbox for \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\""
Jan 29 10:54:45.260673 containerd[1726]: time="2025-01-29T10:54:45.260614808Z" level=info msg="TearDown network for sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" successfully"
Jan 29 10:54:45.260673 containerd[1726]: time="2025-01-29T10:54:45.260624128Z" level=info msg="StopPodSandbox for \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" returns successfully"
Jan 29 10:54:45.261833 containerd[1726]: time="2025-01-29T10:54:45.261592648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lmknf,Uid:7648d1df-3533-48c0-904f-b5099718a0e0,Namespace:kube-system,Attempt:4,}"
Jan 29 10:54:46.854155 containerd[1726]: time="2025-01-29T10:54:46.854093427Z" level=error msg="Failed to destroy network for sandbox \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:46.854551 containerd[1726]: time="2025-01-29T10:54:46.854420267Z" level=error msg="encountered an error cleaning up failed sandbox \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:46.854551 containerd[1726]: time="2025-01-29T10:54:46.854474187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdbfb578c-sx269,Uid:24a732d5-74a3-4958-bf3c-24c35316b990,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:46.856717 kubelet[3366]: E0129 10:54:46.855030 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:46.856717 kubelet[3366]: E0129 10:54:46.855089 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269"
Jan 29 10:54:46.856717 kubelet[3366]: E0129 10:54:46.855110 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269"
Jan 29 10:54:46.857133 kubelet[3366]: E0129 10:54:46.855147 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fdbfb578c-sx269_calico-system(24a732d5-74a3-4958-bf3c-24c35316b990)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fdbfb578c-sx269_calico-system(24a732d5-74a3-4958-bf3c-24c35316b990)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" podUID="24a732d5-74a3-4958-bf3c-24c35316b990"
Jan 29 10:54:46.857346 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170-shm.mount: Deactivated successfully.
Jan 29 10:54:46.859717 containerd[1726]: time="2025-01-29T10:54:46.859078624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:54:46.885657 containerd[1726]: time="2025-01-29T10:54:46.885596771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762"
Jan 29 10:54:46.907955 containerd[1726]: time="2025-01-29T10:54:46.907917679Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:54:46.926577 containerd[1726]: time="2025-01-29T10:54:46.926535669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 10:54:46.929385 containerd[1726]: time="2025-01-29T10:54:46.929339748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 17.799040178s"
Jan 29 10:54:46.929385 containerd[1726]: time="2025-01-29T10:54:46.929380188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\""
Jan 29 10:54:46.942550 containerd[1726]: time="2025-01-29T10:54:46.942499461Z" level=info msg="CreateContainer within sandbox \"99f36701f52cb5c1deee9adde3d141f982d76aac76eb8b83a97409705fb45dc4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 29 10:54:46.970447 containerd[1726]: time="2025-01-29T10:54:46.970404487Z" level=error msg="Failed to destroy network for sandbox \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:46.971458 containerd[1726]: time="2025-01-29T10:54:46.971431286Z" level=error msg="encountered an error cleaning up failed sandbox \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:46.971619 containerd[1726]: time="2025-01-29T10:54:46.971576286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-nfw9l,Uid:ff0d24de-055d-4e71-bae6-576826e88d95,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:46.974022 kubelet[3366]: E0129 10:54:46.973684 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:46.974022 kubelet[3366]: E0129 10:54:46.973749 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l"
Jan 29 10:54:46.974022 kubelet[3366]: E0129 10:54:46.973769 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l"
Jan 29 10:54:46.974162 kubelet[3366]: E0129 10:54:46.973813 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785d46db66-nfw9l_calico-apiserver(ff0d24de-055d-4e71-bae6-576826e88d95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785d46db66-nfw9l_calico-apiserver(ff0d24de-055d-4e71-bae6-576826e88d95)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" podUID="ff0d24de-055d-4e71-bae6-576826e88d95"
Jan 29 10:54:46.997890 containerd[1726]: time="2025-01-29T10:54:46.997719873Z" level=info msg="CreateContainer within sandbox \"99f36701f52cb5c1deee9adde3d141f982d76aac76eb8b83a97409705fb45dc4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"505ae507e2def43c029872c22eb244d8832b61b438210b48831162af3e0c5de3\""
Jan 29 10:54:46.999848 containerd[1726]: time="2025-01-29T10:54:46.999582552Z" level=info msg="StartContainer for \"505ae507e2def43c029872c22eb244d8832b61b438210b48831162af3e0c5de3\""
Jan 29 10:54:47.016796 containerd[1726]: time="2025-01-29T10:54:47.016648743Z" level=error msg="Failed to destroy network for sandbox \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.020503 containerd[1726]: time="2025-01-29T10:54:47.020452101Z" level=error msg="encountered an error cleaning up failed sandbox \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.020616 containerd[1726]: time="2025-01-29T10:54:47.020533381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct4dr,Uid:85d938fd-edbc-4618-8500-89676d3770ef,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.021867 kubelet[3366]: E0129 10:54:47.020831 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.021992 kubelet[3366]: E0129 10:54:47.021902 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct4dr"
Jan 29 10:54:47.021992 kubelet[3366]: E0129 10:54:47.021922 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ct4dr"
Jan 29 10:54:47.021992 kubelet[3366]: E0129 10:54:47.021977 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ct4dr_calico-system(85d938fd-edbc-4618-8500-89676d3770ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ct4dr_calico-system(85d938fd-edbc-4618-8500-89676d3770ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ct4dr" podUID="85d938fd-edbc-4618-8500-89676d3770ef"
Jan 29 10:54:47.039884 containerd[1726]: time="2025-01-29T10:54:47.039769451Z" level=error msg="Failed to destroy network for sandbox \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.042068 containerd[1726]: time="2025-01-29T10:54:47.041877050Z" level=error msg="Failed to destroy network for sandbox \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.042640 containerd[1726]: time="2025-01-29T10:54:47.042475970Z" level=error msg="encountered an error cleaning up failed sandbox \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.042640 containerd[1726]: time="2025-01-29T10:54:47.042539290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.043199 kubelet[3366]: E0129 10:54:47.042753 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.043280 containerd[1726]: time="2025-01-29T10:54:47.043086089Z" level=error msg="encountered an error cleaning up failed sandbox \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.043280 containerd[1726]: time="2025-01-29T10:54:47.043133649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-x8gcd,Uid:8d76515f-5553-4e43-85a9-9b91a1e79d22,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.044018 kubelet[3366]: E0129 10:54:47.042810 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9l7s"
Jan 29 10:54:47.044091 kubelet[3366]: E0129 10:54:47.044021 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9l7s"
Jan 29 10:54:47.044252 kubelet[3366]: E0129 10:54:47.043967 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.044252 kubelet[3366]: E0129 10:54:47.044159 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd"
Jan 29 10:54:47.044252 kubelet[3366]: E0129 10:54:47.044176 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd"
Jan 29 10:54:47.045053 kubelet[3366]: E0129 10:54:47.044219 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-785d46db66-x8gcd_calico-apiserver(8d76515f-5553-4e43-85a9-9b91a1e79d22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-785d46db66-x8gcd_calico-apiserver(8d76515f-5553-4e43-85a9-9b91a1e79d22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" podUID="8d76515f-5553-4e43-85a9-9b91a1e79d22"
Jan 29 10:54:47.045053 kubelet[3366]: E0129 10:54:47.044340 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-v9l7s_kube-system(2b04ba18-4078-4572-b181-dabad7c530d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-v9l7s_kube-system(2b04ba18-4078-4572-b181-dabad7c530d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v9l7s" podUID="2b04ba18-4078-4572-b181-dabad7c530d3"
Jan 29 10:54:47.052136 systemd[1]: Started cri-containerd-505ae507e2def43c029872c22eb244d8832b61b438210b48831162af3e0c5de3.scope - libcontainer container 505ae507e2def43c029872c22eb244d8832b61b438210b48831162af3e0c5de3.
Jan 29 10:54:47.055460 containerd[1726]: time="2025-01-29T10:54:47.055259443Z" level=error msg="Failed to destroy network for sandbox \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.055945 containerd[1726]: time="2025-01-29T10:54:47.055795843Z" level=error msg="encountered an error cleaning up failed sandbox \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.055945 containerd[1726]: time="2025-01-29T10:54:47.055898363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lmknf,Uid:7648d1df-3533-48c0-904f-b5099718a0e0,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.056493 kubelet[3366]: E0129 10:54:47.056329 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 10:54:47.056493 kubelet[3366]: E0129 10:54:47.056382 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lmknf"
Jan 29 10:54:47.056493 kubelet[3366]: E0129 10:54:47.056402 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lmknf"
Jan 29 10:54:47.056707 kubelet[3366]: E0129 10:54:47.056438 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lmknf_kube-system(7648d1df-3533-48c0-904f-b5099718a0e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lmknf_kube-system(7648d1df-3533-48c0-904f-b5099718a0e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lmknf" podUID="7648d1df-3533-48c0-904f-b5099718a0e0"
Jan 29 10:54:47.086534 containerd[1726]: time="2025-01-29T10:54:47.086452467Z" level=info msg="StartContainer for \"505ae507e2def43c029872c22eb244d8832b61b438210b48831162af3e0c5de3\" returns successfully"
Jan 29 10:54:47.269618 kubelet[3366]: I0129 10:54:47.269583 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f"
Jan 29 10:54:47.270290 containerd[1726]: time="2025-01-29T10:54:47.270261932Z" level=info msg="StopPodSandbox for \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\""
Jan 29 10:54:47.270535 containerd[1726]: time="2025-01-29T10:54:47.270517172Z" level=info msg="Ensure that sandbox e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f in task-service has been cleanup successfully"
Jan 29 10:54:47.270766 containerd[1726]: time="2025-01-29T10:54:47.270731852Z" level=info msg="TearDown network for sandbox \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\" successfully"
Jan 29 10:54:47.270849 containerd[1726]: time="2025-01-29T10:54:47.270836572Z" level=info msg="StopPodSandbox for \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\" returns successfully"
Jan 29 10:54:47.271385 containerd[1726]: time="2025-01-29T10:54:47.271352652Z" level=info msg="StopPodSandbox for \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\""
Jan 29 10:54:47.271451 containerd[1726]: time="2025-01-29T10:54:47.271439532Z" level=info msg="TearDown network for sandbox \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\" successfully"
Jan 29 10:54:47.271655 containerd[1726]: time="2025-01-29T10:54:47.271450732Z" level=info msg="StopPodSandbox for \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\" returns successfully"
Jan 29 10:54:47.274083 containerd[1726]: time="2025-01-29T10:54:47.274049490Z" level=info msg="StopPodSandbox for \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\""
Jan 29 10:54:47.275134 containerd[1726]: time="2025-01-29T10:54:47.275099610Z" level=info msg="TearDown network for sandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\" successfully"
Jan 29 10:54:47.275216 containerd[1726]: time="2025-01-29T10:54:47.275169130Z" level=info msg="StopPodSandbox for \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\" returns successfully"
Jan 29 10:54:47.275498 containerd[1726]: time="2025-01-29T10:54:47.275472170Z" level=info msg="StopPodSandbox for \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\""
Jan 29 10:54:47.275629 containerd[1726]: time="2025-01-29T10:54:47.275614890Z" level=info msg="TearDown network for sandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" successfully"
Jan 29 10:54:47.275687 containerd[1726]: time="2025-01-29T10:54:47.275674409Z" level=info msg="StopPodSandbox for \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" returns successfully"
Jan 29 10:54:47.278321 containerd[1726]: time="2025-01-29T10:54:47.278285848Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\""
Jan 29 10:54:47.278436 containerd[1726]: time="2025-01-29T10:54:47.278410608Z" level=info msg="TearDown network for sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" successfully"
Jan 29 10:54:47.278436 containerd[1726]: time="2025-01-29T10:54:47.278429408Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" returns successfully"
Jan 29 10:54:47.278863 containerd[1726]: time="2025-01-29T10:54:47.278830528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:5,}"
Jan 29 10:54:47.280739 kubelet[3366]: I0129 10:54:47.280337 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041"
Jan 29 10:54:47.281265 containerd[1726]: time="2025-01-29T10:54:47.281231447Z" level=info msg="StopPodSandbox for \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\""
Jan 29 10:54:47.281425 containerd[1726]: time="2025-01-29T10:54:47.281394127Z" level=info msg="Ensure that sandbox 87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041 in task-service has been cleanup successfully"
Jan 29 10:54:47.282775 containerd[1726]: time="2025-01-29T10:54:47.281737166Z" level=info msg="TearDown network for sandbox \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\" successfully"
Jan 29 10:54:47.282775 containerd[1726]: time="2025-01-29T10:54:47.281773006Z" level=info msg="StopPodSandbox for \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\" returns successfully"
Jan 29 10:54:47.283291 containerd[1726]: time="2025-01-29T10:54:47.283268886Z" level=info msg="StopPodSandbox for \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\""
Jan 29 10:54:47.283841 containerd[1726]: time="2025-01-29T10:54:47.283821165Z" level=info msg="TearDown network for sandbox \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\" successfully"
Jan 29 10:54:47.283966 containerd[1726]: time="2025-01-29T10:54:47.283950685Z" level=info msg="StopPodSandbox for \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\" returns successfully"
Jan 29 10:54:47.288252 containerd[1726]: time="2025-01-29T10:54:47.288212043Z" level=info msg="StopPodSandbox for \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\""
Jan 29 10:54:47.288343 containerd[1726]: time="2025-01-29T10:54:47.288321003Z" level=info msg="TearDown network for sandbox \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\" successfully"
Jan 29 10:54:47.288343 containerd[1726]: time="2025-01-29T10:54:47.288335363Z" level=info msg="StopPodSandbox for \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\" returns successfully"
Jan 29 10:54:47.288639 kubelet[3366]: I0129 10:54:47.288553 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-grvgj" podStartSLOduration=1.6082752249999999 podStartE2EDuration="29.288535723s" podCreationTimestamp="2025-01-29 10:54:18 +0000 UTC" firstStartedPulling="2025-01-29 10:54:19.25002817 +0000 UTC m=+13.331844751" lastFinishedPulling="2025-01-29 10:54:46.930288628 +0000 UTC m=+41.012105249" observedRunningTime="2025-01-29 10:54:47.285461124 +0000 UTC m=+41.367277745" watchObservedRunningTime="2025-01-29 10:54:47.288535723 +0000 UTC m=+41.370352344"
Jan 29 10:54:47.289240 containerd[1726]: time="2025-01-29T10:54:47.289215523Z" level=info msg="StopPodSandbox for \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\""
Jan 29 10:54:47.289403 containerd[1726]: time="2025-01-29T10:54:47.289387242Z" level=info msg="TearDown network for sandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\" successfully"
Jan 29 10:54:47.289462 containerd[1726]: time="2025-01-29T10:54:47.289450122Z" level=info msg="StopPodSandbox for \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\" returns successfully"
Jan 29 10:54:47.290215 containerd[1726]: time="2025-01-29T10:54:47.290113522Z" level=info msg="StopPodSandbox for \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\""
Jan 29 10:54:47.290379 containerd[1726]: time="2025-01-29T10:54:47.290316602Z" level=info msg="TearDown network for sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" successfully"
Jan 29 10:54:47.290379 containerd[1726]: time="2025-01-29T10:54:47.290331802Z" level=info msg="StopPodSandbox for \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" returns successfully"
Jan 29 10:54:47.291480 containerd[1726]: time="2025-01-29T10:54:47.291284961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lmknf,Uid:7648d1df-3533-48c0-904f-b5099718a0e0,Namespace:kube-system,Attempt:5,}"
Jan 29 10:54:47.291871 kubelet[3366]: I0129 10:54:47.291726 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906"
Jan 29 10:54:47.292848 containerd[1726]: time="2025-01-29T10:54:47.292826041Z" level=info msg="StopPodSandbox for \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\""
Jan 29 10:54:47.293775 containerd[1726]: time="2025-01-29T10:54:47.293720880Z" level=info msg="Ensure that sandbox 1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906 in task-service has been cleanup successfully"
Jan 29 10:54:47.297440 containerd[1726]: time="2025-01-29T10:54:47.297305238Z" level=info msg="TearDown network for sandbox \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\" successfully"
Jan 29 10:54:47.297440 containerd[1726]: time="2025-01-29T10:54:47.297343558Z" level=info msg="StopPodSandbox for \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\" returns successfully"
Jan 29 10:54:47.298031 containerd[1726]: time="2025-01-29T10:54:47.297903798Z" level=info msg="StopPodSandbox for \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\""
Jan 29 10:54:47.298031 containerd[1726]: time="2025-01-29T10:54:47.297989918Z" level=info msg="TearDown network for sandbox \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\" successfully"
Jan 29 10:54:47.298031 containerd[1726]: time="2025-01-29T10:54:47.297999198Z" level=info msg="StopPodSandbox for \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\" returns successfully"
Jan 29 10:54:47.298894 kubelet[3366]: I0129 10:54:47.298451 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad"
Jan 29 10:54:47.299076 containerd[1726]: time="2025-01-29T10:54:47.299049317Z" level=info msg="StopPodSandbox for \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\""
Jan 29 10:54:47.299152 containerd[1726]: time="2025-01-29T10:54:47.299131437Z" level=info msg="TearDown network for sandbox \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\" successfully"
Jan 29 10:54:47.299152 containerd[1726]: time="2025-01-29T10:54:47.299146717Z" level=info msg="StopPodSandbox for \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\" returns successfully"
Jan 29 10:54:47.299323 containerd[1726]: time="2025-01-29T10:54:47.299301557Z" level=info msg="StopPodSandbox for \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\""
Jan 29 10:54:47.299481 containerd[1726]: time="2025-01-29T10:54:47.299459517Z" level=info msg="Ensure that sandbox e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad in task-service has been cleanup successfully"
Jan 29 10:54:47.299838 containerd[1726]: time="2025-01-29T10:54:47.299693757Z" level=info msg="TearDown network for sandbox \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\" successfully"
Jan 29 10:54:47.299838 containerd[1726]: time="2025-01-29T10:54:47.299714157Z" level=info msg="StopPodSandbox for \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\" returns successfully"
Jan 29 10:54:47.300029 containerd[1726]: time="2025-01-29T10:54:47.299999117Z" level=info msg="StopPodSandbox for \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\""
Jan 29 10:54:47.300098 containerd[1726]: time="2025-01-29T10:54:47.300074277Z" level=info msg="TearDown network for sandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\" successfully"
Jan 29 10:54:47.300098 containerd[1726]: time="2025-01-29T10:54:47.300089437Z" level=info msg="StopPodSandbox for \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\" returns successfully"
Jan 29 10:54:47.300552 containerd[1726]: time="2025-01-29T10:54:47.300522237Z" level=info msg="StopPodSandbox for \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\""
Jan 29 10:54:47.300621 containerd[1726]: time="2025-01-29T10:54:47.300600717Z" level=info msg="TearDown network for sandbox \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\" successfully"
Jan 29 10:54:47.300621 containerd[1726]: time="2025-01-29T10:54:47.300616117Z" level=info msg="StopPodSandbox for \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\" returns successfully"
Jan 29 10:54:47.300696 containerd[1726]: time="2025-01-29T10:54:47.300660837Z" level=info msg="StopPodSandbox for \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\""
Jan 29 10:54:47.300726 containerd[1726]: time="2025-01-29T10:54:47.300713517Z" level=info msg="TearDown network for sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" successfully"
Jan 29 10:54:47.300726 containerd[1726]: time="2025-01-29T10:54:47.300721717Z" level=info msg="StopPodSandbox for \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" returns successfully"
Jan 29 10:54:47.301825 containerd[1726]: time="2025-01-29T10:54:47.301796276Z" level=info msg="StopPodSandbox for \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\""
Jan 29 10:54:47.302390 containerd[1726]: time="2025-01-29T10:54:47.301910716Z" level=info msg="TearDown network for sandbox \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\" successfully"
Jan 29 10:54:47.303116 containerd[1726]: time="2025-01-29T10:54:47.302455796Z" level=info msg="StopPodSandbox for \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\" returns successfully"
Jan 29 10:54:47.305825 containerd[1726]: time="2025-01-29T10:54:47.304519475Z" level=info msg="StopPodSandbox for \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\""
Jan 29 10:54:47.305901 containerd[1726]: time="2025-01-29T10:54:47.305683874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-nfw9l,Uid:ff0d24de-055d-4e71-bae6-576826e88d95,Namespace:calico-apiserver,Attempt:5,}"
Jan 29 10:54:47.312057 containerd[1726]: time="2025-01-29T10:54:47.306157874Z" level=info msg="TearDown network for sandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\" successfully" Jan 29 10:54:47.312057 containerd[1726]: time="2025-01-29T10:54:47.306180794Z" level=info msg="StopPodSandbox for \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\" returns successfully" Jan 29 10:54:47.312550 containerd[1726]: time="2025-01-29T10:54:47.312514991Z" level=info msg="StopPodSandbox for \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\"" Jan 29 10:54:47.312606 containerd[1726]: time="2025-01-29T10:54:47.312595990Z" level=info msg="TearDown network for sandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" successfully" Jan 29 10:54:47.312640 containerd[1726]: time="2025-01-29T10:54:47.312605510Z" level=info msg="StopPodSandbox for \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" returns successfully" Jan 29 10:54:47.314905 containerd[1726]: time="2025-01-29T10:54:47.313823670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct4dr,Uid:85d938fd-edbc-4618-8500-89676d3770ef,Namespace:calico-system,Attempt:5,}" Jan 29 10:54:47.320558 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 10:54:47.320653 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 29 10:54:47.320674 kubelet[3366]: I0129 10:54:47.319376 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92" Jan 29 10:54:47.321050 containerd[1726]: time="2025-01-29T10:54:47.321005106Z" level=info msg="StopPodSandbox for \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\"" Jan 29 10:54:47.321257 containerd[1726]: time="2025-01-29T10:54:47.321167066Z" level=info msg="Ensure that sandbox 3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92 in task-service has been cleanup successfully" Jan 29 10:54:47.321554 containerd[1726]: time="2025-01-29T10:54:47.321456946Z" level=info msg="TearDown network for sandbox \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\" successfully" Jan 29 10:54:47.321554 containerd[1726]: time="2025-01-29T10:54:47.321483546Z" level=info msg="StopPodSandbox for \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\" returns successfully" Jan 29 10:54:47.322313 containerd[1726]: time="2025-01-29T10:54:47.322232586Z" level=info msg="StopPodSandbox for \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\"" Jan 29 10:54:47.322313 containerd[1726]: time="2025-01-29T10:54:47.322326865Z" level=info msg="TearDown network for sandbox \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\" successfully" Jan 29 10:54:47.322313 containerd[1726]: time="2025-01-29T10:54:47.322337305Z" level=info msg="StopPodSandbox for \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\" returns successfully" Jan 29 10:54:47.324253 containerd[1726]: time="2025-01-29T10:54:47.324187064Z" level=info msg="StopPodSandbox for \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\"" Jan 29 10:54:47.324339 containerd[1726]: time="2025-01-29T10:54:47.324264064Z" level=info msg="TearDown network for sandbox 
\"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\" successfully" Jan 29 10:54:47.324339 containerd[1726]: time="2025-01-29T10:54:47.324273144Z" level=info msg="StopPodSandbox for \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\" returns successfully" Jan 29 10:54:47.327092 containerd[1726]: time="2025-01-29T10:54:47.326670423Z" level=info msg="StopPodSandbox for \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\"" Jan 29 10:54:47.327439 containerd[1726]: time="2025-01-29T10:54:47.327092823Z" level=info msg="TearDown network for sandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\" successfully" Jan 29 10:54:47.327439 containerd[1726]: time="2025-01-29T10:54:47.327109943Z" level=info msg="StopPodSandbox for \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\" returns successfully" Jan 29 10:54:47.328569 containerd[1726]: time="2025-01-29T10:54:47.328401022Z" level=info msg="StopPodSandbox for \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\"" Jan 29 10:54:47.328569 containerd[1726]: time="2025-01-29T10:54:47.328494302Z" level=info msg="TearDown network for sandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" successfully" Jan 29 10:54:47.328569 containerd[1726]: time="2025-01-29T10:54:47.328515462Z" level=info msg="StopPodSandbox for \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" returns successfully" Jan 29 10:54:47.330291 containerd[1726]: time="2025-01-29T10:54:47.330092101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-x8gcd,Uid:8d76515f-5553-4e43-85a9-9b91a1e79d22,Namespace:calico-apiserver,Attempt:5,}" Jan 29 10:54:47.331603 kubelet[3366]: I0129 10:54:47.331537 3366 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170" Jan 29 10:54:47.332906 containerd[1726]: 
time="2025-01-29T10:54:47.332675580Z" level=info msg="StopPodSandbox for \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\"" Jan 29 10:54:47.333600 containerd[1726]: time="2025-01-29T10:54:47.333366060Z" level=info msg="Ensure that sandbox aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170 in task-service has been cleanup successfully" Jan 29 10:54:47.335107 containerd[1726]: time="2025-01-29T10:54:47.334665419Z" level=info msg="TearDown network for sandbox \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\" successfully" Jan 29 10:54:47.335107 containerd[1726]: time="2025-01-29T10:54:47.334706739Z" level=info msg="StopPodSandbox for \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\" returns successfully" Jan 29 10:54:47.336202 containerd[1726]: time="2025-01-29T10:54:47.336090618Z" level=info msg="StopPodSandbox for \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\"" Jan 29 10:54:47.336987 containerd[1726]: time="2025-01-29T10:54:47.336946538Z" level=info msg="TearDown network for sandbox \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\" successfully" Jan 29 10:54:47.337115 containerd[1726]: time="2025-01-29T10:54:47.337074738Z" level=info msg="StopPodSandbox for \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\" returns successfully" Jan 29 10:54:47.338097 containerd[1726]: time="2025-01-29T10:54:47.337940577Z" level=info msg="StopPodSandbox for \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\"" Jan 29 10:54:47.338849 containerd[1726]: time="2025-01-29T10:54:47.338352897Z" level=info msg="TearDown network for sandbox \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\" successfully" Jan 29 10:54:47.338849 containerd[1726]: time="2025-01-29T10:54:47.338371457Z" level=info msg="StopPodSandbox for \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\" returns successfully" Jan 29 10:54:47.339960 
containerd[1726]: time="2025-01-29T10:54:47.339633177Z" level=info msg="StopPodSandbox for \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\"" Jan 29 10:54:47.339960 containerd[1726]: time="2025-01-29T10:54:47.339713776Z" level=info msg="TearDown network for sandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\" successfully" Jan 29 10:54:47.339960 containerd[1726]: time="2025-01-29T10:54:47.339725336Z" level=info msg="StopPodSandbox for \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\" returns successfully" Jan 29 10:54:47.340494 containerd[1726]: time="2025-01-29T10:54:47.340388576Z" level=info msg="StopPodSandbox for \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\"" Jan 29 10:54:47.340494 containerd[1726]: time="2025-01-29T10:54:47.340471096Z" level=info msg="TearDown network for sandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" successfully" Jan 29 10:54:47.340494 containerd[1726]: time="2025-01-29T10:54:47.340480856Z" level=info msg="StopPodSandbox for \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" returns successfully" Jan 29 10:54:47.342605 containerd[1726]: time="2025-01-29T10:54:47.342152255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdbfb578c-sx269,Uid:24a732d5-74a3-4958-bf3c-24c35316b990,Namespace:calico-system,Attempt:5,}" Jan 29 10:54:47.594920 containerd[1726]: 2025-01-29 10:54:47.473 [INFO][5143] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8" Jan 29 10:54:47.594920 containerd[1726]: 2025-01-29 10:54:47.473 [INFO][5143] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8" iface="eth0" netns="/var/run/netns/cni-a8da33b0-c9e9-4d37-000a-376fbb158efb" Jan 29 10:54:47.594920 containerd[1726]: 2025-01-29 10:54:47.473 [INFO][5143] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8" iface="eth0" netns="/var/run/netns/cni-a8da33b0-c9e9-4d37-000a-376fbb158efb" Jan 29 10:54:47.594920 containerd[1726]: 2025-01-29 10:54:47.474 [INFO][5143] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8" iface="eth0" netns="/var/run/netns/cni-a8da33b0-c9e9-4d37-000a-376fbb158efb" Jan 29 10:54:47.594920 containerd[1726]: 2025-01-29 10:54:47.474 [INFO][5143] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8" Jan 29 10:54:47.594920 containerd[1726]: 2025-01-29 10:54:47.474 [INFO][5143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8" Jan 29 10:54:47.594920 containerd[1726]: 2025-01-29 10:54:47.548 [INFO][5166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8" HandleID="k8s-pod-network.b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0" Jan 29 10:54:47.594920 containerd[1726]: 2025-01-29 10:54:47.554 [INFO][5166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 10:54:47.594920 containerd[1726]: 2025-01-29 10:54:47.555 [INFO][5166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 10:54:47.594920 containerd[1726]: 2025-01-29 10:54:47.579 [WARNING][5166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8" HandleID="k8s-pod-network.b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0" Jan 29 10:54:47.594920 containerd[1726]: 2025-01-29 10:54:47.579 [INFO][5166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8" HandleID="k8s-pod-network.b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0" Jan 29 10:54:47.594920 containerd[1726]: 2025-01-29 10:54:47.583 [INFO][5166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 10:54:47.594920 containerd[1726]: 2025-01-29 10:54:47.592 [INFO][5143] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8" Jan 29 10:54:47.609881 containerd[1726]: time="2025-01-29T10:54:47.609310438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:47.612110 kubelet[3366]: E0129 10:54:47.610324 3366 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 10:54:47.612110 kubelet[3366]: E0129 10:54:47.610390 3366 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9l7s" Jan 29 10:54:47.612110 kubelet[3366]: E0129 10:54:47.610409 3366 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v9l7s" Jan 29 10:54:47.612310 kubelet[3366]: E0129 10:54:47.610455 3366 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-v9l7s_kube-system(2b04ba18-4078-4572-b181-dabad7c530d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-v9l7s_kube-system(2b04ba18-4078-4572-b181-dabad7c530d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b19aac451d5c47fe3b251dd2de45c244e6f21875a4bb872c9bf718a600f865e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v9l7s" podUID="2b04ba18-4078-4572-b181-dabad7c530d3" Jan 29 10:54:47.788506 systemd[1]: run-netns-cni\x2dff717591\x2d34bb\x2dfdfc\x2d3ba0\x2d550662dae08b.mount: Deactivated successfully. Jan 29 10:54:47.788783 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906-shm.mount: Deactivated successfully. Jan 29 10:54:47.788961 systemd[1]: run-netns-cni\x2d59c225d0\x2dfa52\x2debc3\x2df67e\x2d15b4ff1b2652.mount: Deactivated successfully. 
Jan 29 10:54:47.887766 systemd-networkd[1334]: cali7517c85c8a2: Link UP Jan 29 10:54:47.890000 systemd-networkd[1334]: cali7517c85c8a2: Gained carrier Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.482 [INFO][5151] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.509 [INFO][5151] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-eth0 coredns-6f6b679f8f- kube-system 7648d1df-3533-48c0-904f-b5099718a0e0 663 0 2025-01-29 10:54:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.0-a-2e829ed2e0 coredns-6f6b679f8f-lmknf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7517c85c8a2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" Namespace="kube-system" Pod="coredns-6f6b679f8f-lmknf" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-" Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.509 [INFO][5151] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" Namespace="kube-system" Pod="coredns-6f6b679f8f-lmknf" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-eth0" Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.690 [INFO][5217] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" HandleID="k8s-pod-network.ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-eth0" Jan 29 
10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.759 [INFO][5217] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" HandleID="k8s-pod-network.ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003889c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.0-a-2e829ed2e0", "pod":"coredns-6f6b679f8f-lmknf", "timestamp":"2025-01-29 10:54:47.690804156 +0000 UTC"}, Hostname:"ci-4186.1.0-a-2e829ed2e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.759 [INFO][5217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.759 [INFO][5217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.759 [INFO][5217] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-2e829ed2e0' Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.766 [INFO][5217] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.840 [INFO][5217] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.850 [INFO][5217] ipam/ipam.go 489: Trying affinity for 192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.852 [INFO][5217] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.854 [INFO][5217] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.854 [INFO][5217] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.128/26 handle="k8s-pod-network.ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.856 [INFO][5217] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.863 [INFO][5217] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.128/26 handle="k8s-pod-network.ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.872 [INFO][5217] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.20.128/26] block=192.168.20.128/26 handle="k8s-pod-network.ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.872 [INFO][5217] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.128/26] handle="k8s-pod-network.ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.873 [INFO][5217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 10:54:47.907211 containerd[1726]: 2025-01-29 10:54:47.873 [INFO][5217] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.128/26] IPv6=[] ContainerID="ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" HandleID="k8s-pod-network.ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-eth0" Jan 29 10:54:47.908756 containerd[1726]: 2025-01-29 10:54:47.877 [INFO][5151] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" Namespace="kube-system" Pod="coredns-6f6b679f8f-lmknf" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7648d1df-3533-48c0-904f-b5099718a0e0", ResourceVersion:"663", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 54, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-2e829ed2e0", ContainerID:"", Pod:"coredns-6f6b679f8f-lmknf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7517c85c8a2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:54:47.908756 containerd[1726]: 2025-01-29 10:54:47.877 [INFO][5151] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.128/32] ContainerID="ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" Namespace="kube-system" Pod="coredns-6f6b679f8f-lmknf" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-eth0" Jan 29 10:54:47.908756 containerd[1726]: 2025-01-29 10:54:47.877 [INFO][5151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7517c85c8a2 ContainerID="ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" Namespace="kube-system" Pod="coredns-6f6b679f8f-lmknf" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-eth0" Jan 29 10:54:47.908756 containerd[1726]: 2025-01-29 10:54:47.890 [INFO][5151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" Namespace="kube-system" Pod="coredns-6f6b679f8f-lmknf" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-eth0" Jan 29 10:54:47.908756 containerd[1726]: 2025-01-29 10:54:47.890 [INFO][5151] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" Namespace="kube-system" Pod="coredns-6f6b679f8f-lmknf" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7648d1df-3533-48c0-904f-b5099718a0e0", ResourceVersion:"663", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 54, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-2e829ed2e0", ContainerID:"ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c", Pod:"coredns-6f6b679f8f-lmknf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7517c85c8a2", MAC:"3a:fc:ca:9b:d8:71", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:54:47.908756 containerd[1726]: 2025-01-29 10:54:47.904 [INFO][5151] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c" Namespace="kube-system" Pod="coredns-6f6b679f8f-lmknf" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--lmknf-eth0" Jan 29 10:54:47.935591 containerd[1726]: time="2025-01-29T10:54:47.935472189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:54:47.935799 containerd[1726]: time="2025-01-29T10:54:47.935592709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:54:47.935799 containerd[1726]: time="2025-01-29T10:54:47.935625909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:47.935799 containerd[1726]: time="2025-01-29T10:54:47.935745629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:47.973060 systemd[1]: Started cri-containerd-ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c.scope - libcontainer container ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c. 
Jan 29 10:54:48.007835 systemd-networkd[1334]: cali0f3afd5fae4: Link UP Jan 29 10:54:48.009513 systemd-networkd[1334]: cali0f3afd5fae4: Gained carrier Jan 29 10:54:48.034333 containerd[1726]: time="2025-01-29T10:54:48.034224459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lmknf,Uid:7648d1df-3533-48c0-904f-b5099718a0e0,Namespace:kube-system,Attempt:5,} returns sandbox id \"ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c\"" Jan 29 10:54:48.042160 containerd[1726]: time="2025-01-29T10:54:48.042120094Z" level=info msg="CreateContainer within sandbox \"ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.535 [INFO][5172] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.572 [INFO][5172] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-eth0 calico-apiserver-785d46db66- calico-apiserver ff0d24de-055d-4e71-bae6-576826e88d95 666 0 2025-01-29 10:54:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:785d46db66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.0-a-2e829ed2e0 calico-apiserver-785d46db66-nfw9l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f3afd5fae4 [] []}} ContainerID="d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-nfw9l" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-" Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.572 
[INFO][5172] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-nfw9l" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-eth0" Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.687 [INFO][5226] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" HandleID="k8s-pod-network.d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-eth0" Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.761 [INFO][5226] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" HandleID="k8s-pod-network.d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003baba0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.0-a-2e829ed2e0", "pod":"calico-apiserver-785d46db66-nfw9l", "timestamp":"2025-01-29 10:54:47.687540557 +0000 UTC"}, Hostname:"ci-4186.1.0-a-2e829ed2e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.761 [INFO][5226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.873 [INFO][5226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.873 [INFO][5226] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-2e829ed2e0' Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.879 [INFO][5226] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.947 [INFO][5226] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.964 [INFO][5226] ipam/ipam.go 489: Trying affinity for 192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.975 [INFO][5226] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.980 [INFO][5226] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.980 [INFO][5226] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.128/26 handle="k8s-pod-network.d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.981 [INFO][5226] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:47.988 [INFO][5226] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.128/26 handle="k8s-pod-network.d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:48.000 [INFO][5226] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.20.130/26] block=192.168.20.128/26 handle="k8s-pod-network.d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:48.000 [INFO][5226] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.130/26] handle="k8s-pod-network.d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:48.000 [INFO][5226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 10:54:48.043425 containerd[1726]: 2025-01-29 10:54:48.000 [INFO][5226] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.130/26] IPv6=[] ContainerID="d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" HandleID="k8s-pod-network.d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-eth0" Jan 29 10:54:48.045157 containerd[1726]: 2025-01-29 10:54:48.004 [INFO][5172] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-nfw9l" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-eth0", GenerateName:"calico-apiserver-785d46db66-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff0d24de-055d-4e71-bae6-576826e88d95", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785d46db66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-2e829ed2e0", ContainerID:"", Pod:"calico-apiserver-785d46db66-nfw9l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f3afd5fae4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:54:48.045157 containerd[1726]: 2025-01-29 10:54:48.004 [INFO][5172] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.130/32] ContainerID="d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-nfw9l" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-eth0" Jan 29 10:54:48.045157 containerd[1726]: 2025-01-29 10:54:48.004 [INFO][5172] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f3afd5fae4 ContainerID="d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-nfw9l" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-eth0" Jan 29 10:54:48.045157 containerd[1726]: 2025-01-29 10:54:48.010 [INFO][5172] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-nfw9l" 
WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-eth0" Jan 29 10:54:48.045157 containerd[1726]: 2025-01-29 10:54:48.016 [INFO][5172] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-nfw9l" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-eth0", GenerateName:"calico-apiserver-785d46db66-", Namespace:"calico-apiserver", SelfLink:"", UID:"ff0d24de-055d-4e71-bae6-576826e88d95", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785d46db66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-2e829ed2e0", ContainerID:"d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc", Pod:"calico-apiserver-785d46db66-nfw9l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f3afd5fae4", MAC:"06:9c:c7:40:95:f2", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:54:48.045157 containerd[1726]: 2025-01-29 10:54:48.036 [INFO][5172] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-nfw9l" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--nfw9l-eth0" Jan 29 10:54:48.084449 containerd[1726]: time="2025-01-29T10:54:48.082367074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:54:48.084449 containerd[1726]: time="2025-01-29T10:54:48.082444594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:54:48.084449 containerd[1726]: time="2025-01-29T10:54:48.082460634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:48.084449 containerd[1726]: time="2025-01-29T10:54:48.082554754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:48.104436 containerd[1726]: time="2025-01-29T10:54:48.104292662Z" level=info msg="CreateContainer within sandbox \"ce5f42af873ace3a71d06af66b023c0d6fecffbd86c78c24ec691255b6fd8f7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef7fd01558998581ac0af3f12b1a351f9e3e804a581af7b0fb474250fa7823f4\"" Jan 29 10:54:48.106932 containerd[1726]: time="2025-01-29T10:54:48.105366022Z" level=info msg="StartContainer for \"ef7fd01558998581ac0af3f12b1a351f9e3e804a581af7b0fb474250fa7823f4\"" Jan 29 10:54:48.107243 systemd[1]: Started cri-containerd-d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc.scope - libcontainer container d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc. Jan 29 10:54:48.115655 systemd-networkd[1334]: cali8900d0d5a9c: Link UP Jan 29 10:54:48.118838 systemd-networkd[1334]: cali8900d0d5a9c: Gained carrier Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:47.644 [INFO][5199] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:47.673 [INFO][5199] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-eth0 calico-apiserver-785d46db66- calico-apiserver 8d76515f-5553-4e43-85a9-9b91a1e79d22 671 0 2025-01-29 10:54:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:785d46db66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186.1.0-a-2e829ed2e0 calico-apiserver-785d46db66-x8gcd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8900d0d5a9c [] []}} ContainerID="a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" 
Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-x8gcd" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-" Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:47.673 [INFO][5199] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-x8gcd" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-eth0" Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:47.802 [INFO][5246] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" HandleID="k8s-pod-network.a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-eth0" Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:47.945 [INFO][5246] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" HandleID="k8s-pod-network.a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a1860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186.1.0-a-2e829ed2e0", "pod":"calico-apiserver-785d46db66-x8gcd", "timestamp":"2025-01-29 10:54:47.802143618 +0000 UTC"}, Hostname:"ci-4186.1.0-a-2e829ed2e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:47.946 [INFO][5246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.001 [INFO][5246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.001 [INFO][5246] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-2e829ed2e0' Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.010 [INFO][5246] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.040 [INFO][5246] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.060 [INFO][5246] ipam/ipam.go 489: Trying affinity for 192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.070 [INFO][5246] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.075 [INFO][5246] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.075 [INFO][5246] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.128/26 handle="k8s-pod-network.a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.077 [INFO][5246] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905 Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.089 [INFO][5246] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.128/26 handle="k8s-pod-network.a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" 
host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.100 [INFO][5246] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.131/26] block=192.168.20.128/26 handle="k8s-pod-network.a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.102 [INFO][5246] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.131/26] handle="k8s-pod-network.a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.102 [INFO][5246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 10:54:48.143662 containerd[1726]: 2025-01-29 10:54:48.102 [INFO][5246] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.131/26] IPv6=[] ContainerID="a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" HandleID="k8s-pod-network.a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-eth0" Jan 29 10:54:48.146382 containerd[1726]: 2025-01-29 10:54:48.107 [INFO][5199] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-x8gcd" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-eth0", GenerateName:"calico-apiserver-785d46db66-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d76515f-5553-4e43-85a9-9b91a1e79d22", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 54, 17, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785d46db66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-2e829ed2e0", ContainerID:"", Pod:"calico-apiserver-785d46db66-x8gcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8900d0d5a9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:54:48.146382 containerd[1726]: 2025-01-29 10:54:48.108 [INFO][5199] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.131/32] ContainerID="a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-x8gcd" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-eth0" Jan 29 10:54:48.146382 containerd[1726]: 2025-01-29 10:54:48.108 [INFO][5199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8900d0d5a9c ContainerID="a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-x8gcd" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-eth0" Jan 29 10:54:48.146382 containerd[1726]: 2025-01-29 10:54:48.120 [INFO][5199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-x8gcd" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-eth0" Jan 29 10:54:48.146382 containerd[1726]: 2025-01-29 10:54:48.121 [INFO][5199] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-x8gcd" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-eth0", GenerateName:"calico-apiserver-785d46db66-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d76515f-5553-4e43-85a9-9b91a1e79d22", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"785d46db66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-2e829ed2e0", ContainerID:"a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905", Pod:"calico-apiserver-785d46db66-x8gcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8900d0d5a9c", MAC:"5a:03:be:65:84:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:54:48.146382 containerd[1726]: 2025-01-29 10:54:48.139 [INFO][5199] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905" Namespace="calico-apiserver" Pod="calico-apiserver-785d46db66-x8gcd" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--apiserver--785d46db66--x8gcd-eth0" Jan 29 10:54:48.155686 systemd[1]: Started cri-containerd-ef7fd01558998581ac0af3f12b1a351f9e3e804a581af7b0fb474250fa7823f4.scope - libcontainer container ef7fd01558998581ac0af3f12b1a351f9e3e804a581af7b0fb474250fa7823f4. Jan 29 10:54:48.218909 containerd[1726]: time="2025-01-29T10:54:48.218266164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:54:48.218909 containerd[1726]: time="2025-01-29T10:54:48.218463084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:54:48.218909 containerd[1726]: time="2025-01-29T10:54:48.218581644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:48.219083 containerd[1726]: time="2025-01-29T10:54:48.219019763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:48.237998 containerd[1726]: time="2025-01-29T10:54:48.237849714Z" level=info msg="StartContainer for \"ef7fd01558998581ac0af3f12b1a351f9e3e804a581af7b0fb474250fa7823f4\" returns successfully" Jan 29 10:54:48.255119 systemd-networkd[1334]: calia7f244fc5a5: Link UP Jan 29 10:54:48.256186 systemd-networkd[1334]: calia7f244fc5a5: Gained carrier Jan 29 10:54:48.280843 systemd[1]: Started cri-containerd-a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905.scope - libcontainer container a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905. Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:47.636 [INFO][5212] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:47.667 [INFO][5212] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-eth0 calico-kube-controllers-fdbfb578c- calico-system 24a732d5-74a3-4958-bf3c-24c35316b990 669 0 2025-01-29 10:54:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fdbfb578c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4186.1.0-a-2e829ed2e0 calico-kube-controllers-fdbfb578c-sx269 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia7f244fc5a5 [] []}} ContainerID="67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" Namespace="calico-system" Pod="calico-kube-controllers-fdbfb578c-sx269" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-" Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:47.667 [INFO][5212] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" Namespace="calico-system" Pod="calico-kube-controllers-fdbfb578c-sx269" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-eth0" Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:47.768 [INFO][5242] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" HandleID="k8s-pod-network.67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-eth0" Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:47.941 [INFO][5242] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" HandleID="k8s-pod-network.67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000483280), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.0-a-2e829ed2e0", "pod":"calico-kube-controllers-fdbfb578c-sx269", "timestamp":"2025-01-29 10:54:47.768397396 +0000 UTC"}, Hostname:"ci-4186.1.0-a-2e829ed2e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:47.948 [INFO][5242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.102 [INFO][5242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.102 [INFO][5242] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-2e829ed2e0' Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.119 [INFO][5242] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.141 [INFO][5242] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.174 [INFO][5242] ipam/ipam.go 489: Trying affinity for 192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.181 [INFO][5242] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.189 [INFO][5242] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.189 [INFO][5242] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.128/26 handle="k8s-pod-network.67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.194 [INFO][5242] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70 Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.207 [INFO][5242] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.128/26 handle="k8s-pod-network.67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.232 [INFO][5242] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.20.132/26] block=192.168.20.128/26 handle="k8s-pod-network.67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.234 [INFO][5242] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.132/26] handle="k8s-pod-network.67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.236 [INFO][5242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 10:54:48.292665 containerd[1726]: 2025-01-29 10:54:48.237 [INFO][5242] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.132/26] IPv6=[] ContainerID="67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" HandleID="k8s-pod-network.67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-eth0" Jan 29 10:54:48.293567 containerd[1726]: 2025-01-29 10:54:48.250 [INFO][5212] cni-plugin/k8s.go 386: Populated endpoint ContainerID="67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" Namespace="calico-system" Pod="calico-kube-controllers-fdbfb578c-sx269" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-eth0", GenerateName:"calico-kube-controllers-fdbfb578c-", Namespace:"calico-system", SelfLink:"", UID:"24a732d5-74a3-4958-bf3c-24c35316b990", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 54, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fdbfb578c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-2e829ed2e0", ContainerID:"", Pod:"calico-kube-controllers-fdbfb578c-sx269", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7f244fc5a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:54:48.293567 containerd[1726]: 2025-01-29 10:54:48.250 [INFO][5212] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.132/32] ContainerID="67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" Namespace="calico-system" Pod="calico-kube-controllers-fdbfb578c-sx269" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-eth0" Jan 29 10:54:48.293567 containerd[1726]: 2025-01-29 10:54:48.250 [INFO][5212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7f244fc5a5 ContainerID="67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" Namespace="calico-system" Pod="calico-kube-controllers-fdbfb578c-sx269" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-eth0" Jan 29 10:54:48.293567 containerd[1726]: 2025-01-29 10:54:48.257 [INFO][5212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" 
Namespace="calico-system" Pod="calico-kube-controllers-fdbfb578c-sx269" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-eth0" Jan 29 10:54:48.293567 containerd[1726]: 2025-01-29 10:54:48.257 [INFO][5212] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" Namespace="calico-system" Pod="calico-kube-controllers-fdbfb578c-sx269" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-eth0", GenerateName:"calico-kube-controllers-fdbfb578c-", Namespace:"calico-system", SelfLink:"", UID:"24a732d5-74a3-4958-bf3c-24c35316b990", ResourceVersion:"669", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 54, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fdbfb578c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-2e829ed2e0", ContainerID:"67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70", Pod:"calico-kube-controllers-fdbfb578c-sx269", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia7f244fc5a5", MAC:"2a:40:01:2b:42:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:54:48.293567 containerd[1726]: 2025-01-29 10:54:48.289 [INFO][5212] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70" Namespace="calico-system" Pod="calico-kube-controllers-fdbfb578c-sx269" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-calico--kube--controllers--fdbfb578c--sx269-eth0" Jan 29 10:54:48.298013 containerd[1726]: time="2025-01-29T10:54:48.297651283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-nfw9l,Uid:ff0d24de-055d-4e71-bae6-576826e88d95,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc\"" Jan 29 10:54:48.301978 containerd[1726]: time="2025-01-29T10:54:48.301555081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 10:54:48.341019 containerd[1726]: time="2025-01-29T10:54:48.340616061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:54:48.341019 containerd[1726]: time="2025-01-29T10:54:48.340678021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:54:48.341019 containerd[1726]: time="2025-01-29T10:54:48.340694021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:48.341019 containerd[1726]: time="2025-01-29T10:54:48.340796221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:48.356775 containerd[1726]: time="2025-01-29T10:54:48.356521412Z" level=info msg="StopPodSandbox for \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\"" Jan 29 10:54:48.356775 containerd[1726]: time="2025-01-29T10:54:48.356619452Z" level=info msg="TearDown network for sandbox \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\" successfully" Jan 29 10:54:48.356775 containerd[1726]: time="2025-01-29T10:54:48.356632132Z" level=info msg="StopPodSandbox for \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\" returns successfully" Jan 29 10:54:48.357450 containerd[1726]: time="2025-01-29T10:54:48.357382612Z" level=info msg="StopPodSandbox for \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\"" Jan 29 10:54:48.358087 containerd[1726]: time="2025-01-29T10:54:48.357577892Z" level=info msg="TearDown network for sandbox \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\" successfully" Jan 29 10:54:48.359886 containerd[1726]: time="2025-01-29T10:54:48.359841091Z" level=info msg="StopPodSandbox for \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\" returns successfully" Jan 29 10:54:48.360516 containerd[1726]: time="2025-01-29T10:54:48.360226531Z" level=info msg="StopPodSandbox for \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\"" Jan 29 10:54:48.360516 containerd[1726]: time="2025-01-29T10:54:48.360304570Z" level=info msg="TearDown network for sandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\" successfully" Jan 29 10:54:48.360516 containerd[1726]: time="2025-01-29T10:54:48.360313730Z" level=info msg="StopPodSandbox for \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\" returns successfully" Jan 29 10:54:48.361955 containerd[1726]: time="2025-01-29T10:54:48.361218010Z" level=info msg="StopPodSandbox for 
\"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\"" Jan 29 10:54:48.362768 containerd[1726]: time="2025-01-29T10:54:48.362578649Z" level=info msg="TearDown network for sandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" successfully" Jan 29 10:54:48.362768 containerd[1726]: time="2025-01-29T10:54:48.362598449Z" level=info msg="StopPodSandbox for \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" returns successfully" Jan 29 10:54:48.363490 containerd[1726]: time="2025-01-29T10:54:48.363124929Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\"" Jan 29 10:54:48.363490 containerd[1726]: time="2025-01-29T10:54:48.363254249Z" level=info msg="TearDown network for sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" successfully" Jan 29 10:54:48.363490 containerd[1726]: time="2025-01-29T10:54:48.363264769Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" returns successfully" Jan 29 10:54:48.369008 kubelet[3366]: I0129 10:54:48.368560 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lmknf" podStartSLOduration=37.368541326 podStartE2EDuration="37.368541326s" podCreationTimestamp="2025-01-29 10:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:54:48.366254407 +0000 UTC m=+42.448071028" watchObservedRunningTime="2025-01-29 10:54:48.368541326 +0000 UTC m=+42.450357947" Jan 29 10:54:48.369451 systemd-networkd[1334]: cali4db625dc5d6: Link UP Jan 29 10:54:48.376490 systemd-networkd[1334]: cali4db625dc5d6: Gained carrier Jan 29 10:54:48.381384 containerd[1726]: time="2025-01-29T10:54:48.375357683Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:5,}" Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:47.636 [INFO][5190] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:47.666 [INFO][5190] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-eth0 csi-node-driver- calico-system 85d938fd-edbc-4618-8500-89676d3770ef 581 0 2025-01-29 10:54:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4186.1.0-a-2e829ed2e0 csi-node-driver-ct4dr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4db625dc5d6 [] []}} ContainerID="2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" Namespace="calico-system" Pod="csi-node-driver-ct4dr" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-" Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:47.666 [INFO][5190] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" Namespace="calico-system" Pod="csi-node-driver-ct4dr" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-eth0" Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:47.818 [INFO][5247] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" HandleID="k8s-pod-network.2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" 
Workload="ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-eth0" Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:47.949 [INFO][5247] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" HandleID="k8s-pod-network.2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011a740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186.1.0-a-2e829ed2e0", "pod":"csi-node-driver-ct4dr", "timestamp":"2025-01-29 10:54:47.81766485 +0000 UTC"}, Hostname:"ci-4186.1.0-a-2e829ed2e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:47.950 [INFO][5247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.236 [INFO][5247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.238 [INFO][5247] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-2e829ed2e0' Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.244 [INFO][5247] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.260 [INFO][5247] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.278 [INFO][5247] ipam/ipam.go 489: Trying affinity for 192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.289 [INFO][5247] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.295 [INFO][5247] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.295 [INFO][5247] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.128/26 handle="k8s-pod-network.2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.300 [INFO][5247] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.336 [INFO][5247] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.128/26 handle="k8s-pod-network.2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.352 [INFO][5247] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.20.133/26] block=192.168.20.128/26 handle="k8s-pod-network.2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.352 [INFO][5247] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.133/26] handle="k8s-pod-network.2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.352 [INFO][5247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 10:54:48.417176 containerd[1726]: 2025-01-29 10:54:48.352 [INFO][5247] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.133/26] IPv6=[] ContainerID="2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" HandleID="k8s-pod-network.2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-eth0" Jan 29 10:54:48.418346 containerd[1726]: 2025-01-29 10:54:48.358 [INFO][5190] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" Namespace="calico-system" Pod="csi-node-driver-ct4dr" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85d938fd-edbc-4618-8500-89676d3770ef", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 54, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-2e829ed2e0", ContainerID:"", Pod:"csi-node-driver-ct4dr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4db625dc5d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:54:48.418346 containerd[1726]: 2025-01-29 10:54:48.358 [INFO][5190] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.133/32] ContainerID="2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" Namespace="calico-system" Pod="csi-node-driver-ct4dr" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-eth0" Jan 29 10:54:48.418346 containerd[1726]: 2025-01-29 10:54:48.358 [INFO][5190] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4db625dc5d6 ContainerID="2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" Namespace="calico-system" Pod="csi-node-driver-ct4dr" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-eth0" Jan 29 10:54:48.418346 containerd[1726]: 2025-01-29 10:54:48.385 [INFO][5190] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" Namespace="calico-system" Pod="csi-node-driver-ct4dr" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-eth0" Jan 29 10:54:48.418346 containerd[1726]: 2025-01-29 10:54:48.388 
[INFO][5190] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" Namespace="calico-system" Pod="csi-node-driver-ct4dr" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"85d938fd-edbc-4618-8500-89676d3770ef", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 54, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-2e829ed2e0", ContainerID:"2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c", Pod:"csi-node-driver-ct4dr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4db625dc5d6", MAC:"7e:d4:3e:b2:3a:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:54:48.418346 containerd[1726]: 2025-01-29 10:54:48.410 [INFO][5190] cni-plugin/k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c" Namespace="calico-system" Pod="csi-node-driver-ct4dr" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-csi--node--driver--ct4dr-eth0" Jan 29 10:54:48.422157 systemd[1]: Started cri-containerd-67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70.scope - libcontainer container 67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70. Jan 29 10:54:48.483104 containerd[1726]: time="2025-01-29T10:54:48.482582627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:54:48.483104 containerd[1726]: time="2025-01-29T10:54:48.482651107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:54:48.483526 containerd[1726]: time="2025-01-29T10:54:48.482663867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:48.487724 containerd[1726]: time="2025-01-29T10:54:48.484475546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:48.511146 containerd[1726]: time="2025-01-29T10:54:48.510997413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-785d46db66-x8gcd,Uid:8d76515f-5553-4e43-85a9-9b91a1e79d22,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905\"" Jan 29 10:54:48.542705 systemd[1]: Started cri-containerd-2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c.scope - libcontainer container 2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c. 
Jan 29 10:54:48.621763 containerd[1726]: time="2025-01-29T10:54:48.621589716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ct4dr,Uid:85d938fd-edbc-4618-8500-89676d3770ef,Namespace:calico-system,Attempt:5,} returns sandbox id \"2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c\"" Jan 29 10:54:48.661812 containerd[1726]: time="2025-01-29T10:54:48.661712455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fdbfb578c-sx269,Uid:24a732d5-74a3-4958-bf3c-24c35316b990,Namespace:calico-system,Attempt:5,} returns sandbox id \"67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70\"" Jan 29 10:54:48.671376 systemd-networkd[1334]: cali894a8fb03c9: Link UP Jan 29 10:54:48.671723 systemd-networkd[1334]: cali894a8fb03c9: Gained carrier Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.474 [INFO][5509] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.514 [INFO][5509] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0 coredns-6f6b679f8f- kube-system 2b04ba18-4078-4572-b181-dabad7c530d3 786 0 2025-01-29 10:54:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186.1.0-a-2e829ed2e0 coredns-6f6b679f8f-v9l7s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali894a8fb03c9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9l7s" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-" Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.515 [INFO][5509] 
cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9l7s" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0" Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.580 [INFO][5579] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" HandleID="k8s-pod-network.fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0" Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.596 [INFO][5579] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" HandleID="k8s-pod-network.fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004dcc00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186.1.0-a-2e829ed2e0", "pod":"coredns-6f6b679f8f-v9l7s", "timestamp":"2025-01-29 10:54:48.580683017 +0000 UTC"}, Hostname:"ci-4186.1.0-a-2e829ed2e0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.596 [INFO][5579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.597 [INFO][5579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.597 [INFO][5579] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186.1.0-a-2e829ed2e0' Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.607 [INFO][5579] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.614 [INFO][5579] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.620 [INFO][5579] ipam/ipam.go 489: Trying affinity for 192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.624 [INFO][5579] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.628 [INFO][5579] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.128/26 host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.628 [INFO][5579] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.128/26 handle="k8s-pod-network.fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.630 [INFO][5579] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.644 [INFO][5579] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.128/26 handle="k8s-pod-network.fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.665 [INFO][5579] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.20.134/26] block=192.168.20.128/26 handle="k8s-pod-network.fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.665 [INFO][5579] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.134/26] handle="k8s-pod-network.fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" host="ci-4186.1.0-a-2e829ed2e0" Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.665 [INFO][5579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 10:54:48.689950 containerd[1726]: 2025-01-29 10:54:48.665 [INFO][5579] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.134/26] IPv6=[] ContainerID="fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" HandleID="k8s-pod-network.fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" Workload="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0" Jan 29 10:54:48.692306 containerd[1726]: 2025-01-29 10:54:48.668 [INFO][5509] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9l7s" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"2b04ba18-4078-4572-b181-dabad7c530d3", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 54, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-2e829ed2e0", ContainerID:"", Pod:"coredns-6f6b679f8f-v9l7s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali894a8fb03c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:54:48.692306 containerd[1726]: 2025-01-29 10:54:48.668 [INFO][5509] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.134/32] ContainerID="fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9l7s" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0" Jan 29 10:54:48.692306 containerd[1726]: 2025-01-29 10:54:48.668 [INFO][5509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali894a8fb03c9 ContainerID="fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9l7s" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0" Jan 29 10:54:48.692306 containerd[1726]: 2025-01-29 10:54:48.672 [INFO][5509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9l7s" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0" Jan 29 10:54:48.692306 containerd[1726]: 2025-01-29 10:54:48.672 [INFO][5509] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9l7s" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"2b04ba18-4078-4572-b181-dabad7c530d3", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 10, 54, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186.1.0-a-2e829ed2e0", ContainerID:"fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb", Pod:"coredns-6f6b679f8f-v9l7s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali894a8fb03c9", MAC:"22:11:44:1e:92:a7", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 10:54:48.692306 containerd[1726]: 2025-01-29 10:54:48.686 [INFO][5509] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb" Namespace="kube-system" Pod="coredns-6f6b679f8f-v9l7s" WorkloadEndpoint="ci--4186.1.0--a--2e829ed2e0-k8s-coredns--6f6b679f8f--v9l7s-eth0" Jan 29 10:54:48.723931 containerd[1726]: time="2025-01-29T10:54:48.723306503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 10:54:48.723931 containerd[1726]: time="2025-01-29T10:54:48.723358743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 10:54:48.723931 containerd[1726]: time="2025-01-29T10:54:48.723375023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:48.723931 containerd[1726]: time="2025-01-29T10:54:48.723609103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 10:54:48.741041 systemd[1]: Started cri-containerd-fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb.scope - libcontainer container fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb. 
Jan 29 10:54:48.786002 containerd[1726]: time="2025-01-29T10:54:48.785046832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v9l7s,Uid:2b04ba18-4078-4572-b181-dabad7c530d3,Namespace:kube-system,Attempt:5,} returns sandbox id \"fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb\"" Jan 29 10:54:48.789916 containerd[1726]: time="2025-01-29T10:54:48.789879349Z" level=info msg="CreateContainer within sandbox \"fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 10:54:48.825808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4022359319.mount: Deactivated successfully. Jan 29 10:54:48.855475 containerd[1726]: time="2025-01-29T10:54:48.855359075Z" level=info msg="CreateContainer within sandbox \"fb113fcb905dd0e8850c83ff46e82cdbb1187cec8194afce3c831a325b1c5feb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9d781ad21b85168b3dd70d2ecec070f08abca475be5a46b0b50eb28f1d1697a\"" Jan 29 10:54:48.855929 containerd[1726]: time="2025-01-29T10:54:48.855778235Z" level=info msg="StartContainer for \"a9d781ad21b85168b3dd70d2ecec070f08abca475be5a46b0b50eb28f1d1697a\"" Jan 29 10:54:48.886039 systemd[1]: Started cri-containerd-a9d781ad21b85168b3dd70d2ecec070f08abca475be5a46b0b50eb28f1d1697a.scope - libcontainer container a9d781ad21b85168b3dd70d2ecec070f08abca475be5a46b0b50eb28f1d1697a. 
Jan 29 10:54:48.909786 containerd[1726]: time="2025-01-29T10:54:48.909738367Z" level=info msg="StartContainer for \"a9d781ad21b85168b3dd70d2ecec070f08abca475be5a46b0b50eb28f1d1697a\" returns successfully" Jan 29 10:54:49.035005 systemd-networkd[1334]: cali0f3afd5fae4: Gained IPv6LL Jan 29 10:54:49.319896 kernel: bpftool[5801]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 10:54:49.399229 kubelet[3366]: I0129 10:54:49.399057 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-v9l7s" podStartSLOduration=38.399039115 podStartE2EDuration="38.399039115s" podCreationTimestamp="2025-01-29 10:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 10:54:49.398208076 +0000 UTC m=+43.480024737" watchObservedRunningTime="2025-01-29 10:54:49.399039115 +0000 UTC m=+43.480855696" Jan 29 10:54:49.419036 systemd-networkd[1334]: cali8900d0d5a9c: Gained IPv6LL Jan 29 10:54:49.675052 systemd-networkd[1334]: calia7f244fc5a5: Gained IPv6LL Jan 29 10:54:49.867010 systemd-networkd[1334]: cali7517c85c8a2: Gained IPv6LL Jan 29 10:54:50.059009 systemd-networkd[1334]: cali4db625dc5d6: Gained IPv6LL Jan 29 10:54:50.143911 systemd-networkd[1334]: vxlan.calico: Link UP Jan 29 10:54:50.143921 systemd-networkd[1334]: vxlan.calico: Gained carrier Jan 29 10:54:50.699122 systemd-networkd[1334]: cali894a8fb03c9: Gained IPv6LL Jan 29 10:54:52.170979 systemd-networkd[1334]: vxlan.calico: Gained IPv6LL Jan 29 10:54:52.833667 containerd[1726]: time="2025-01-29T10:54:52.833587853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:52.837187 containerd[1726]: time="2025-01-29T10:54:52.837147451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 29 
10:54:52.841906 containerd[1726]: time="2025-01-29T10:54:52.841840169Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:52.849134 containerd[1726]: time="2025-01-29T10:54:52.849058845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:52.849847 containerd[1726]: time="2025-01-29T10:54:52.849664805Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 4.548075644s" Jan 29 10:54:52.849847 containerd[1726]: time="2025-01-29T10:54:52.849690205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 10:54:52.860177 containerd[1726]: time="2025-01-29T10:54:52.860141799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 10:54:52.877940 containerd[1726]: time="2025-01-29T10:54:52.877908349Z" level=info msg="CreateContainer within sandbox \"d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 10:54:52.926197 containerd[1726]: time="2025-01-29T10:54:52.926121763Z" level=info msg="CreateContainer within sandbox \"d38644f0176785cb5ebb2dba45054d09a74b15113b206c53c9a23cdd3363b2bc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"47820369ae8e4ac92581858199999c5b8b3df8d5ed1d05e5b406feeb07c7364c\"" 
Jan 29 10:54:52.929091 containerd[1726]: time="2025-01-29T10:54:52.929049322Z" level=info msg="StartContainer for \"47820369ae8e4ac92581858199999c5b8b3df8d5ed1d05e5b406feeb07c7364c\"" Jan 29 10:54:52.965024 systemd[1]: Started cri-containerd-47820369ae8e4ac92581858199999c5b8b3df8d5ed1d05e5b406feeb07c7364c.scope - libcontainer container 47820369ae8e4ac92581858199999c5b8b3df8d5ed1d05e5b406feeb07c7364c. Jan 29 10:54:53.003754 containerd[1726]: time="2025-01-29T10:54:53.003462041Z" level=info msg="StartContainer for \"47820369ae8e4ac92581858199999c5b8b3df8d5ed1d05e5b406feeb07c7364c\" returns successfully" Jan 29 10:54:53.197729 containerd[1726]: time="2025-01-29T10:54:53.197275016Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:53.201254 containerd[1726]: time="2025-01-29T10:54:53.200650894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 10:54:53.202555 containerd[1726]: time="2025-01-29T10:54:53.202531653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 342.354894ms" Jan 29 10:54:53.202666 containerd[1726]: time="2025-01-29T10:54:53.202650893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 10:54:53.206714 containerd[1726]: time="2025-01-29T10:54:53.206687331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 10:54:53.229189 containerd[1726]: time="2025-01-29T10:54:53.228805279Z" level=info msg="CreateContainer within sandbox 
\"a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 10:54:53.275753 containerd[1726]: time="2025-01-29T10:54:53.275715333Z" level=info msg="CreateContainer within sandbox \"a35b643fc5e5074d42fabf8fafd93192dd37e807e4f0a6300ac6d4af1932a905\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"85122f0d065e26e9f344825ee572571f9a8a830d2560714ebd5fa3f9ffa8ec83\"" Jan 29 10:54:53.276499 containerd[1726]: time="2025-01-29T10:54:53.276372253Z" level=info msg="StartContainer for \"85122f0d065e26e9f344825ee572571f9a8a830d2560714ebd5fa3f9ffa8ec83\"" Jan 29 10:54:53.305014 systemd[1]: Started cri-containerd-85122f0d065e26e9f344825ee572571f9a8a830d2560714ebd5fa3f9ffa8ec83.scope - libcontainer container 85122f0d065e26e9f344825ee572571f9a8a830d2560714ebd5fa3f9ffa8ec83. Jan 29 10:54:53.343334 containerd[1726]: time="2025-01-29T10:54:53.343290177Z" level=info msg="StartContainer for \"85122f0d065e26e9f344825ee572571f9a8a830d2560714ebd5fa3f9ffa8ec83\" returns successfully" Jan 29 10:54:53.449956 kubelet[3366]: I0129 10:54:53.449124 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-785d46db66-nfw9l" podStartSLOduration=31.890007641 podStartE2EDuration="36.449106479s" podCreationTimestamp="2025-01-29 10:54:17 +0000 UTC" firstStartedPulling="2025-01-29 10:54:48.300931361 +0000 UTC m=+42.382747982" lastFinishedPulling="2025-01-29 10:54:52.860030239 +0000 UTC m=+46.941846820" observedRunningTime="2025-01-29 10:54:53.433939687 +0000 UTC m=+47.515756308" watchObservedRunningTime="2025-01-29 10:54:53.449106479 +0000 UTC m=+47.530923100" Jan 29 10:54:53.449956 kubelet[3366]: I0129 10:54:53.449404 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-785d46db66-x8gcd" podStartSLOduration=31.758265759 podStartE2EDuration="36.449398399s" 
podCreationTimestamp="2025-01-29 10:54:17 +0000 UTC" firstStartedPulling="2025-01-29 10:54:48.515221251 +0000 UTC m=+42.597037832" lastFinishedPulling="2025-01-29 10:54:53.206353891 +0000 UTC m=+47.288170472" observedRunningTime="2025-01-29 10:54:53.44802072 +0000 UTC m=+47.529837381" watchObservedRunningTime="2025-01-29 10:54:53.449398399 +0000 UTC m=+47.531215020" Jan 29 10:54:54.419131 kubelet[3366]: I0129 10:54:54.418516 3366 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 10:54:54.530102 containerd[1726]: time="2025-01-29T10:54:54.529999772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:54.532148 containerd[1726]: time="2025-01-29T10:54:54.532104851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 10:54:54.536138 containerd[1726]: time="2025-01-29T10:54:54.535999649Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:54.541057 containerd[1726]: time="2025-01-29T10:54:54.540918206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:54.541553 containerd[1726]: time="2025-01-29T10:54:54.541522766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.334621275s" Jan 29 10:54:54.541553 containerd[1726]: time="2025-01-29T10:54:54.541555406Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 10:54:54.544745 containerd[1726]: time="2025-01-29T10:54:54.544404885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 10:54:54.546189 containerd[1726]: time="2025-01-29T10:54:54.546152884Z" level=info msg="CreateContainer within sandbox \"2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 10:54:54.592415 containerd[1726]: time="2025-01-29T10:54:54.592364338Z" level=info msg="CreateContainer within sandbox \"2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cf982af4b5bf2291b30dfc2bf4fe098f09a1a0c5b6e8c0ce3f807ad979696b02\"" Jan 29 10:54:54.595686 containerd[1726]: time="2025-01-29T10:54:54.593444618Z" level=info msg="StartContainer for \"cf982af4b5bf2291b30dfc2bf4fe098f09a1a0c5b6e8c0ce3f807ad979696b02\"" Jan 29 10:54:54.642187 systemd[1]: Started cri-containerd-cf982af4b5bf2291b30dfc2bf4fe098f09a1a0c5b6e8c0ce3f807ad979696b02.scope - libcontainer container cf982af4b5bf2291b30dfc2bf4fe098f09a1a0c5b6e8c0ce3f807ad979696b02. 
Jan 29 10:54:54.707978 containerd[1726]: time="2025-01-29T10:54:54.704098238Z" level=info msg="StartContainer for \"cf982af4b5bf2291b30dfc2bf4fe098f09a1a0c5b6e8c0ce3f807ad979696b02\" returns successfully" Jan 29 10:54:55.425531 kubelet[3366]: I0129 10:54:55.425367 3366 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 10:54:58.316247 containerd[1726]: time="2025-01-29T10:54:58.316020797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:58.319957 containerd[1726]: time="2025-01-29T10:54:58.319903435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 29 10:54:58.324677 containerd[1726]: time="2025-01-29T10:54:58.324408672Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:58.329465 containerd[1726]: time="2025-01-29T10:54:58.329438070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:58.330106 containerd[1726]: time="2025-01-29T10:54:58.330072949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.785636105s" Jan 29 10:54:58.330106 containerd[1726]: time="2025-01-29T10:54:58.330103149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 29 10:54:58.331938 containerd[1726]: time="2025-01-29T10:54:58.331749508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 10:54:58.348750 containerd[1726]: time="2025-01-29T10:54:58.348497179Z" level=info msg="CreateContainer within sandbox \"67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 10:54:58.404105 containerd[1726]: time="2025-01-29T10:54:58.404068469Z" level=info msg="CreateContainer within sandbox \"67d413ebc37372ed77aa1e5ecdde9a2b4b71839f0ac4843e9505e4af7f242a70\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c9fc0e0d6e2c4c62d29cb365f0c0ce6970c60b56356d421037854e713d2b29d5\"" Jan 29 10:54:58.405128 containerd[1726]: time="2025-01-29T10:54:58.405031348Z" level=info msg="StartContainer for \"c9fc0e0d6e2c4c62d29cb365f0c0ce6970c60b56356d421037854e713d2b29d5\"" Jan 29 10:54:58.440028 systemd[1]: Started cri-containerd-c9fc0e0d6e2c4c62d29cb365f0c0ce6970c60b56356d421037854e713d2b29d5.scope - libcontainer container c9fc0e0d6e2c4c62d29cb365f0c0ce6970c60b56356d421037854e713d2b29d5. 
Jan 29 10:54:58.482512 containerd[1726]: time="2025-01-29T10:54:58.482463986Z" level=info msg="StartContainer for \"c9fc0e0d6e2c4c62d29cb365f0c0ce6970c60b56356d421037854e713d2b29d5\" returns successfully" Jan 29 10:54:59.510600 kubelet[3366]: I0129 10:54:59.510421 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-fdbfb578c-sx269" podStartSLOduration=30.843342133 podStartE2EDuration="40.510402948s" podCreationTimestamp="2025-01-29 10:54:19 +0000 UTC" firstStartedPulling="2025-01-29 10:54:48.664009774 +0000 UTC m=+42.745826395" lastFinishedPulling="2025-01-29 10:54:58.331070589 +0000 UTC m=+52.412887210" observedRunningTime="2025-01-29 10:54:59.472509369 +0000 UTC m=+53.554325990" watchObservedRunningTime="2025-01-29 10:54:59.510402948 +0000 UTC m=+53.592219569" Jan 29 10:54:59.649955 containerd[1726]: time="2025-01-29T10:54:59.649898113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:59.652812 containerd[1726]: time="2025-01-29T10:54:59.652411191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 10:54:59.658966 containerd[1726]: time="2025-01-29T10:54:59.658920428Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:59.666176 containerd[1726]: time="2025-01-29T10:54:59.665918304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 10:54:59.667995 containerd[1726]: time="2025-01-29T10:54:59.667865223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" 
with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.336071235s" Jan 29 10:54:59.667995 containerd[1726]: time="2025-01-29T10:54:59.667903903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 10:54:59.671152 containerd[1726]: time="2025-01-29T10:54:59.671113661Z" level=info msg="CreateContainer within sandbox \"2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 10:54:59.715749 containerd[1726]: time="2025-01-29T10:54:59.715135477Z" level=info msg="CreateContainer within sandbox \"2253c5a23bb1c07b6f3a5da7f749a9c50b742d9da3564714f2f0494293084d8c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"de67791f06b3017a3a406e92965cbdc745b8e513c75a63afd7164a00941c6dcf\"" Jan 29 10:54:59.717158 containerd[1726]: time="2025-01-29T10:54:59.717117076Z" level=info msg="StartContainer for \"de67791f06b3017a3a406e92965cbdc745b8e513c75a63afd7164a00941c6dcf\"" Jan 29 10:54:59.751065 systemd[1]: Started cri-containerd-de67791f06b3017a3a406e92965cbdc745b8e513c75a63afd7164a00941c6dcf.scope - libcontainer container de67791f06b3017a3a406e92965cbdc745b8e513c75a63afd7164a00941c6dcf. 
Jan 29 10:54:59.800642 containerd[1726]: time="2025-01-29T10:54:59.800539471Z" level=info msg="StartContainer for \"de67791f06b3017a3a406e92965cbdc745b8e513c75a63afd7164a00941c6dcf\" returns successfully" Jan 29 10:55:00.122368 kubelet[3366]: I0129 10:55:00.122263 3366 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 10:55:00.122368 kubelet[3366]: I0129 10:55:00.122307 3366 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 10:55:00.472678 kubelet[3366]: I0129 10:55:00.472523 3366 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ct4dr" podStartSLOduration=30.439353876 podStartE2EDuration="41.47250711s" podCreationTimestamp="2025-01-29 10:54:19 +0000 UTC" firstStartedPulling="2025-01-29 10:54:48.636192148 +0000 UTC m=+42.718008769" lastFinishedPulling="2025-01-29 10:54:59.669345382 +0000 UTC m=+53.751162003" observedRunningTime="2025-01-29 10:55:00.47199787 +0000 UTC m=+54.553814491" watchObservedRunningTime="2025-01-29 10:55:00.47250711 +0000 UTC m=+54.554323731" Jan 29 10:55:06.034545 containerd[1726]: time="2025-01-29T10:55:06.034501209Z" level=info msg="StopPodSandbox for \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\"" Jan 29 10:55:06.034912 containerd[1726]: time="2025-01-29T10:55:06.034612049Z" level=info msg="TearDown network for sandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" successfully" Jan 29 10:55:06.034912 containerd[1726]: time="2025-01-29T10:55:06.034624689Z" level=info msg="StopPodSandbox for \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" returns successfully" Jan 29 10:55:06.035517 containerd[1726]: time="2025-01-29T10:55:06.035123408Z" level=info msg="RemovePodSandbox for 
\"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\"" Jan 29 10:55:06.035517 containerd[1726]: time="2025-01-29T10:55:06.035150808Z" level=info msg="Forcibly stopping sandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\"" Jan 29 10:55:06.035517 containerd[1726]: time="2025-01-29T10:55:06.035213208Z" level=info msg="TearDown network for sandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" successfully" Jan 29 10:55:06.045101 containerd[1726]: time="2025-01-29T10:55:06.045054284Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:55:06.045281 containerd[1726]: time="2025-01-29T10:55:06.045119564Z" level=info msg="RemovePodSandbox \"7b8fc1e21c7d72d235bddca741a930a59f6275cd61e9584dad382c9869778604\" returns successfully" Jan 29 10:55:06.045883 containerd[1726]: time="2025-01-29T10:55:06.045802244Z" level=info msg="StopPodSandbox for \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\"" Jan 29 10:55:06.045965 containerd[1726]: time="2025-01-29T10:55:06.045919403Z" level=info msg="TearDown network for sandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\" successfully" Jan 29 10:55:06.045965 containerd[1726]: time="2025-01-29T10:55:06.045931563Z" level=info msg="StopPodSandbox for \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\" returns successfully" Jan 29 10:55:06.046237 containerd[1726]: time="2025-01-29T10:55:06.046187243Z" level=info msg="RemovePodSandbox for \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\"" Jan 29 10:55:06.046237 containerd[1726]: time="2025-01-29T10:55:06.046213523Z" level=info msg="Forcibly stopping sandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\"" Jan 29 
10:55:06.046298 containerd[1726]: time="2025-01-29T10:55:06.046272883Z" level=info msg="TearDown network for sandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\" successfully" Jan 29 10:55:06.055412 containerd[1726]: time="2025-01-29T10:55:06.055365719Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:55:06.055525 containerd[1726]: time="2025-01-29T10:55:06.055434199Z" level=info msg="RemovePodSandbox \"6c4e1a1d5e4e6626b51c91064a0f0fa522b59ed67fd0b2c7473bf7eae5232223\" returns successfully" Jan 29 10:55:06.055838 containerd[1726]: time="2025-01-29T10:55:06.055810399Z" level=info msg="StopPodSandbox for \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\"" Jan 29 10:55:06.055952 containerd[1726]: time="2025-01-29T10:55:06.055929799Z" level=info msg="TearDown network for sandbox \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\" successfully" Jan 29 10:55:06.055952 containerd[1726]: time="2025-01-29T10:55:06.055947719Z" level=info msg="StopPodSandbox for \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\" returns successfully" Jan 29 10:55:06.056901 containerd[1726]: time="2025-01-29T10:55:06.056255879Z" level=info msg="RemovePodSandbox for \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\"" Jan 29 10:55:06.056901 containerd[1726]: time="2025-01-29T10:55:06.056282799Z" level=info msg="Forcibly stopping sandbox \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\"" Jan 29 10:55:06.056901 containerd[1726]: time="2025-01-29T10:55:06.056353399Z" level=info msg="TearDown network for sandbox \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\" successfully" Jan 29 10:55:06.065962 containerd[1726]: time="2025-01-29T10:55:06.065920634Z" 
level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 10:55:06.066051 containerd[1726]: time="2025-01-29T10:55:06.065980754Z" level=info msg="RemovePodSandbox \"d20a1bf08cb54e30d0a413607d5914102b519a26d36551854065ea3c5f2062f7\" returns successfully" Jan 29 10:55:06.066610 containerd[1726]: time="2025-01-29T10:55:06.066435914Z" level=info msg="StopPodSandbox for \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\"" Jan 29 10:55:06.066610 containerd[1726]: time="2025-01-29T10:55:06.066528474Z" level=info msg="TearDown network for sandbox \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\" successfully" Jan 29 10:55:06.066610 containerd[1726]: time="2025-01-29T10:55:06.066538314Z" level=info msg="StopPodSandbox for \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\" returns successfully" Jan 29 10:55:06.067107 containerd[1726]: time="2025-01-29T10:55:06.066966274Z" level=info msg="RemovePodSandbox for \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\"" Jan 29 10:55:06.067107 containerd[1726]: time="2025-01-29T10:55:06.067002554Z" level=info msg="Forcibly stopping sandbox \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\"" Jan 29 10:55:06.067107 containerd[1726]: time="2025-01-29T10:55:06.067061674Z" level=info msg="TearDown network for sandbox \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\" successfully" Jan 29 10:55:06.076404 containerd[1726]: time="2025-01-29T10:55:06.076244470Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.076404 containerd[1726]: time="2025-01-29T10:55:06.076369030Z" level=info msg="RemovePodSandbox \"c7e637408df9a1003520b89e0e960b9d3fba84ce26a02960d793518d3517c3e0\" returns successfully" Jan 29 10:55:06.077166 containerd[1726]: time="2025-01-29T10:55:06.076989309Z" level=info msg="StopPodSandbox for \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\"" Jan 29 10:55:06.077166 containerd[1726]: time="2025-01-29T10:55:06.077109869Z" level=info msg="TearDown network for sandbox \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\" successfully" Jan 29 10:55:06.077166 containerd[1726]: time="2025-01-29T10:55:06.077119669Z" level=info msg="StopPodSandbox for \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\" returns successfully" Jan 29 10:55:06.077626 containerd[1726]: time="2025-01-29T10:55:06.077493389Z" level=info msg="RemovePodSandbox for \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\"" Jan 29 10:55:06.077626 containerd[1726]: time="2025-01-29T10:55:06.077518109Z" level=info msg="Forcibly stopping sandbox \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\"" Jan 29 10:55:06.078402 containerd[1726]: time="2025-01-29T10:55:06.077807389Z" level=info msg="TearDown network for sandbox \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\" successfully" Jan 29 10:55:06.086606 containerd[1726]: time="2025-01-29T10:55:06.086581905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.087190 containerd[1726]: time="2025-01-29T10:55:06.087167145Z" level=info msg="RemovePodSandbox \"e71b4566256a089c7de865a60c2f8604e5979e4ac42bb619f5eabff728eb7dad\" returns successfully" Jan 29 10:55:06.087939 containerd[1726]: time="2025-01-29T10:55:06.087917184Z" level=info msg="StopPodSandbox for \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\"" Jan 29 10:55:06.088095 containerd[1726]: time="2025-01-29T10:55:06.088080064Z" level=info msg="TearDown network for sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" successfully" Jan 29 10:55:06.088261 containerd[1726]: time="2025-01-29T10:55:06.088244104Z" level=info msg="StopPodSandbox for \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" returns successfully" Jan 29 10:55:06.089829 containerd[1726]: time="2025-01-29T10:55:06.088597464Z" level=info msg="RemovePodSandbox for \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\"" Jan 29 10:55:06.089829 containerd[1726]: time="2025-01-29T10:55:06.088620384Z" level=info msg="Forcibly stopping sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\"" Jan 29 10:55:06.089829 containerd[1726]: time="2025-01-29T10:55:06.088676264Z" level=info msg="TearDown network for sandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" successfully" Jan 29 10:55:06.096049 containerd[1726]: time="2025-01-29T10:55:06.096024501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.096192 containerd[1726]: time="2025-01-29T10:55:06.096176621Z" level=info msg="RemovePodSandbox \"6186383689a032290051a3c79a3ba3c5d07974175379f4ce4a5478798af1bac2\" returns successfully" Jan 29 10:55:06.096583 containerd[1726]: time="2025-01-29T10:55:06.096562900Z" level=info msg="StopPodSandbox for \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\"" Jan 29 10:55:06.096820 containerd[1726]: time="2025-01-29T10:55:06.096804620Z" level=info msg="TearDown network for sandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\" successfully" Jan 29 10:55:06.096932 containerd[1726]: time="2025-01-29T10:55:06.096916740Z" level=info msg="StopPodSandbox for \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\" returns successfully" Jan 29 10:55:06.097277 containerd[1726]: time="2025-01-29T10:55:06.097256940Z" level=info msg="RemovePodSandbox for \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\"" Jan 29 10:55:06.097432 containerd[1726]: time="2025-01-29T10:55:06.097403220Z" level=info msg="Forcibly stopping sandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\"" Jan 29 10:55:06.097596 containerd[1726]: time="2025-01-29T10:55:06.097580300Z" level=info msg="TearDown network for sandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\" successfully" Jan 29 10:55:06.104919 containerd[1726]: time="2025-01-29T10:55:06.104891777Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.105068 containerd[1726]: time="2025-01-29T10:55:06.105050896Z" level=info msg="RemovePodSandbox \"cbadd46d495876552d61309881b131cfd6f757c5d49eedc3c102f91aa110a8a6\" returns successfully" Jan 29 10:55:06.105595 containerd[1726]: time="2025-01-29T10:55:06.105550016Z" level=info msg="StopPodSandbox for \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\"" Jan 29 10:55:06.105665 containerd[1726]: time="2025-01-29T10:55:06.105652776Z" level=info msg="TearDown network for sandbox \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\" successfully" Jan 29 10:55:06.105702 containerd[1726]: time="2025-01-29T10:55:06.105663776Z" level=info msg="StopPodSandbox for \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\" returns successfully" Jan 29 10:55:06.107055 containerd[1726]: time="2025-01-29T10:55:06.105940016Z" level=info msg="RemovePodSandbox for \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\"" Jan 29 10:55:06.107055 containerd[1726]: time="2025-01-29T10:55:06.105964576Z" level=info msg="Forcibly stopping sandbox \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\"" Jan 29 10:55:06.107055 containerd[1726]: time="2025-01-29T10:55:06.106025776Z" level=info msg="TearDown network for sandbox \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\" successfully" Jan 29 10:55:06.112760 containerd[1726]: time="2025-01-29T10:55:06.112680093Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.112760 containerd[1726]: time="2025-01-29T10:55:06.112732013Z" level=info msg="RemovePodSandbox \"2174c53e35b823f8f8a90343ee91895933eef46b070c12494246f1f47226021b\" returns successfully" Jan 29 10:55:06.113494 containerd[1726]: time="2025-01-29T10:55:06.113226413Z" level=info msg="StopPodSandbox for \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\"" Jan 29 10:55:06.113494 containerd[1726]: time="2025-01-29T10:55:06.113299373Z" level=info msg="TearDown network for sandbox \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\" successfully" Jan 29 10:55:06.113494 containerd[1726]: time="2025-01-29T10:55:06.113310613Z" level=info msg="StopPodSandbox for \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\" returns successfully" Jan 29 10:55:06.113959 containerd[1726]: time="2025-01-29T10:55:06.113801972Z" level=info msg="RemovePodSandbox for \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\"" Jan 29 10:55:06.113959 containerd[1726]: time="2025-01-29T10:55:06.113824852Z" level=info msg="Forcibly stopping sandbox \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\"" Jan 29 10:55:06.113959 containerd[1726]: time="2025-01-29T10:55:06.113916852Z" level=info msg="TearDown network for sandbox \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\" successfully" Jan 29 10:55:06.124069 containerd[1726]: time="2025-01-29T10:55:06.124011408Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.124069 containerd[1726]: time="2025-01-29T10:55:06.124064288Z" level=info msg="RemovePodSandbox \"59ead00b21021a5f681250d34096e6b55e9f916c930c4c35b232e96d740043b1\" returns successfully" Jan 29 10:55:06.124519 containerd[1726]: time="2025-01-29T10:55:06.124368688Z" level=info msg="StopPodSandbox for \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\"" Jan 29 10:55:06.124519 containerd[1726]: time="2025-01-29T10:55:06.124454208Z" level=info msg="TearDown network for sandbox \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\" successfully" Jan 29 10:55:06.124519 containerd[1726]: time="2025-01-29T10:55:06.124463408Z" level=info msg="StopPodSandbox for \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\" returns successfully" Jan 29 10:55:06.124728 containerd[1726]: time="2025-01-29T10:55:06.124687927Z" level=info msg="RemovePodSandbox for \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\"" Jan 29 10:55:06.124728 containerd[1726]: time="2025-01-29T10:55:06.124719807Z" level=info msg="Forcibly stopping sandbox \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\"" Jan 29 10:55:06.124811 containerd[1726]: time="2025-01-29T10:55:06.124781367Z" level=info msg="TearDown network for sandbox \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\" successfully" Jan 29 10:55:06.133164 containerd[1726]: time="2025-01-29T10:55:06.133121004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.133242 containerd[1726]: time="2025-01-29T10:55:06.133181764Z" level=info msg="RemovePodSandbox \"1f4fbd2b618fdb17e00f0dfa01fda148dc54cb925c22d6fab06145d99d9a4906\" returns successfully" Jan 29 10:55:06.133887 containerd[1726]: time="2025-01-29T10:55:06.133583163Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\"" Jan 29 10:55:06.133887 containerd[1726]: time="2025-01-29T10:55:06.133672483Z" level=info msg="TearDown network for sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" successfully" Jan 29 10:55:06.133887 containerd[1726]: time="2025-01-29T10:55:06.133681723Z" level=info msg="StopPodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" returns successfully" Jan 29 10:55:06.134223 containerd[1726]: time="2025-01-29T10:55:06.134201563Z" level=info msg="RemovePodSandbox for \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\"" Jan 29 10:55:06.134883 containerd[1726]: time="2025-01-29T10:55:06.134348323Z" level=info msg="Forcibly stopping sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\"" Jan 29 10:55:06.134883 containerd[1726]: time="2025-01-29T10:55:06.134418643Z" level=info msg="TearDown network for sandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" successfully" Jan 29 10:55:06.141822 containerd[1726]: time="2025-01-29T10:55:06.141753920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.141973 containerd[1726]: time="2025-01-29T10:55:06.141830600Z" level=info msg="RemovePodSandbox \"a6aff7136580e8af8090e8c7217847ceb6dd088350db7683fef01f84fcec381f\" returns successfully" Jan 29 10:55:06.142522 containerd[1726]: time="2025-01-29T10:55:06.142286559Z" level=info msg="StopPodSandbox for \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\"" Jan 29 10:55:06.142522 containerd[1726]: time="2025-01-29T10:55:06.142373039Z" level=info msg="TearDown network for sandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" successfully" Jan 29 10:55:06.142522 containerd[1726]: time="2025-01-29T10:55:06.142385679Z" level=info msg="StopPodSandbox for \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" returns successfully" Jan 29 10:55:06.142828 containerd[1726]: time="2025-01-29T10:55:06.142811879Z" level=info msg="RemovePodSandbox for \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\"" Jan 29 10:55:06.142938 containerd[1726]: time="2025-01-29T10:55:06.142922319Z" level=info msg="Forcibly stopping sandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\"" Jan 29 10:55:06.143096 containerd[1726]: time="2025-01-29T10:55:06.143049399Z" level=info msg="TearDown network for sandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" successfully" Jan 29 10:55:06.151598 containerd[1726]: time="2025-01-29T10:55:06.151551755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.151695 containerd[1726]: time="2025-01-29T10:55:06.151625715Z" level=info msg="RemovePodSandbox \"96f0f5e3935c51217f11f43b6bd7844524628e734e2f76dd813ec786f8af7203\" returns successfully" Jan 29 10:55:06.152210 containerd[1726]: time="2025-01-29T10:55:06.152028635Z" level=info msg="StopPodSandbox for \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\"" Jan 29 10:55:06.152210 containerd[1726]: time="2025-01-29T10:55:06.152127755Z" level=info msg="TearDown network for sandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\" successfully" Jan 29 10:55:06.152210 containerd[1726]: time="2025-01-29T10:55:06.152138275Z" level=info msg="StopPodSandbox for \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\" returns successfully" Jan 29 10:55:06.160641 containerd[1726]: time="2025-01-29T10:55:06.152441475Z" level=info msg="RemovePodSandbox for \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\"" Jan 29 10:55:06.161484 containerd[1726]: time="2025-01-29T10:55:06.160751431Z" level=info msg="Forcibly stopping sandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\"" Jan 29 10:55:06.161484 containerd[1726]: time="2025-01-29T10:55:06.160841511Z" level=info msg="TearDown network for sandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\" successfully" Jan 29 10:55:06.168990 containerd[1726]: time="2025-01-29T10:55:06.168945067Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.169058 containerd[1726]: time="2025-01-29T10:55:06.169032867Z" level=info msg="RemovePodSandbox \"1da189d849c652ecd0853bc40bac5a18f7da7ebf9c54e5573f7233fba61392ac\" returns successfully" Jan 29 10:55:06.169750 containerd[1726]: time="2025-01-29T10:55:06.169471707Z" level=info msg="StopPodSandbox for \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\"" Jan 29 10:55:06.169750 containerd[1726]: time="2025-01-29T10:55:06.169562867Z" level=info msg="TearDown network for sandbox \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\" successfully" Jan 29 10:55:06.169750 containerd[1726]: time="2025-01-29T10:55:06.169572267Z" level=info msg="StopPodSandbox for \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\" returns successfully" Jan 29 10:55:06.170266 containerd[1726]: time="2025-01-29T10:55:06.170122707Z" level=info msg="RemovePodSandbox for \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\"" Jan 29 10:55:06.170266 containerd[1726]: time="2025-01-29T10:55:06.170156067Z" level=info msg="Forcibly stopping sandbox \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\"" Jan 29 10:55:06.170266 containerd[1726]: time="2025-01-29T10:55:06.170217507Z" level=info msg="TearDown network for sandbox \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\" successfully" Jan 29 10:55:06.179526 containerd[1726]: time="2025-01-29T10:55:06.179467862Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.179526 containerd[1726]: time="2025-01-29T10:55:06.179523022Z" level=info msg="RemovePodSandbox \"9aeb13b5eea426c7ff2dd034baa7a8c39c1000624aa5708e8a6a9434dd745522\" returns successfully" Jan 29 10:55:06.179986 containerd[1726]: time="2025-01-29T10:55:06.179825942Z" level=info msg="StopPodSandbox for \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\"" Jan 29 10:55:06.179986 containerd[1726]: time="2025-01-29T10:55:06.179923582Z" level=info msg="TearDown network for sandbox \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\" successfully" Jan 29 10:55:06.179986 containerd[1726]: time="2025-01-29T10:55:06.179934462Z" level=info msg="StopPodSandbox for \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\" returns successfully" Jan 29 10:55:06.180197 containerd[1726]: time="2025-01-29T10:55:06.180153102Z" level=info msg="RemovePodSandbox for \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\"" Jan 29 10:55:06.180197 containerd[1726]: time="2025-01-29T10:55:06.180185382Z" level=info msg="Forcibly stopping sandbox \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\"" Jan 29 10:55:06.180254 containerd[1726]: time="2025-01-29T10:55:06.180246342Z" level=info msg="TearDown network for sandbox \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\" successfully" Jan 29 10:55:06.193719 containerd[1726]: time="2025-01-29T10:55:06.193630216Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.193719 containerd[1726]: time="2025-01-29T10:55:06.193714696Z" level=info msg="RemovePodSandbox \"e4667f46779f0d4e2b7149c564a5eda2f410fa262e5562c74b29871e1bcbba6f\" returns successfully" Jan 29 10:55:06.194328 containerd[1726]: time="2025-01-29T10:55:06.194191296Z" level=info msg="StopPodSandbox for \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\"" Jan 29 10:55:06.194328 containerd[1726]: time="2025-01-29T10:55:06.194271776Z" level=info msg="TearDown network for sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" successfully" Jan 29 10:55:06.194328 containerd[1726]: time="2025-01-29T10:55:06.194285016Z" level=info msg="StopPodSandbox for \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" returns successfully" Jan 29 10:55:06.194983 containerd[1726]: time="2025-01-29T10:55:06.194755575Z" level=info msg="RemovePodSandbox for \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\"" Jan 29 10:55:06.194983 containerd[1726]: time="2025-01-29T10:55:06.194798015Z" level=info msg="Forcibly stopping sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\"" Jan 29 10:55:06.194983 containerd[1726]: time="2025-01-29T10:55:06.194882695Z" level=info msg="TearDown network for sandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" successfully" Jan 29 10:55:06.209061 containerd[1726]: time="2025-01-29T10:55:06.208911329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.209310 containerd[1726]: time="2025-01-29T10:55:06.209211449Z" level=info msg="RemovePodSandbox \"ebfbb644b210235a47030fbcb2036e47df0817b3dbc1727ff1bc0235666ef122\" returns successfully" Jan 29 10:55:06.209869 containerd[1726]: time="2025-01-29T10:55:06.209829689Z" level=info msg="StopPodSandbox for \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\"" Jan 29 10:55:06.210486 containerd[1726]: time="2025-01-29T10:55:06.210287168Z" level=info msg="TearDown network for sandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\" successfully" Jan 29 10:55:06.210486 containerd[1726]: time="2025-01-29T10:55:06.210370688Z" level=info msg="StopPodSandbox for \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\" returns successfully" Jan 29 10:55:06.211135 containerd[1726]: time="2025-01-29T10:55:06.210994288Z" level=info msg="RemovePodSandbox for \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\"" Jan 29 10:55:06.211135 containerd[1726]: time="2025-01-29T10:55:06.211026048Z" level=info msg="Forcibly stopping sandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\"" Jan 29 10:55:06.211135 containerd[1726]: time="2025-01-29T10:55:06.211089128Z" level=info msg="TearDown network for sandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\" successfully" Jan 29 10:55:06.219935 containerd[1726]: time="2025-01-29T10:55:06.219896604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.219935 containerd[1726]: time="2025-01-29T10:55:06.219955084Z" level=info msg="RemovePodSandbox \"0298a92f67640cb3424295d696bf507fa745673f5cc02402680369131cad59a9\" returns successfully" Jan 29 10:55:06.222969 containerd[1726]: time="2025-01-29T10:55:06.220287004Z" level=info msg="StopPodSandbox for \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\"" Jan 29 10:55:06.222969 containerd[1726]: time="2025-01-29T10:55:06.220373444Z" level=info msg="TearDown network for sandbox \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\" successfully" Jan 29 10:55:06.222969 containerd[1726]: time="2025-01-29T10:55:06.220382324Z" level=info msg="StopPodSandbox for \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\" returns successfully" Jan 29 10:55:06.222969 containerd[1726]: time="2025-01-29T10:55:06.220605004Z" level=info msg="RemovePodSandbox for \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\"" Jan 29 10:55:06.222969 containerd[1726]: time="2025-01-29T10:55:06.220624844Z" level=info msg="Forcibly stopping sandbox \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\"" Jan 29 10:55:06.222969 containerd[1726]: time="2025-01-29T10:55:06.220679604Z" level=info msg="TearDown network for sandbox \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\" successfully" Jan 29 10:55:06.229193 containerd[1726]: time="2025-01-29T10:55:06.229131480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.229410 containerd[1726]: time="2025-01-29T10:55:06.229388880Z" level=info msg="RemovePodSandbox \"269e293135ffd0b32451929f28161fe90d28c985eb8789c745a84e2f573d096e\" returns successfully" Jan 29 10:55:06.229817 containerd[1726]: time="2025-01-29T10:55:06.229795159Z" level=info msg="StopPodSandbox for \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\"" Jan 29 10:55:06.230121 containerd[1726]: time="2025-01-29T10:55:06.230104719Z" level=info msg="TearDown network for sandbox \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\" successfully" Jan 29 10:55:06.230191 containerd[1726]: time="2025-01-29T10:55:06.230178199Z" level=info msg="StopPodSandbox for \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\" returns successfully" Jan 29 10:55:06.230524 containerd[1726]: time="2025-01-29T10:55:06.230504879Z" level=info msg="RemovePodSandbox for \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\"" Jan 29 10:55:06.230693 containerd[1726]: time="2025-01-29T10:55:06.230678159Z" level=info msg="Forcibly stopping sandbox \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\"" Jan 29 10:55:06.230848 containerd[1726]: time="2025-01-29T10:55:06.230832039Z" level=info msg="TearDown network for sandbox \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\" successfully" Jan 29 10:55:06.238656 containerd[1726]: time="2025-01-29T10:55:06.238618515Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.238824 containerd[1726]: time="2025-01-29T10:55:06.238806435Z" level=info msg="RemovePodSandbox \"e58aa06865e944cfac05c8f3b15472e0eec4422f2c439e3b641f154fe8c9eafc\" returns successfully" Jan 29 10:55:06.239287 containerd[1726]: time="2025-01-29T10:55:06.239266195Z" level=info msg="StopPodSandbox for \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\"" Jan 29 10:55:06.239609 containerd[1726]: time="2025-01-29T10:55:06.239590075Z" level=info msg="TearDown network for sandbox \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\" successfully" Jan 29 10:55:06.239688 containerd[1726]: time="2025-01-29T10:55:06.239673835Z" level=info msg="StopPodSandbox for \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\" returns successfully" Jan 29 10:55:06.240167 containerd[1726]: time="2025-01-29T10:55:06.240147515Z" level=info msg="RemovePodSandbox for \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\"" Jan 29 10:55:06.240330 containerd[1726]: time="2025-01-29T10:55:06.240296475Z" level=info msg="Forcibly stopping sandbox \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\"" Jan 29 10:55:06.240471 containerd[1726]: time="2025-01-29T10:55:06.240454955Z" level=info msg="TearDown network for sandbox \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\" successfully" Jan 29 10:55:06.253514 containerd[1726]: time="2025-01-29T10:55:06.253463349Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.253734 containerd[1726]: time="2025-01-29T10:55:06.253715629Z" level=info msg="RemovePodSandbox \"87c5890e9df47bedb496d2ef0835a9fd1b01a326fe6fc0bced28dd22cef95041\" returns successfully" Jan 29 10:55:06.254279 containerd[1726]: time="2025-01-29T10:55:06.254248948Z" level=info msg="StopPodSandbox for \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\"" Jan 29 10:55:06.254401 containerd[1726]: time="2025-01-29T10:55:06.254353588Z" level=info msg="TearDown network for sandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" successfully" Jan 29 10:55:06.254401 containerd[1726]: time="2025-01-29T10:55:06.254371668Z" level=info msg="StopPodSandbox for \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" returns successfully" Jan 29 10:55:06.255888 containerd[1726]: time="2025-01-29T10:55:06.254707668Z" level=info msg="RemovePodSandbox for \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\"" Jan 29 10:55:06.255888 containerd[1726]: time="2025-01-29T10:55:06.254735068Z" level=info msg="Forcibly stopping sandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\"" Jan 29 10:55:06.255888 containerd[1726]: time="2025-01-29T10:55:06.254799108Z" level=info msg="TearDown network for sandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" successfully" Jan 29 10:55:06.263585 containerd[1726]: time="2025-01-29T10:55:06.263550944Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.263721 containerd[1726]: time="2025-01-29T10:55:06.263705264Z" level=info msg="RemovePodSandbox \"00c1fd23a419272f87d3dad7fa8d8494dfacfdd456dc3f5fb223b34f7fd4f34d\" returns successfully" Jan 29 10:55:06.264196 containerd[1726]: time="2025-01-29T10:55:06.264167064Z" level=info msg="StopPodSandbox for \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\"" Jan 29 10:55:06.264271 containerd[1726]: time="2025-01-29T10:55:06.264258184Z" level=info msg="TearDown network for sandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\" successfully" Jan 29 10:55:06.264310 containerd[1726]: time="2025-01-29T10:55:06.264272904Z" level=info msg="StopPodSandbox for \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\" returns successfully" Jan 29 10:55:06.264892 containerd[1726]: time="2025-01-29T10:55:06.264586544Z" level=info msg="RemovePodSandbox for \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\"" Jan 29 10:55:06.264892 containerd[1726]: time="2025-01-29T10:55:06.264620664Z" level=info msg="Forcibly stopping sandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\"" Jan 29 10:55:06.264892 containerd[1726]: time="2025-01-29T10:55:06.264681704Z" level=info msg="TearDown network for sandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\" successfully" Jan 29 10:55:06.273583 containerd[1726]: time="2025-01-29T10:55:06.273548579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.273843 containerd[1726]: time="2025-01-29T10:55:06.273742139Z" level=info msg="RemovePodSandbox \"5b596d72473263728a2f519aa2641c394dad40a3c60baa8c2a4f5a8373da770d\" returns successfully" Jan 29 10:55:06.274439 containerd[1726]: time="2025-01-29T10:55:06.274258099Z" level=info msg="StopPodSandbox for \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\"" Jan 29 10:55:06.274439 containerd[1726]: time="2025-01-29T10:55:06.274362299Z" level=info msg="TearDown network for sandbox \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\" successfully" Jan 29 10:55:06.274439 containerd[1726]: time="2025-01-29T10:55:06.274372899Z" level=info msg="StopPodSandbox for \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\" returns successfully" Jan 29 10:55:06.274802 containerd[1726]: time="2025-01-29T10:55:06.274682979Z" level=info msg="RemovePodSandbox for \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\"" Jan 29 10:55:06.274802 containerd[1726]: time="2025-01-29T10:55:06.274741499Z" level=info msg="Forcibly stopping sandbox \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\"" Jan 29 10:55:06.275894 containerd[1726]: time="2025-01-29T10:55:06.274958899Z" level=info msg="TearDown network for sandbox \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\" successfully" Jan 29 10:55:06.285793 containerd[1726]: time="2025-01-29T10:55:06.285692054Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 10:55:06.285984 containerd[1726]: time="2025-01-29T10:55:06.285964134Z" level=info msg="RemovePodSandbox \"8c26899042b83b352928de05f2ec2c29b980e8fb03783fbfe30806769caab7d2\" returns successfully"
Jan 29 10:55:06.288310 containerd[1726]: time="2025-01-29T10:55:06.286437374Z" level=info msg="StopPodSandbox for \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\""
Jan 29 10:55:06.288310 containerd[1726]: time="2025-01-29T10:55:06.288253853Z" level=info msg="TearDown network for sandbox \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\" successfully"
Jan 29 10:55:06.288310 containerd[1726]: time="2025-01-29T10:55:06.288272253Z" level=info msg="StopPodSandbox for \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\" returns successfully"
Jan 29 10:55:06.288877 containerd[1726]: time="2025-01-29T10:55:06.288843172Z" level=info msg="RemovePodSandbox for \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\""
Jan 29 10:55:06.288998 containerd[1726]: time="2025-01-29T10:55:06.288981812Z" level=info msg="Forcibly stopping sandbox \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\""
Jan 29 10:55:06.289118 containerd[1726]: time="2025-01-29T10:55:06.289102372Z" level=info msg="TearDown network for sandbox \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\" successfully"
Jan 29 10:55:06.297351 containerd[1726]: time="2025-01-29T10:55:06.297322489Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:55:06.297585 containerd[1726]: time="2025-01-29T10:55:06.297468329Z" level=info msg="RemovePodSandbox \"69f718f40398f9fe1ce14d98272e45ef76dbadfb1d0b6793b770a5749d872d4f\" returns successfully"
Jan 29 10:55:06.297986 containerd[1726]: time="2025-01-29T10:55:06.297955768Z" level=info msg="StopPodSandbox for \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\""
Jan 29 10:55:06.298076 containerd[1726]: time="2025-01-29T10:55:06.298054288Z" level=info msg="TearDown network for sandbox \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\" successfully"
Jan 29 10:55:06.298076 containerd[1726]: time="2025-01-29T10:55:06.298071408Z" level=info msg="StopPodSandbox for \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\" returns successfully"
Jan 29 10:55:06.298468 containerd[1726]: time="2025-01-29T10:55:06.298438768Z" level=info msg="RemovePodSandbox for \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\""
Jan 29 10:55:06.298525 containerd[1726]: time="2025-01-29T10:55:06.298474048Z" level=info msg="Forcibly stopping sandbox \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\""
Jan 29 10:55:06.298552 containerd[1726]: time="2025-01-29T10:55:06.298536208Z" level=info msg="TearDown network for sandbox \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\" successfully"
Jan 29 10:55:06.307329 containerd[1726]: time="2025-01-29T10:55:06.307284124Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:55:06.307422 containerd[1726]: time="2025-01-29T10:55:06.307359884Z" level=info msg="RemovePodSandbox \"3a4dd8faa39567c07d60703c389ee54e0910ae147cd81800b0225f37c148ce92\" returns successfully"
Jan 29 10:55:06.307813 containerd[1726]: time="2025-01-29T10:55:06.307780764Z" level=info msg="StopPodSandbox for \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\""
Jan 29 10:55:06.308029 containerd[1726]: time="2025-01-29T10:55:06.307991884Z" level=info msg="TearDown network for sandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" successfully"
Jan 29 10:55:06.308029 containerd[1726]: time="2025-01-29T10:55:06.308010564Z" level=info msg="StopPodSandbox for \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" returns successfully"
Jan 29 10:55:06.309668 containerd[1726]: time="2025-01-29T10:55:06.308337804Z" level=info msg="RemovePodSandbox for \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\""
Jan 29 10:55:06.309668 containerd[1726]: time="2025-01-29T10:55:06.308366804Z" level=info msg="Forcibly stopping sandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\""
Jan 29 10:55:06.309668 containerd[1726]: time="2025-01-29T10:55:06.308443524Z" level=info msg="TearDown network for sandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" successfully"
Jan 29 10:55:06.317148 containerd[1726]: time="2025-01-29T10:55:06.317097560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:55:06.317230 containerd[1726]: time="2025-01-29T10:55:06.317193240Z" level=info msg="RemovePodSandbox \"3828424a73258ba419f2693e8cb17c34fe0702068614a610c8348e9c73c85d5e\" returns successfully"
Jan 29 10:55:06.317827 containerd[1726]: time="2025-01-29T10:55:06.317641239Z" level=info msg="StopPodSandbox for \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\""
Jan 29 10:55:06.317827 containerd[1726]: time="2025-01-29T10:55:06.317751479Z" level=info msg="TearDown network for sandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\" successfully"
Jan 29 10:55:06.317827 containerd[1726]: time="2025-01-29T10:55:06.317761959Z" level=info msg="StopPodSandbox for \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\" returns successfully"
Jan 29 10:55:06.318887 containerd[1726]: time="2025-01-29T10:55:06.318109359Z" level=info msg="RemovePodSandbox for \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\""
Jan 29 10:55:06.318887 containerd[1726]: time="2025-01-29T10:55:06.318172279Z" level=info msg="Forcibly stopping sandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\""
Jan 29 10:55:06.318887 containerd[1726]: time="2025-01-29T10:55:06.318237279Z" level=info msg="TearDown network for sandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\" successfully"
Jan 29 10:55:06.327014 containerd[1726]: time="2025-01-29T10:55:06.326963675Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:55:06.327129 containerd[1726]: time="2025-01-29T10:55:06.327061275Z" level=info msg="RemovePodSandbox \"733b830d0fb981c6b27c143d6c8244a68f238625a85f6dc3b92648f512ad8f94\" returns successfully"
Jan 29 10:55:06.327546 containerd[1726]: time="2025-01-29T10:55:06.327519875Z" level=info msg="StopPodSandbox for \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\""
Jan 29 10:55:06.327959 containerd[1726]: time="2025-01-29T10:55:06.327832355Z" level=info msg="TearDown network for sandbox \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\" successfully"
Jan 29 10:55:06.327959 containerd[1726]: time="2025-01-29T10:55:06.327850275Z" level=info msg="StopPodSandbox for \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\" returns successfully"
Jan 29 10:55:06.328964 containerd[1726]: time="2025-01-29T10:55:06.328204074Z" level=info msg="RemovePodSandbox for \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\""
Jan 29 10:55:06.328964 containerd[1726]: time="2025-01-29T10:55:06.328239314Z" level=info msg="Forcibly stopping sandbox \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\""
Jan 29 10:55:06.328964 containerd[1726]: time="2025-01-29T10:55:06.328301594Z" level=info msg="TearDown network for sandbox \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\" successfully"
Jan 29 10:55:06.339034 containerd[1726]: time="2025-01-29T10:55:06.338979150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:55:06.339107 containerd[1726]: time="2025-01-29T10:55:06.339073110Z" level=info msg="RemovePodSandbox \"717095d7774bcee8af66a26026062bcf1025503e44cafbf1f35c9a7ce9cac56c\" returns successfully"
Jan 29 10:55:06.339763 containerd[1726]: time="2025-01-29T10:55:06.339535749Z" level=info msg="StopPodSandbox for \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\""
Jan 29 10:55:06.339763 containerd[1726]: time="2025-01-29T10:55:06.339648829Z" level=info msg="TearDown network for sandbox \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\" successfully"
Jan 29 10:55:06.339763 containerd[1726]: time="2025-01-29T10:55:06.339658749Z" level=info msg="StopPodSandbox for \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\" returns successfully"
Jan 29 10:55:06.341452 containerd[1726]: time="2025-01-29T10:55:06.340136029Z" level=info msg="RemovePodSandbox for \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\""
Jan 29 10:55:06.341452 containerd[1726]: time="2025-01-29T10:55:06.340236109Z" level=info msg="Forcibly stopping sandbox \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\""
Jan 29 10:55:06.341452 containerd[1726]: time="2025-01-29T10:55:06.340316509Z" level=info msg="TearDown network for sandbox \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\" successfully"
Jan 29 10:55:06.351569 containerd[1726]: time="2025-01-29T10:55:06.351481624Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:55:06.351704 containerd[1726]: time="2025-01-29T10:55:06.351598424Z" level=info msg="RemovePodSandbox \"70ae0d3adc62c5cb9b275d630eb48630a21dd7caa91ca6f299ece51c4e816d28\" returns successfully"
Jan 29 10:55:06.352218 containerd[1726]: time="2025-01-29T10:55:06.352187904Z" level=info msg="StopPodSandbox for \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\""
Jan 29 10:55:06.352320 containerd[1726]: time="2025-01-29T10:55:06.352298343Z" level=info msg="TearDown network for sandbox \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\" successfully"
Jan 29 10:55:06.352320 containerd[1726]: time="2025-01-29T10:55:06.352314743Z" level=info msg="StopPodSandbox for \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\" returns successfully"
Jan 29 10:55:06.353941 containerd[1726]: time="2025-01-29T10:55:06.352603063Z" level=info msg="RemovePodSandbox for \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\""
Jan 29 10:55:06.353941 containerd[1726]: time="2025-01-29T10:55:06.352633943Z" level=info msg="Forcibly stopping sandbox \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\""
Jan 29 10:55:06.353941 containerd[1726]: time="2025-01-29T10:55:06.352701543Z" level=info msg="TearDown network for sandbox \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\" successfully"
Jan 29 10:55:06.362242 containerd[1726]: time="2025-01-29T10:55:06.362089179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 10:55:06.362242 containerd[1726]: time="2025-01-29T10:55:06.362157619Z" level=info msg="RemovePodSandbox \"aea11fd5a4cdb8e67b6279402ffd88e6270ebbcfb5ff8cdb6379eb85e84f0170\" returns successfully"
Jan 29 10:55:13.242093 kubelet[3366]: I0129 10:55:13.242025 3366 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 10:55:35.449354 systemd[1]: run-containerd-runc-k8s.io-c9fc0e0d6e2c4c62d29cb365f0c0ce6970c60b56356d421037854e713d2b29d5-runc.KV9r0A.mount: Deactivated successfully.
Jan 29 10:55:50.951998 systemd[1]: run-containerd-runc-k8s.io-505ae507e2def43c029872c22eb244d8832b61b438210b48831162af3e0c5de3-runc.9w5i5h.mount: Deactivated successfully.
Jan 29 10:56:02.409096 systemd[1]: Started sshd@7-10.200.20.37:22-10.200.16.10:41984.service - OpenSSH per-connection server daemon (10.200.16.10:41984).
Jan 29 10:56:02.842475 sshd[6307]: Accepted publickey for core from 10.200.16.10 port 41984 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:02.844571 sshd-session[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:02.848304 systemd-logind[1710]: New session 10 of user core.
Jan 29 10:56:02.859072 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 10:56:03.238913 sshd[6309]: Connection closed by 10.200.16.10 port 41984
Jan 29 10:56:03.238729 sshd-session[6307]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:03.242269 systemd[1]: sshd@7-10.200.20.37:22-10.200.16.10:41984.service: Deactivated successfully.
Jan 29 10:56:03.244001 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 10:56:03.244653 systemd-logind[1710]: Session 10 logged out. Waiting for processes to exit.
Jan 29 10:56:03.245615 systemd-logind[1710]: Removed session 10.
Jan 29 10:56:08.324102 systemd[1]: Started sshd@8-10.200.20.37:22-10.200.16.10:58772.service - OpenSSH per-connection server daemon (10.200.16.10:58772).
Jan 29 10:56:08.751321 sshd[6343]: Accepted publickey for core from 10.200.16.10 port 58772 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:08.753960 sshd-session[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:08.758682 systemd-logind[1710]: New session 11 of user core.
Jan 29 10:56:08.765001 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 10:56:09.141590 sshd[6345]: Connection closed by 10.200.16.10 port 58772
Jan 29 10:56:09.141039 sshd-session[6343]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:09.145073 systemd[1]: sshd@8-10.200.20.37:22-10.200.16.10:58772.service: Deactivated successfully.
Jan 29 10:56:09.147277 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 10:56:09.151578 systemd-logind[1710]: Session 11 logged out. Waiting for processes to exit.
Jan 29 10:56:09.152385 systemd-logind[1710]: Removed session 11.
Jan 29 10:56:14.218338 systemd[1]: Started sshd@9-10.200.20.37:22-10.200.16.10:58788.service - OpenSSH per-connection server daemon (10.200.16.10:58788).
Jan 29 10:56:14.645709 sshd[6367]: Accepted publickey for core from 10.200.16.10 port 58788 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:14.647051 sshd-session[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:14.650994 systemd-logind[1710]: New session 12 of user core.
Jan 29 10:56:14.659012 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 10:56:15.029813 sshd[6369]: Connection closed by 10.200.16.10 port 58788
Jan 29 10:56:15.029719 sshd-session[6367]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:15.032986 systemd[1]: sshd@9-10.200.20.37:22-10.200.16.10:58788.service: Deactivated successfully.
Jan 29 10:56:15.036385 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 10:56:15.037377 systemd-logind[1710]: Session 12 logged out. Waiting for processes to exit.
Jan 29 10:56:15.038630 systemd-logind[1710]: Removed session 12.
Jan 29 10:56:15.112207 systemd[1]: Started sshd@10-10.200.20.37:22-10.200.16.10:58804.service - OpenSSH per-connection server daemon (10.200.16.10:58804).
Jan 29 10:56:15.540928 sshd[6381]: Accepted publickey for core from 10.200.16.10 port 58804 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:15.542591 sshd-session[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:15.546178 systemd-logind[1710]: New session 13 of user core.
Jan 29 10:56:15.555997 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 10:56:15.960368 sshd[6383]: Connection closed by 10.200.16.10 port 58804
Jan 29 10:56:15.960618 sshd-session[6381]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:15.964717 systemd[1]: sshd@10-10.200.20.37:22-10.200.16.10:58804.service: Deactivated successfully.
Jan 29 10:56:15.966433 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 10:56:15.967065 systemd-logind[1710]: Session 13 logged out. Waiting for processes to exit.
Jan 29 10:56:15.968045 systemd-logind[1710]: Removed session 13.
Jan 29 10:56:16.037803 systemd[1]: Started sshd@11-10.200.20.37:22-10.200.16.10:48998.service - OpenSSH per-connection server daemon (10.200.16.10:48998).
Jan 29 10:56:16.467996 sshd[6391]: Accepted publickey for core from 10.200.16.10 port 48998 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:16.469457 sshd-session[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:16.474261 systemd-logind[1710]: New session 14 of user core.
Jan 29 10:56:16.479059 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 10:56:16.850817 sshd[6393]: Connection closed by 10.200.16.10 port 48998
Jan 29 10:56:16.851402 sshd-session[6391]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:16.855230 systemd[1]: sshd@11-10.200.20.37:22-10.200.16.10:48998.service: Deactivated successfully.
Jan 29 10:56:16.857057 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 10:56:16.857766 systemd-logind[1710]: Session 14 logged out. Waiting for processes to exit.
Jan 29 10:56:16.858677 systemd-logind[1710]: Removed session 14.
Jan 29 10:56:21.928956 systemd[1]: Started sshd@12-10.200.20.37:22-10.200.16.10:49012.service - OpenSSH per-connection server daemon (10.200.16.10:49012).
Jan 29 10:56:22.358349 sshd[6429]: Accepted publickey for core from 10.200.16.10 port 49012 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:22.359679 sshd-session[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:22.363436 systemd-logind[1710]: New session 15 of user core.
Jan 29 10:56:22.371054 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 10:56:22.756453 sshd[6431]: Connection closed by 10.200.16.10 port 49012
Jan 29 10:56:22.757062 sshd-session[6429]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:22.760463 systemd[1]: sshd@12-10.200.20.37:22-10.200.16.10:49012.service: Deactivated successfully.
Jan 29 10:56:22.762689 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 10:56:22.765395 systemd-logind[1710]: Session 15 logged out. Waiting for processes to exit.
Jan 29 10:56:22.766557 systemd-logind[1710]: Removed session 15.
Jan 29 10:56:27.834969 systemd[1]: Started sshd@13-10.200.20.37:22-10.200.16.10:32996.service - OpenSSH per-connection server daemon (10.200.16.10:32996).
Jan 29 10:56:28.263371 sshd[6459]: Accepted publickey for core from 10.200.16.10 port 32996 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:28.264695 sshd-session[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:28.268738 systemd-logind[1710]: New session 16 of user core.
Jan 29 10:56:28.274999 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 10:56:28.647762 sshd[6461]: Connection closed by 10.200.16.10 port 32996
Jan 29 10:56:28.648491 sshd-session[6459]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:28.651958 systemd[1]: sshd@13-10.200.20.37:22-10.200.16.10:32996.service: Deactivated successfully.
Jan 29 10:56:28.653733 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 10:56:28.655426 systemd-logind[1710]: Session 16 logged out. Waiting for processes to exit.
Jan 29 10:56:28.656815 systemd-logind[1710]: Removed session 16.
Jan 29 10:56:28.741151 systemd[1]: Started sshd@14-10.200.20.37:22-10.200.16.10:33006.service - OpenSSH per-connection server daemon (10.200.16.10:33006).
Jan 29 10:56:29.174884 sshd[6471]: Accepted publickey for core from 10.200.16.10 port 33006 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:29.176369 sshd-session[6471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:29.181028 systemd-logind[1710]: New session 17 of user core.
Jan 29 10:56:29.185009 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 10:56:29.893549 sshd[6473]: Connection closed by 10.200.16.10 port 33006
Jan 29 10:56:29.894598 sshd-session[6471]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:29.898300 systemd-logind[1710]: Session 17 logged out. Waiting for processes to exit.
Jan 29 10:56:29.898722 systemd[1]: sshd@14-10.200.20.37:22-10.200.16.10:33006.service: Deactivated successfully.
Jan 29 10:56:29.901068 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 10:56:29.902274 systemd-logind[1710]: Removed session 17.
Jan 29 10:56:29.977356 systemd[1]: Started sshd@15-10.200.20.37:22-10.200.16.10:33010.service - OpenSSH per-connection server daemon (10.200.16.10:33010).
Jan 29 10:56:30.423010 sshd[6482]: Accepted publickey for core from 10.200.16.10 port 33010 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:30.426035 sshd-session[6482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:30.430022 systemd-logind[1710]: New session 18 of user core.
Jan 29 10:56:30.438017 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 10:56:32.481906 sshd[6484]: Connection closed by 10.200.16.10 port 33010
Jan 29 10:56:32.482645 sshd-session[6482]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:32.486257 systemd[1]: sshd@15-10.200.20.37:22-10.200.16.10:33010.service: Deactivated successfully.
Jan 29 10:56:32.488652 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 10:56:32.490172 systemd-logind[1710]: Session 18 logged out. Waiting for processes to exit.
Jan 29 10:56:32.491109 systemd-logind[1710]: Removed session 18.
Jan 29 10:56:32.568136 systemd[1]: Started sshd@16-10.200.20.37:22-10.200.16.10:33016.service - OpenSSH per-connection server daemon (10.200.16.10:33016).
Jan 29 10:56:32.995542 sshd[6500]: Accepted publickey for core from 10.200.16.10 port 33016 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:32.997367 sshd-session[6500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:33.002560 systemd-logind[1710]: New session 19 of user core.
Jan 29 10:56:33.011011 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 10:56:33.501904 sshd[6502]: Connection closed by 10.200.16.10 port 33016
Jan 29 10:56:33.502527 sshd-session[6500]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:33.505996 systemd-logind[1710]: Session 19 logged out. Waiting for processes to exit.
Jan 29 10:56:33.506814 systemd[1]: sshd@16-10.200.20.37:22-10.200.16.10:33016.service: Deactivated successfully.
Jan 29 10:56:33.510311 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 10:56:33.511522 systemd-logind[1710]: Removed session 19.
Jan 29 10:56:33.589174 systemd[1]: Started sshd@17-10.200.20.37:22-10.200.16.10:33030.service - OpenSSH per-connection server daemon (10.200.16.10:33030).
Jan 29 10:56:34.015139 sshd[6511]: Accepted publickey for core from 10.200.16.10 port 33030 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:34.016490 sshd-session[6511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:34.021913 systemd-logind[1710]: New session 20 of user core.
Jan 29 10:56:34.026042 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 10:56:34.394996 sshd[6513]: Connection closed by 10.200.16.10 port 33030
Jan 29 10:56:34.394470 sshd-session[6511]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:34.397430 systemd[1]: sshd@17-10.200.20.37:22-10.200.16.10:33030.service: Deactivated successfully.
Jan 29 10:56:34.399327 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 10:56:34.401152 systemd-logind[1710]: Session 20 logged out. Waiting for processes to exit.
Jan 29 10:56:34.402371 systemd-logind[1710]: Removed session 20.
Jan 29 10:56:39.471923 systemd[1]: Started sshd@18-10.200.20.37:22-10.200.16.10:37976.service - OpenSSH per-connection server daemon (10.200.16.10:37976).
Jan 29 10:56:39.900474 sshd[6545]: Accepted publickey for core from 10.200.16.10 port 37976 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:39.901765 sshd-session[6545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:39.906111 systemd-logind[1710]: New session 21 of user core.
Jan 29 10:56:39.910012 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 10:56:40.298299 sshd[6547]: Connection closed by 10.200.16.10 port 37976
Jan 29 10:56:40.298848 sshd-session[6545]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:40.302975 systemd[1]: sshd@18-10.200.20.37:22-10.200.16.10:37976.service: Deactivated successfully.
Jan 29 10:56:40.304784 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 10:56:40.305528 systemd-logind[1710]: Session 21 logged out. Waiting for processes to exit.
Jan 29 10:56:40.306613 systemd-logind[1710]: Removed session 21.
Jan 29 10:56:45.375557 systemd[1]: Started sshd@19-10.200.20.37:22-10.200.16.10:37986.service - OpenSSH per-connection server daemon (10.200.16.10:37986).
Jan 29 10:56:45.801108 sshd[6559]: Accepted publickey for core from 10.200.16.10 port 37986 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:45.802810 sshd-session[6559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:45.806892 systemd-logind[1710]: New session 22 of user core.
Jan 29 10:56:45.810002 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 10:56:46.183970 sshd[6561]: Connection closed by 10.200.16.10 port 37986
Jan 29 10:56:46.184567 sshd-session[6559]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:46.188162 systemd[1]: sshd@19-10.200.20.37:22-10.200.16.10:37986.service: Deactivated successfully.
Jan 29 10:56:46.189844 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 10:56:46.190586 systemd-logind[1710]: Session 22 logged out. Waiting for processes to exit.
Jan 29 10:56:46.191904 systemd-logind[1710]: Removed session 22.
Jan 29 10:56:50.943425 systemd[1]: run-containerd-runc-k8s.io-505ae507e2def43c029872c22eb244d8832b61b438210b48831162af3e0c5de3-runc.DbveYZ.mount: Deactivated successfully.
Jan 29 10:56:51.261657 systemd[1]: Started sshd@20-10.200.20.37:22-10.200.16.10:36754.service - OpenSSH per-connection server daemon (10.200.16.10:36754).
Jan 29 10:56:51.691512 sshd[6616]: Accepted publickey for core from 10.200.16.10 port 36754 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:51.692827 sshd-session[6616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:51.696460 systemd-logind[1710]: New session 23 of user core.
Jan 29 10:56:51.707006 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 10:56:52.079077 sshd[6618]: Connection closed by 10.200.16.10 port 36754
Jan 29 10:56:52.078983 sshd-session[6616]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:52.081665 systemd[1]: sshd@20-10.200.20.37:22-10.200.16.10:36754.service: Deactivated successfully.
Jan 29 10:56:52.084079 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 10:56:52.085449 systemd-logind[1710]: Session 23 logged out. Waiting for processes to exit.
Jan 29 10:56:52.086568 systemd-logind[1710]: Removed session 23.
Jan 29 10:56:57.159091 systemd[1]: Started sshd@21-10.200.20.37:22-10.200.16.10:53210.service - OpenSSH per-connection server daemon (10.200.16.10:53210).
Jan 29 10:56:57.582938 sshd[6629]: Accepted publickey for core from 10.200.16.10 port 53210 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:56:57.584329 sshd-session[6629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:56:57.590818 systemd-logind[1710]: New session 24 of user core.
Jan 29 10:56:57.595092 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 10:56:57.966704 sshd[6634]: Connection closed by 10.200.16.10 port 53210
Jan 29 10:56:57.966537 sshd-session[6629]: pam_unix(sshd:session): session closed for user core
Jan 29 10:56:57.969546 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 10:56:57.970252 systemd-logind[1710]: Session 24 logged out. Waiting for processes to exit.
Jan 29 10:56:57.970544 systemd[1]: sshd@21-10.200.20.37:22-10.200.16.10:53210.service: Deactivated successfully.
Jan 29 10:56:57.973744 systemd-logind[1710]: Removed session 24.
Jan 29 10:57:03.045889 systemd[1]: Started sshd@22-10.200.20.37:22-10.200.16.10:53222.service - OpenSSH per-connection server daemon (10.200.16.10:53222).
Jan 29 10:57:03.475633 sshd[6646]: Accepted publickey for core from 10.200.16.10 port 53222 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:57:03.476800 sshd-session[6646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:57:03.485128 systemd-logind[1710]: New session 25 of user core.
Jan 29 10:57:03.492125 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 10:57:03.860685 sshd[6648]: Connection closed by 10.200.16.10 port 53222
Jan 29 10:57:03.860587 sshd-session[6646]: pam_unix(sshd:session): session closed for user core
Jan 29 10:57:03.864290 systemd[1]: sshd@22-10.200.20.37:22-10.200.16.10:53222.service: Deactivated successfully.
Jan 29 10:57:03.867754 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 10:57:03.869394 systemd-logind[1710]: Session 25 logged out. Waiting for processes to exit.
Jan 29 10:57:03.870362 systemd-logind[1710]: Removed session 25.
Jan 29 10:57:08.952187 systemd[1]: Started sshd@23-10.200.20.37:22-10.200.16.10:40912.service - OpenSSH per-connection server daemon (10.200.16.10:40912).
Jan 29 10:57:09.379420 sshd[6679]: Accepted publickey for core from 10.200.16.10 port 40912 ssh2: RSA SHA256:KlqcmS58HAsEcZvkCNNoVLavNd4HuqXgUMbsyiVnGr0
Jan 29 10:57:09.380759 sshd-session[6679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 10:57:09.384563 systemd-logind[1710]: New session 26 of user core.
Jan 29 10:57:09.388033 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 10:57:09.758482 sshd[6681]: Connection closed by 10.200.16.10 port 40912
Jan 29 10:57:09.759111 sshd-session[6679]: pam_unix(sshd:session): session closed for user core
Jan 29 10:57:09.762714 systemd[1]: sshd@23-10.200.20.37:22-10.200.16.10:40912.service: Deactivated successfully.
Jan 29 10:57:09.764364 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 10:57:09.765447 systemd-logind[1710]: Session 26 logged out. Waiting for processes to exit.
Jan 29 10:57:09.766448 systemd-logind[1710]: Removed session 26.