Jun 25 18:31:38.324772 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 25 18:31:38.324794 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Jun 25 17:19:03 -00 2024 Jun 25 18:31:38.324803 kernel: KASLR enabled Jun 25 18:31:38.324811 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jun 25 18:31:38.324816 kernel: printk: bootconsole [pl11] enabled Jun 25 18:31:38.324822 kernel: efi: EFI v2.7 by EDK II Jun 25 18:31:38.324829 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x3e925e18 Jun 25 18:31:38.324835 kernel: random: crng init done Jun 25 18:31:38.324841 kernel: ACPI: Early table checksum verification disabled Jun 25 18:31:38.324847 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Jun 25 18:31:38.324854 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324860 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324867 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 18:31:38.324873 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324881 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324887 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324894 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324902 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324909 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324915 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jun 25 18:31:38.324921 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324928 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jun 25 18:31:38.324934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jun 25 18:31:38.324941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jun 25 18:31:38.324947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jun 25 18:31:38.324953 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jun 25 18:31:38.324960 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jun 25 18:31:38.324967 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jun 25 18:31:38.324975 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jun 25 18:31:38.324981 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jun 25 18:31:38.324987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jun 25 18:31:38.324994 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jun 25 18:31:38.325000 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jun 25 18:31:38.325006 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jun 25 18:31:38.325013 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Jun 25 18:31:38.325019 kernel: Zone ranges: Jun 25 18:31:38.325025 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Jun 25 18:31:38.325031 kernel: DMA32 empty Jun 25 18:31:38.325038 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 18:31:38.325046 kernel: Movable zone start for each node Jun 25 18:31:38.325055 kernel: Early memory node ranges Jun 25 18:31:38.325061 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jun 25 18:31:38.325068 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Jun 25 18:31:38.325075 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Jun 25 18:31:38.325083 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Jun 25 18:31:38.325090 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Jun 25 18:31:38.325097 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Jun 25 18:31:38.325104 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Jun 25 18:31:38.325110 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Jun 25 18:31:38.327151 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 18:31:38.327187 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jun 25 18:31:38.327195 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jun 25 18:31:38.327202 kernel: psci: probing for conduit method from ACPI. Jun 25 18:31:38.327210 kernel: psci: PSCIv1.1 detected in firmware. Jun 25 18:31:38.327216 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 18:31:38.327223 kernel: psci: MIGRATE_INFO_TYPE not supported. Jun 25 18:31:38.327238 kernel: psci: SMC Calling Convention v1.4 Jun 25 18:31:38.327245 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jun 25 18:31:38.327252 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jun 25 18:31:38.327259 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jun 25 18:31:38.327265 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jun 25 18:31:38.327273 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 25 18:31:38.327280 kernel: Detected PIPT I-cache on CPU0 Jun 25 18:31:38.327286 kernel: CPU features: detected: GIC system register CPU interface Jun 25 18:31:38.327293 kernel: CPU features: detected: Hardware dirty bit management Jun 25 18:31:38.327300 kernel: CPU features: detected: Spectre-BHB Jun 25 18:31:38.327307 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 25 18:31:38.327314 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 25 18:31:38.327322 kernel: CPU features: detected: ARM erratum 1418040 Jun 25 18:31:38.327329 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jun 25 18:31:38.327336 kernel: alternatives: applying boot alternatives Jun 25 18:31:38.327345 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f Jun 25 18:31:38.327352 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 25 18:31:38.327359 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:31:38.327366 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:31:38.327373 kernel: Fallback order for Node 0: 0 Jun 25 18:31:38.327380 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jun 25 18:31:38.327387 kernel: Policy zone: Normal Jun 25 18:31:38.327395 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:31:38.327402 kernel: software IO TLB: area num 2. Jun 25 18:31:38.327409 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB) Jun 25 18:31:38.327416 kernel: Memory: 3986332K/4194160K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 207828K reserved, 0K cma-reserved) Jun 25 18:31:38.327423 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:31:38.327430 kernel: trace event string verifier disabled Jun 25 18:31:38.327437 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:31:38.327445 kernel: rcu: RCU event tracing is enabled. Jun 25 18:31:38.327452 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:31:38.327459 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:31:38.327466 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:31:38.327473 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:31:38.327482 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:31:38.327489 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 18:31:38.327495 kernel: GICv3: 960 SPIs implemented Jun 25 18:31:38.327502 kernel: GICv3: 0 Extended SPIs implemented Jun 25 18:31:38.327509 kernel: Root IRQ handler: gic_handle_irq Jun 25 18:31:38.327516 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 25 18:31:38.327523 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jun 25 18:31:38.327529 kernel: ITS: No ITS available, not enabling LPIs Jun 25 18:31:38.327536 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:31:38.327543 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:31:38.327550 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 25 18:31:38.327559 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 25 18:31:38.327566 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 25 18:31:38.327573 kernel: Console: colour dummy device 80x25 Jun 25 18:31:38.327580 kernel: printk: console [tty1] enabled Jun 25 18:31:38.327587 kernel: ACPI: Core revision 20230628 Jun 25 18:31:38.327594 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 25 18:31:38.327601 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:31:38.327608 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:31:38.327616 kernel: SELinux: Initializing. Jun 25 18:31:38.327623 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:31:38.327632 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:31:38.327640 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:31:38.327647 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. 
Jun 25 18:31:38.327654 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jun 25 18:31:38.327661 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Jun 25 18:31:38.327668 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 18:31:38.327675 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:31:38.327690 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:31:38.327697 kernel: Remapping and enabling EFI services. Jun 25 18:31:38.327705 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:31:38.327712 kernel: Detected PIPT I-cache on CPU1 Jun 25 18:31:38.327721 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jun 25 18:31:38.327729 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:31:38.327737 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 25 18:31:38.327744 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:31:38.327752 kernel: SMP: Total of 2 processors activated. Jun 25 18:31:38.327761 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 18:31:38.327769 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jun 25 18:31:38.327776 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 25 18:31:38.327784 kernel: CPU features: detected: CRC32 instructions Jun 25 18:31:38.327791 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 25 18:31:38.327799 kernel: CPU features: detected: LSE atomic instructions Jun 25 18:31:38.327806 kernel: CPU features: detected: Privileged Access Never Jun 25 18:31:38.327814 kernel: CPU: All CPU(s) started at EL1 Jun 25 18:31:38.327821 kernel: alternatives: applying system-wide alternatives Jun 25 18:31:38.327830 kernel: devtmpfs: initialized Jun 25 18:31:38.327838 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:31:38.327845 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:31:38.327853 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:31:38.327860 kernel: SMBIOS 3.1.0 present. Jun 25 18:31:38.327868 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Jun 25 18:31:38.327875 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:31:38.327883 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 25 18:31:38.327890 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 25 18:31:38.327900 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 25 18:31:38.327907 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:31:38.327915 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1 Jun 25 18:31:38.327922 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:31:38.327930 kernel: cpuidle: using governor menu Jun 25 18:31:38.327937 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jun 25 18:31:38.327945 kernel: ASID allocator initialised with 32768 entries Jun 25 18:31:38.327952 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:31:38.327960 kernel: Serial: AMBA PL011 UART driver Jun 25 18:31:38.327968 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jun 25 18:31:38.327976 kernel: Modules: 0 pages in range for non-PLT usage Jun 25 18:31:38.327984 kernel: Modules: 509120 pages in range for PLT usage Jun 25 18:31:38.327991 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:31:38.327999 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:31:38.328006 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 25 18:31:38.328014 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 25 18:31:38.328021 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:31:38.328029 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:31:38.328038 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 25 18:31:38.328046 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 25 18:31:38.328053 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:31:38.328060 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:31:38.328068 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:31:38.328075 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:31:38.328083 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:31:38.328090 kernel: ACPI: Interpreter enabled Jun 25 18:31:38.328098 kernel: ACPI: Using GIC for interrupt routing Jun 25 18:31:38.328107 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jun 25 18:31:38.328114 kernel: printk: console [ttyAMA0] enabled Jun 25 18:31:38.328248 kernel: printk: bootconsole [pl11] disabled Jun 25 18:31:38.328255 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jun 25 18:31:38.328263 kernel: iommu: Default domain type: Translated Jun 25 18:31:38.328270 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 25 18:31:38.328278 kernel: efivars: Registered efivars operations Jun 25 18:31:38.328286 kernel: vgaarb: loaded Jun 25 18:31:38.328293 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 25 18:31:38.328301 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:31:38.328311 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:31:38.328318 kernel: pnp: PnP ACPI init Jun 25 18:31:38.328326 kernel: pnp: PnP ACPI: found 0 devices Jun 25 18:31:38.328333 kernel: NET: Registered PF_INET protocol family Jun 25 18:31:38.328340 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:31:38.328348 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 18:31:38.328356 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:31:38.328363 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 18:31:38.328372 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 18:31:38.328380 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 18:31:38.328388 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:31:38.328395 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:31:38.328402 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:31:38.328410 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:31:38.328417 kernel: kvm [1]: HYP mode not available Jun 25 18:31:38.328425 kernel: Initialise system trusted keyrings Jun 25 18:31:38.328432 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 18:31:38.328441 kernel: Key type asymmetric registered Jun 25 18:31:38.328448 kernel: Asymmetric key parser 'x509' registered Jun 25 18:31:38.328456 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 25 18:31:38.328463 kernel: io scheduler mq-deadline registered Jun 25 18:31:38.328470 kernel: io scheduler kyber registered Jun 25 18:31:38.328478 kernel: io scheduler bfq registered Jun 25 18:31:38.328486 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:31:38.328493 kernel: thunder_xcv, ver 1.0 Jun 25 18:31:38.328500 kernel: thunder_bgx, ver 1.0 Jun 25 18:31:38.328508 kernel: nicpf, ver 1.0 Jun 25 18:31:38.328517 kernel: nicvf, ver 1.0 Jun 25 18:31:38.328683 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 25 18:31:38.328755 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T18:31:37 UTC (1719340297) Jun 25 18:31:38.328766 kernel: efifb: probing for efifb Jun 25 18:31:38.328774 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 18:31:38.328782 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 18:31:38.328790 kernel: efifb: scrolling: redraw Jun 25 18:31:38.328800 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 18:31:38.328807 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:31:38.328815 kernel: fb0: EFI VGA frame buffer device Jun 25 18:31:38.328822 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jun 25 18:31:38.328829 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 18:31:38.328837 kernel: No ACPI PMU IRQ for CPU0 Jun 25 18:31:38.328844 kernel: No ACPI PMU IRQ for CPU1 Jun 25 18:31:38.328851 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jun 25 18:31:38.328859 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jun 25 18:31:38.328868 kernel: watchdog: Hard watchdog permanently disabled Jun 25 18:31:38.328876 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:31:38.328883 kernel: Segment Routing with IPv6 Jun 25 18:31:38.328890 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:31:38.328898 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:31:38.328905 kernel: Key type dns_resolver registered Jun 25 18:31:38.328912 kernel: registered taskstats version 1 Jun 25 18:31:38.328919 kernel: Loading compiled-in X.509 certificates Jun 25 18:31:38.328927 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 751918e575d02f96b0daadd44b8f442a8c39ecd3' Jun 25 18:31:38.328937 kernel: Key type .fscrypt registered Jun 25 18:31:38.328945 kernel: Key type fscrypt-provisioning registered Jun 25 18:31:38.328952 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 18:31:38.328959 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:31:38.328967 kernel: ima: No architecture policies found Jun 25 18:31:38.328974 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 18:31:38.328982 kernel: clk: Disabling unused clocks Jun 25 18:31:38.329003 kernel: Freeing unused kernel memory: 39040K Jun 25 18:31:38.329011 kernel: Run /init as init process Jun 25 18:31:38.329020 kernel: with arguments: Jun 25 18:31:38.329028 kernel: /init Jun 25 18:31:38.329036 kernel: with environment: Jun 25 18:31:38.329043 kernel: HOME=/ Jun 25 18:31:38.329050 kernel: TERM=linux Jun 25 18:31:38.329057 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:31:38.329067 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:31:38.329077 systemd[1]: Detected virtualization microsoft. Jun 25 18:31:38.329087 systemd[1]: Detected architecture arm64. Jun 25 18:31:38.329095 systemd[1]: Running in initrd. Jun 25 18:31:38.329103 systemd[1]: No hostname configured, using default hostname. Jun 25 18:31:38.329110 systemd[1]: Hostname set to . Jun 25 18:31:38.329136 systemd[1]: Initializing machine ID from random generator. Jun 25 18:31:38.329145 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:31:38.329154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:31:38.329162 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:31:38.329173 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:31:38.329181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:31:38.329189 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:31:38.329197 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:31:38.329207 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:31:38.329215 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:31:38.329223 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:31:38.329233 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:31:38.329241 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:31:38.329249 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:31:38.329257 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:31:38.329265 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:31:38.329273 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:31:38.329281 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:31:38.329289 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:31:38.329298 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jun 25 18:31:38.329306 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:31:38.329319 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:31:38.329327 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:31:38.329335 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:31:38.329343 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:31:38.329351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:31:38.329359 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:31:38.329367 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:31:38.329377 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:31:38.329407 systemd-journald[217]: Collecting audit messages is disabled. Jun 25 18:31:38.329428 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:31:38.329436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:31:38.329448 systemd-journald[217]: Journal started Jun 25 18:31:38.329467 systemd-journald[217]: Runtime Journal (/run/log/journal/7115e768633f4a979edff5d765476a55) is 8.0M, max 78.6M, 70.6M free. Jun 25 18:31:38.336039 systemd-modules-load[218]: Inserted module 'overlay' Jun 25 18:31:38.375960 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:31:38.375991 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:31:38.376005 kernel: Bridge firewalling registered Jun 25 18:31:38.378201 systemd-modules-load[218]: Inserted module 'br_netfilter' Jun 25 18:31:38.382798 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:31:38.393433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:31:38.406252 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:31:38.417898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:31:38.428666 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:31:38.452478 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:31:38.461309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:31:38.486323 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:31:38.502347 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:31:38.508903 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:31:38.516156 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:31:38.528731 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:31:38.556764 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:31:38.570344 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jun 25 18:31:38.587298 dracut-cmdline[247]: dracut-dracut-053 Jun 25 18:31:38.587298 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f Jun 25 18:31:38.595980 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:31:38.686801 kernel: SCSI subsystem initialized Jun 25 18:31:38.686831 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:31:38.686843 kernel: iscsi: registered transport (tcp) Jun 25 18:31:38.645847 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:31:38.652487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:31:38.697744 systemd-resolved[321]: Positive Trust Anchors: Jun 25 18:31:38.697754 systemd-resolved[321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:31:38.728312 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:31:38.728338 kernel: QLogic iSCSI HBA Driver Jun 25 18:31:38.697783 systemd-resolved[321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:31:38.703086 systemd-resolved[321]: Defaulting to hostname 'linux'. Jun 25 18:31:38.706664 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:31:38.713011 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:31:38.823424 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:31:38.837579 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:31:38.869825 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:31:38.869887 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:31:38.877237 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:31:38.927148 kernel: raid6: neonx8 gen() 15729 MB/s Jun 25 18:31:38.947143 kernel: raid6: neonx4 gen() 15667 MB/s Jun 25 18:31:38.967134 kernel: raid6: neonx2 gen() 13272 MB/s Jun 25 18:31:38.988145 kernel: raid6: neonx1 gen() 10453 MB/s Jun 25 18:31:39.008139 kernel: raid6: int64x8 gen() 6960 MB/s Jun 25 18:31:39.028149 kernel: raid6: int64x4 gen() 7340 MB/s Jun 25 18:31:39.049143 kernel: raid6: int64x2 gen() 6127 MB/s Jun 25 18:31:39.072852 kernel: raid6: int64x1 gen() 5059 MB/s Jun 25 18:31:39.072891 kernel: raid6: using algorithm neonx8 gen() 15729 MB/s Jun 25 18:31:39.096228 kernel: raid6: .... 
xor() 11918 MB/s, rmw enabled Jun 25 18:31:39.096293 kernel: raid6: using neon recovery algorithm Jun 25 18:31:39.105137 kernel: xor: measuring software checksum speed Jun 25 18:31:39.109137 kernel: 8regs : 19859 MB/sec Jun 25 18:31:39.116193 kernel: 32regs : 19720 MB/sec Jun 25 18:31:39.116217 kernel: arm64_neon : 27206 MB/sec Jun 25 18:31:39.120512 kernel: xor: using function: arm64_neon (27206 MB/sec) Jun 25 18:31:39.172147 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:31:39.184261 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:31:39.200305 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:31:39.223444 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jun 25 18:31:39.229146 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:31:39.247265 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:31:39.271773 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation Jun 25 18:31:39.299456 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:31:39.313615 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:31:39.354693 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:31:39.374320 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:31:39.396029 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:31:39.411669 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:31:39.426601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:31:39.441042 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:31:39.459393 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:31:39.476661 kernel: hv_vmbus: Vmbus version:5.3 Jun 25 18:31:39.492470 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:31:39.505357 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 18:31:39.505379 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 18:31:39.505389 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 18:31:39.521385 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 25 18:31:39.526814 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 18:31:39.529695 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:31:39.551192 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 18:31:39.551217 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 18:31:39.551230 kernel: scsi host1: storvsc_host_t Jun 25 18:31:39.535298 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jun 25 18:31:39.615591 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 25 18:31:39.615636 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 18:31:39.615786 kernel: scsi host0: storvsc_host_t Jun 25 18:31:39.615888 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 18:31:39.615908 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 18:31:39.615923 kernel: hv_netvsc 002248bc-bea0-0022-48bc-bea0002248bc eth0: VF slot 1 added Jun 25 18:31:39.576301 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:31:39.590040 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:31:39.590314 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:31:39.620358 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:31:39.642588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:31:39.670609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:31:39.692232 kernel: PTP clock support registered Jun 25 18:31:39.692255 kernel: hv_vmbus: registering driver hv_pci Jun 25 18:31:39.697101 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 18:31:39.697174 kernel: hv_pci 308caed0-8c94-4816-b09f-ae572c0d0361: PCI VMBus probing: Using version 0x10004 Jun 25 18:31:39.907826 kernel: hv_vmbus: registering driver hv_utils Jun 25 18:31:39.907845 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 18:31:39.907865 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 18:31:39.907876 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 18:31:39.907885 kernel: hv_pci 308caed0-8c94-4816-b09f-ae572c0d0361: PCI host bridge to bus 8c94:00 Jun 25 18:31:39.907999 kernel: pci_bus 8c94:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jun 25 18:31:39.908102 kernel: pci_bus 8c94:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 18:31:39.908199 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 18:31:39.911466 kernel: pci 8c94:00:02.0: [15b3:1018] type 00 class 0x020000 Jun 25 18:31:39.911655 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:31:39.911667 kernel: pci 8c94:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 18:31:39.911758 kernel: pci 8c94:00:02.0: enabling Extended Tags Jun 25 18:31:39.911844 kernel: pci 8c94:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8c94:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jun 25 18:31:39.911930 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 18:31:39.929152 kernel: pci_bus 8c94:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 18:31:39.929305 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 18:31:39.929405 kernel: pci 8c94:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 18:31:39.929504 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 18:31:39.929587 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 18:31:39.929678 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 18:31:39.929764 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 18:31:39.929850 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:31:39.929860 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 18:31:39.707051 
systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:31:39.815746 systemd-resolved[321]: Clock change detected. Flushing caches. Jun 25 18:31:39.861525 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:31:39.981323 kernel: mlx5_core 8c94:00:02.0: enabling device (0000 -> 0002) Jun 25 18:31:40.199880 kernel: mlx5_core 8c94:00:02.0: firmware version: 16.30.1284 Jun 25 18:31:40.200010 kernel: hv_netvsc 002248bc-bea0-0022-48bc-bea0002248bc eth0: VF registering: eth1 Jun 25 18:31:40.200106 kernel: mlx5_core 8c94:00:02.0 eth1: joined to eth0 Jun 25 18:31:40.200227 kernel: mlx5_core 8c94:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jun 25 18:31:40.208200 kernel: mlx5_core 8c94:00:02.0 enP35988s1: renamed from eth1 Jun 25 18:31:40.574058 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 18:31:40.666815 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (482) Jun 25 18:31:40.681476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 18:31:40.754683 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 18:31:40.776314 kernel: BTRFS: device fsid c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (489) Jun 25 18:31:40.787569 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 18:31:40.794799 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 18:31:40.825486 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:31:40.847198 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:31:40.856202 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:31:40.864200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:31:41.865260 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:31:41.865788 disk-uuid[599]: The operation has completed successfully. Jun 25 18:31:41.931080 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:31:41.933262 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:31:41.972333 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:31:41.985799 sh[712]: Success Jun 25 18:31:42.018234 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 18:31:42.218644 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:31:42.236310 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:31:42.246953 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:31:42.278113 kernel: BTRFS info (device dm-0): first mount of filesystem c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 Jun 25 18:31:42.278187 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:31:42.285065 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:31:42.290191 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:31:42.294223 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:31:42.663749 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jun 25 18:31:42.669319 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:31:42.688461 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:31:42.712838 kernel: BTRFS info (device sda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:31:42.712899 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:31:42.717402 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:31:42.715396 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:31:42.753250 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:31:42.764846 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:31:42.777569 kernel: BTRFS info (device sda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:31:42.786113 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:31:42.802709 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:31:42.851753 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:31:42.872344 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:31:42.899879 systemd-networkd[896]: lo: Link UP Jun 25 18:31:42.899891 systemd-networkd[896]: lo: Gained carrier Jun 25 18:31:42.901474 systemd-networkd[896]: Enumeration completed Jun 25 18:31:42.902065 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:31:42.902068 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:31:42.907130 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:31:42.913325 systemd[1]: Reached target network.target - Network. Jun 25 18:31:42.991198 kernel: mlx5_core 8c94:00:02.0 enP35988s1: Link up Jun 25 18:31:43.033212 kernel: hv_netvsc 002248bc-bea0-0022-48bc-bea0002248bc eth0: Data path switched to VF: enP35988s1 Jun 25 18:31:43.033629 systemd-networkd[896]: enP35988s1: Link UP Jun 25 18:31:43.033710 systemd-networkd[896]: eth0: Link UP Jun 25 18:31:43.033819 systemd-networkd[896]: eth0: Gained carrier Jun 25 18:31:43.033828 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:31:43.042406 systemd-networkd[896]: enP35988s1: Gained carrier Jun 25 18:31:43.065213 systemd-networkd[896]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 18:31:44.039883 ignition[839]: Ignition 2.19.0 Jun 25 18:31:44.039897 ignition[839]: Stage: fetch-offline Jun 25 18:31:44.042811 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:31:44.039934 ignition[839]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:44.039942 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:44.040040 ignition[839]: parsed url from cmdline: "" Jun 25 18:31:44.068345 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 25 18:31:44.040043 ignition[839]: no config URL provided Jun 25 18:31:44.040050 ignition[839]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:31:44.040057 ignition[839]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:31:44.040063 ignition[839]: failed to fetch config: resource requires networking Jun 25 18:31:44.040256 ignition[839]: Ignition finished successfully Jun 25 18:31:44.087632 ignition[906]: Ignition 2.19.0 Jun 25 18:31:44.087639 ignition[906]: Stage: fetch Jun 25 18:31:44.087898 ignition[906]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:44.087909 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:44.088012 ignition[906]: parsed url from cmdline: "" Jun 25 18:31:44.088016 ignition[906]: no config URL provided Jun 25 18:31:44.088028 ignition[906]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:31:44.088040 ignition[906]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:31:44.088063 ignition[906]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 18:31:44.194604 ignition[906]: GET result: OK Jun 25 18:31:44.194676 ignition[906]: config has been read from IMDS userdata Jun 25 18:31:44.194719 ignition[906]: parsing config with SHA512: 03dcb8511c4f8c8ac308520cdfa1ddc6ee4eee22ce4e14f1696bea944425f83b53e5138013990089fbd41669bfbd1fe4dba3589eaba70d5a50518880280fcc92 Jun 25 18:31:44.198446 unknown[906]: fetched base config from "system" Jun 25 18:31:44.198822 ignition[906]: fetch: fetch complete Jun 25 18:31:44.198453 unknown[906]: fetched base config from "system" Jun 25 18:31:44.198827 ignition[906]: fetch: fetch passed Jun 25 18:31:44.198458 unknown[906]: fetched user config from "azure" Jun 25 18:31:44.198867 ignition[906]: Ignition finished successfully Jun 25 18:31:44.204106 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:31:44.227325 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:31:44.228328 systemd-networkd[896]: enP35988s1: Gained IPv6LL Jun 25 18:31:44.255672 ignition[913]: Ignition 2.19.0 Jun 25 18:31:44.255679 ignition[913]: Stage: kargs Jun 25 18:31:44.265771 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:31:44.255907 ignition[913]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:44.255917 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:44.261574 ignition[913]: kargs: kargs passed Jun 25 18:31:44.289506 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:31:44.261630 ignition[913]: Ignition finished successfully Jun 25 18:31:44.292638 systemd-networkd[896]: eth0: Gained IPv6LL Jun 25 18:31:44.315116 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:31:44.310917 ignition[920]: Ignition 2.19.0 Jun 25 18:31:44.321499 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:31:44.310925 ignition[920]: Stage: disks Jun 25 18:31:44.332268 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:31:44.311260 ignition[920]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:44.351055 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:31:44.311275 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:44.362906 systemd[1]: Reached target sysinit.target - System Initialization. 
Jun 25 18:31:44.312374 ignition[920]: disks: disks passed Jun 25 18:31:44.372043 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:31:44.312431 ignition[920]: Ignition finished successfully Jun 25 18:31:44.401447 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:31:44.497708 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 18:31:44.507395 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:31:44.526435 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:31:44.584211 kernel: EXT4-fs (sda9): mounted filesystem 91548e21-ce72-437e-94b9-d3fed380163a r/w with ordered data mode. Quota mode: none. Jun 25 18:31:44.584524 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:31:44.589719 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:31:44.642259 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:31:44.652305 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:31:44.663515 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 18:31:44.672651 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:31:44.672693 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:31:44.685382 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:31:44.719196 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940) Jun 25 18:31:44.719240 kernel: BTRFS info (device sda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:31:44.731326 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:31:44.731923 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:31:44.747998 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:31:44.755224 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:31:44.756769 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:31:45.326830 coreos-metadata[942]: Jun 25 18:31:45.326 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:31:45.337982 coreos-metadata[942]: Jun 25 18:31:45.337 INFO Fetch successful Jun 25 18:31:45.337982 coreos-metadata[942]: Jun 25 18:31:45.337 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:31:45.358043 coreos-metadata[942]: Jun 25 18:31:45.358 INFO Fetch successful Jun 25 18:31:45.397256 coreos-metadata[942]: Jun 25 18:31:45.397 INFO wrote hostname ci-4012.0.0-a-71b05979e1 to /sysroot/etc/hostname Jun 25 18:31:45.407384 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:31:45.809917 initrd-setup-root[970]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:31:45.855387 initrd-setup-root[977]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:31:45.861925 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:31:45.868418 initrd-setup-root[991]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:31:47.095279 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jun 25 18:31:47.110362 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:31:47.119397 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:31:47.141915 kernel: BTRFS info (device sda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:31:47.137287 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:31:47.170062 ignition[1058]: INFO : Ignition 2.19.0 Jun 25 18:31:47.176302 ignition[1058]: INFO : Stage: mount Jun 25 18:31:47.176302 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:47.176302 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:47.176302 ignition[1058]: INFO : mount: mount passed Jun 25 18:31:47.176302 ignition[1058]: INFO : Ignition finished successfully Jun 25 18:31:47.174519 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:31:47.182664 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:31:47.205415 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:31:47.221394 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:31:47.264197 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1072) Jun 25 18:31:47.264243 kernel: BTRFS info (device sda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:31:47.270163 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:31:47.274439 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:31:47.281190 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:31:47.282727 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:31:47.307941 ignition[1090]: INFO : Ignition 2.19.0 Jun 25 18:31:47.307941 ignition[1090]: INFO : Stage: files Jun 25 18:31:47.315838 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:47.315838 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:47.315838 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:31:47.315838 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:31:47.315838 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:31:47.413240 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:31:47.420806 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:31:47.428283 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:31:47.427833 unknown[1090]: wrote ssh authorized keys file for user: core Jun 25 18:31:47.498491 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 18:31:47.509717 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 18:31:47.610901 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:31:47.804986 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 18:31:47.804986 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing 
file "/sysroot/home/core/install.sh" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Jun 25 18:31:48.130893 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:31:48.334282 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:31:48.334282 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:31:48.354281 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:31:48.366249 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:31:48.366249 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:31:48.366249 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:31:48.366249 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:31:48.366249 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:31:48.366249 ignition[1090]: 
INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:31:48.366249 ignition[1090]: INFO : files: files passed Jun 25 18:31:48.366249 ignition[1090]: INFO : Ignition finished successfully Jun 25 18:31:48.366480 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:31:48.399041 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:31:48.413411 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:31:48.440166 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:31:48.494243 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:31:48.494243 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:31:48.440362 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:31:48.526756 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:31:48.469010 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:31:48.476713 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:31:48.503501 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:31:48.551576 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:31:48.551721 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:31:48.562060 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:31:48.572646 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:31:48.584544 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:31:48.603464 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:31:48.623439 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:31:48.644458 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:31:48.662637 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:31:48.669489 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:31:48.681769 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:31:48.693018 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:31:48.693218 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:31:48.709914 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:31:48.721633 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:31:48.731652 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:31:48.742192 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:31:48.754387 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:31:48.767260 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:31:48.778994 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:31:48.791378 systemd[1]: Stopped target sysinit.target - System Initialization. 
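The files stage above lists every write Ignition performed: the Helm tarball, the core user's SSH keys, the Kubernetes sysext image and its link under /etc/extensions, /etc/flatcar/update.conf, and the enabled prepare-helm.service unit. The provisioning config itself is never printed in the journal, so the sketch below is a hypothetical reconstruction of the kind of Ignition v3 document that would produce those operations. Field names follow the published Ignition v3 spec as far as I recall it; the spec version, unit contents, key material and inline file contents are placeholders, and only the paths and URLs are taken from the log.

```python
import json

# Hypothetical reconstruction of an Ignition v3-style config matching the
# operations logged by the files stage above. Paths/URLs come from the log;
# everything marked "placeholder" (and the spec version) is an assumption.
config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw",
                "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw"},
            },
            # install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml and
            # /etc/flatcar/update.conf would appear here as inline files
            # (e.g. "data:," sources); contents omitted as placeholders.
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw",
                "hard": False,
            }
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\n# placeholder unit body\n"}
        ]
    },
}

print(json.dumps(config, indent=2))
```

Ignition only applies such a config on first boot, which matches the flatcar.first_boot=detected switch on the kernel command line recorded earlier.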
Jun 25 18:31:48.803458 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:31:48.814150 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:31:48.824162 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:31:48.824350 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:31:48.839366 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:31:48.846132 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:31:48.858166 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:31:48.863268 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:31:48.870293 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:31:48.870457 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:31:48.886633 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:31:48.886809 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:31:48.900918 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:31:48.901077 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:31:48.910945 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 18:31:48.911090 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:31:48.977816 ignition[1142]: INFO : Ignition 2.19.0 Jun 25 18:31:48.977816 ignition[1142]: INFO : Stage: umount Jun 25 18:31:48.977816 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:48.977816 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:48.977816 ignition[1142]: INFO : umount: umount passed Jun 25 18:31:48.977816 ignition[1142]: INFO : Ignition finished successfully Jun 25 18:31:48.942324 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:31:48.958072 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:31:48.970672 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:31:48.970871 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:31:48.982812 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:31:48.982934 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:31:48.999949 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:31:49.000322 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:31:49.012355 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:31:49.012467 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:31:49.022334 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:31:49.022396 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:31:49.036548 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:31:49.036614 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 18:31:49.043143 systemd[1]: Stopped target network.target - Network. Jun 25 18:31:49.052923 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:31:49.052988 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jun 25 18:31:49.059862 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:31:49.065009 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:31:49.074940 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:31:49.081572 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:31:49.093326 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:31:49.103958 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:31:49.104019 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:31:49.115783 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:31:49.115838 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:31:49.126277 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:31:49.126331 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:31:49.137308 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:31:49.137349 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:31:49.373301 kernel: hv_netvsc 002248bc-bea0-0022-48bc-bea0002248bc eth0: Data path switched from VF: enP35988s1 Jun 25 18:31:49.150755 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:31:49.162132 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:31:49.180779 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:31:49.181417 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:31:49.181512 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:31:49.191818 systemd-networkd[896]: eth0: DHCPv6 lease lost Jun 25 18:31:49.195100 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:31:49.195214 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:31:49.207201 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:31:49.207264 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:31:49.237401 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:31:49.246927 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:31:49.247020 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:31:49.259146 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:31:49.276126 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:31:49.276263 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:31:49.303160 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:31:49.303281 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:31:49.314047 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:31:49.314120 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:31:49.325116 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:31:49.325181 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:31:49.337112 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:31:49.337276 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jun 25 18:31:49.357426 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:31:49.357513 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:31:49.368103 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:31:49.368152 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:31:49.379020 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:31:49.379080 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:31:49.396143 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:31:49.396222 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:31:49.414562 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:31:49.414641 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:31:49.449479 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:31:49.462473 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:31:49.462549 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:31:49.476023 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:31:49.476090 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:31:49.488108 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:31:49.488157 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:31:49.501850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:31:49.501901 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:31:49.520646 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:31:49.520779 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:31:49.532026 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:31:49.532119 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:31:49.634922 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:31:49.635091 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:31:49.644511 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:31:49.657610 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:31:49.657695 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:31:49.694495 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:31:49.713598 systemd[1]: Switching root. 
Jun 25 18:31:49.786515 systemd-journald[217]: Journal stopped Jun 25 18:31:38.324772 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jun 25 18:31:38.324794 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Jun 25 17:19:03 -00 2024 Jun 25 18:31:38.324803 kernel: KASLR enabled Jun 25 18:31:38.324811 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jun 25 18:31:38.324816 kernel: printk: bootconsole [pl11] enabled Jun 25 18:31:38.324822 kernel: efi: EFI v2.7 by EDK II Jun 25 18:31:38.324829 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x3e925e18 Jun 25 18:31:38.324835 kernel: random: crng init done Jun 25 18:31:38.324841 kernel: ACPI: Early table checksum verification disabled Jun 25 18:31:38.324847 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Jun 25 18:31:38.324854 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324860 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324867 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jun 25 18:31:38.324873 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324881 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324887 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324894 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324902 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324909 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324915 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jun 25 18:31:38.324921 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jun 25 18:31:38.324928 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jun 25 18:31:38.324934 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jun 25 18:31:38.324941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jun 25 18:31:38.324947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jun 25 18:31:38.324953 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jun 25 18:31:38.324960 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jun 25 18:31:38.324967 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jun 25 18:31:38.324975 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jun 25 18:31:38.324981 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jun 25 18:31:38.324987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jun 25 18:31:38.324994 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jun 25 18:31:38.325000 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jun 25 18:31:38.325006 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jun 25 18:31:38.325013 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Jun 25 18:31:38.325019 kernel: Zone ranges: Jun 25 
18:31:38.325025 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jun 25 18:31:38.325031 kernel: DMA32 empty Jun 25 18:31:38.325038 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 18:31:38.325046 kernel: Movable zone start for each node Jun 25 18:31:38.325055 kernel: Early memory node ranges Jun 25 18:31:38.325061 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jun 25 18:31:38.325068 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Jun 25 18:31:38.325075 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Jun 25 18:31:38.325083 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Jun 25 18:31:38.325090 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Jun 25 18:31:38.325097 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Jun 25 18:31:38.325104 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Jun 25 18:31:38.325110 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Jun 25 18:31:38.327151 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jun 25 18:31:38.327187 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jun 25 18:31:38.327195 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jun 25 18:31:38.327202 kernel: psci: probing for conduit method from ACPI. Jun 25 18:31:38.327210 kernel: psci: PSCIv1.1 detected in firmware. Jun 25 18:31:38.327216 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 18:31:38.327223 kernel: psci: MIGRATE_INFO_TYPE not supported. Jun 25 18:31:38.327238 kernel: psci: SMC Calling Convention v1.4 Jun 25 18:31:38.327245 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jun 25 18:31:38.327252 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jun 25 18:31:38.327259 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jun 25 18:31:38.327265 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jun 25 18:31:38.327273 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 25 18:31:38.327280 kernel: Detected PIPT I-cache on CPU0 Jun 25 18:31:38.327286 kernel: CPU features: detected: GIC system register CPU interface Jun 25 18:31:38.327293 kernel: CPU features: detected: Hardware dirty bit management Jun 25 18:31:38.327300 kernel: CPU features: detected: Spectre-BHB Jun 25 18:31:38.327307 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 25 18:31:38.327314 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 25 18:31:38.327322 kernel: CPU features: detected: ARM erratum 1418040 Jun 25 18:31:38.327329 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jun 25 18:31:38.327336 kernel: alternatives: applying boot alternatives Jun 25 18:31:38.327345 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f Jun 25 18:31:38.327352 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jun 25 18:31:38.327359 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:31:38.327366 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:31:38.327373 kernel: Fallback order for Node 0: 0 Jun 25 18:31:38.327380 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jun 25 18:31:38.327387 kernel: Policy zone: Normal Jun 25 18:31:38.327395 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:31:38.327402 kernel: software IO TLB: area num 2. Jun 25 18:31:38.327409 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB) Jun 25 18:31:38.327416 kernel: Memory: 3986332K/4194160K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 207828K reserved, 0K cma-reserved) Jun 25 18:31:38.327423 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:31:38.327430 kernel: trace event string verifier disabled Jun 25 18:31:38.327437 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:31:38.327445 kernel: rcu: RCU event tracing is enabled. Jun 25 18:31:38.327452 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:31:38.327459 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:31:38.327466 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:31:38.327473 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:31:38.327482 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:31:38.327489 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 18:31:38.327495 kernel: GICv3: 960 SPIs implemented Jun 25 18:31:38.327502 kernel: GICv3: 0 Extended SPIs implemented Jun 25 18:31:38.327509 kernel: Root IRQ handler: gic_handle_irq Jun 25 18:31:38.327516 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jun 25 18:31:38.327523 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jun 25 18:31:38.327529 kernel: ITS: No ITS available, not enabling LPIs Jun 25 18:31:38.327536 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:31:38.327543 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:31:38.327550 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jun 25 18:31:38.327559 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jun 25 18:31:38.327566 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jun 25 18:31:38.327573 kernel: Console: colour dummy device 80x25 Jun 25 18:31:38.327580 kernel: printk: console [tty1] enabled Jun 25 18:31:38.327587 kernel: ACPI: Core revision 20230628 Jun 25 18:31:38.327594 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jun 25 18:31:38.327601 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:31:38.327608 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:31:38.327616 kernel: SELinux: Initializing. Jun 25 18:31:38.327623 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:31:38.327632 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:31:38.327640 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:31:38.327647 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. 
Jun 25 18:31:38.327654 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jun 25 18:31:38.327661 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Jun 25 18:31:38.327668 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jun 25 18:31:38.327675 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:31:38.327690 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:31:38.327697 kernel: Remapping and enabling EFI services. Jun 25 18:31:38.327705 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:31:38.327712 kernel: Detected PIPT I-cache on CPU1 Jun 25 18:31:38.327721 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jun 25 18:31:38.327729 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jun 25 18:31:38.327737 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jun 25 18:31:38.327744 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:31:38.327752 kernel: SMP: Total of 2 processors activated. Jun 25 18:31:38.327761 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 18:31:38.327769 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jun 25 18:31:38.327776 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jun 25 18:31:38.327784 kernel: CPU features: detected: CRC32 instructions Jun 25 18:31:38.327791 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jun 25 18:31:38.327799 kernel: CPU features: detected: LSE atomic instructions Jun 25 18:31:38.327806 kernel: CPU features: detected: Privileged Access Never Jun 25 18:31:38.327814 kernel: CPU: All CPU(s) started at EL1 Jun 25 18:31:38.327821 kernel: alternatives: applying system-wide alternatives Jun 25 18:31:38.327830 kernel: devtmpfs: initialized Jun 25 18:31:38.327838 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:31:38.327845 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:31:38.327853 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:31:38.327860 kernel: SMBIOS 3.1.0 present. Jun 25 18:31:38.327868 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Jun 25 18:31:38.327875 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:31:38.327883 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 25 18:31:38.327890 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 25 18:31:38.327900 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 25 18:31:38.327907 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:31:38.327915 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1 Jun 25 18:31:38.327922 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:31:38.327930 kernel: cpuidle: using governor menu Jun 25 18:31:38.327937 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jun 25 18:31:38.327945 kernel: ASID allocator initialised with 32768 entries Jun 25 18:31:38.327952 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:31:38.327960 kernel: Serial: AMBA PL011 UART driver Jun 25 18:31:38.327968 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jun 25 18:31:38.327976 kernel: Modules: 0 pages in range for non-PLT usage Jun 25 18:31:38.327984 kernel: Modules: 509120 pages in range for PLT usage Jun 25 18:31:38.327991 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:31:38.327999 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:31:38.328006 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 25 18:31:38.328014 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 25 18:31:38.328021 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:31:38.328029 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:31:38.328038 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 25 18:31:38.328046 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 25 18:31:38.328053 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:31:38.328060 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:31:38.328068 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:31:38.328075 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:31:38.328083 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:31:38.328090 kernel: ACPI: Interpreter enabled Jun 25 18:31:38.328098 kernel: ACPI: Using GIC for interrupt routing Jun 25 18:31:38.328107 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jun 25 18:31:38.328114 kernel: printk: console [ttyAMA0] enabled Jun 25 18:31:38.328248 kernel: printk: bootconsole [pl11] disabled Jun 25 18:31:38.328255 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jun 25 18:31:38.328263 kernel: iommu: Default domain type: Translated Jun 25 18:31:38.328270 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 25 18:31:38.328278 kernel: efivars: Registered efivars operations Jun 25 18:31:38.328286 kernel: vgaarb: loaded Jun 25 18:31:38.328293 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 25 18:31:38.328301 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:31:38.328311 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:31:38.328318 kernel: pnp: PnP ACPI init Jun 25 18:31:38.328326 kernel: pnp: PnP ACPI: found 0 devices Jun 25 18:31:38.328333 kernel: NET: Registered PF_INET protocol family Jun 25 18:31:38.328340 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:31:38.328348 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 18:31:38.328356 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:31:38.328363 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 18:31:38.328372 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 18:31:38.328380 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 18:31:38.328388 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:31:38.328395 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:31:38.328402 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:31:38.328410 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:31:38.328417 kernel: kvm [1]: HYP mode not available Jun 25 18:31:38.328425 kernel: Initialise system trusted keyrings Jun 25 18:31:38.328432 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 18:31:38.328441 kernel: Key type asymmetric registered Jun 25 18:31:38.328448 kernel: Asymmetric key parser 'x509' registered Jun 25 18:31:38.328456 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 25 18:31:38.328463 kernel: io scheduler mq-deadline registered Jun 25 18:31:38.328470 kernel: io scheduler kyber registered Jun 25 18:31:38.328478 kernel: io scheduler bfq registered Jun 25 18:31:38.328486 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:31:38.328493 kernel: thunder_xcv, ver 1.0 Jun 25 18:31:38.328500 kernel: thunder_bgx, ver 1.0 Jun 25 18:31:38.328508 kernel: nicpf, ver 1.0 Jun 25 18:31:38.328517 kernel: nicvf, ver 1.0 Jun 25 18:31:38.328683 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 25 18:31:38.328755 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T18:31:37 UTC (1719340297) Jun 25 18:31:38.328766 kernel: efifb: probing for efifb Jun 25 18:31:38.328774 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jun 25 18:31:38.328782 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jun 25 18:31:38.328790 kernel: efifb: scrolling: redraw Jun 25 18:31:38.328800 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jun 25 18:31:38.328807 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:31:38.328815 kernel: fb0: EFI VGA frame buffer device Jun 25 18:31:38.328822 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jun 25 18:31:38.328829 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 18:31:38.328837 kernel: No ACPI PMU IRQ for CPU0 Jun 25 18:31:38.328844 kernel: No ACPI PMU IRQ for CPU1 Jun 25 18:31:38.328851 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jun 25 18:31:38.328859 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jun 25 18:31:38.328868 kernel: watchdog: Hard watchdog permanently disabled Jun 25 18:31:38.328876 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:31:38.328883 kernel: Segment Routing with IPv6 Jun 25 18:31:38.328890 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:31:38.328898 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:31:38.328905 kernel: Key type dns_resolver registered Jun 25 18:31:38.328912 kernel: registered taskstats version 1 Jun 25 18:31:38.328919 kernel: Loading compiled-in X.509 certificates Jun 25 18:31:38.328927 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 751918e575d02f96b0daadd44b8f442a8c39ecd3' Jun 25 18:31:38.328937 kernel: Key type .fscrypt registered Jun 25 18:31:38.328945 kernel: Key type fscrypt-provisioning registered Jun 25 18:31:38.328952 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 18:31:38.328959 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:31:38.328967 kernel: ima: No architecture policies found Jun 25 18:31:38.328974 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 18:31:38.328982 kernel: clk: Disabling unused clocks Jun 25 18:31:38.329003 kernel: Freeing unused kernel memory: 39040K Jun 25 18:31:38.329011 kernel: Run /init as init process Jun 25 18:31:38.329020 kernel: with arguments: Jun 25 18:31:38.329028 kernel: /init Jun 25 18:31:38.329036 kernel: with environment: Jun 25 18:31:38.329043 kernel: HOME=/ Jun 25 18:31:38.329050 kernel: TERM=linux Jun 25 18:31:38.329057 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:31:38.329067 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:31:38.329077 systemd[1]: Detected virtualization microsoft. Jun 25 18:31:38.329087 systemd[1]: Detected architecture arm64. Jun 25 18:31:38.329095 systemd[1]: Running in initrd. Jun 25 18:31:38.329103 systemd[1]: No hostname configured, using default hostname. Jun 25 18:31:38.329110 systemd[1]: Hostname set to . Jun 25 18:31:38.329136 systemd[1]: Initializing machine ID from random generator. Jun 25 18:31:38.329145 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:31:38.329154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:31:38.329162 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:31:38.329173 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:31:38.329181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:31:38.329189 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:31:38.329197 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:31:38.329207 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:31:38.329215 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:31:38.329223 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:31:38.329233 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:31:38.329241 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:31:38.329249 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:31:38.329257 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:31:38.329265 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:31:38.329273 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:31:38.329281 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:31:38.329289 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:31:38.329298 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
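Above, systemd in the initrd is waiting on device units such as dev-disk-by\x2dlabel-ROOT.device and dev-disk-by\x2dpartuuid-7130c94a-....device. Those unit names correspond to the symlinks udev maintains under /dev/disk/. A minimal sketch, assuming a Linux system where udev has populated /dev/disk, that resolves them to their backing block devices:

```python
import os

# Resolve the udev-provided /dev/disk/by-* symlinks that back device units
# such as dev-disk-by\x2dlabel-ROOT.device. Assumes udev has populated /dev/disk.
for subdir in ("by-label", "by-partlabel", "by-partuuid"):
    base = os.path.join("/dev/disk", subdir)
    if not os.path.isdir(base):
        continue
    for name in sorted(os.listdir(base)):
        link = os.path.join(base, name)
        target = os.path.realpath(link)  # e.g. a /dev/sda* partition node
        print(f"{subdir}/{name} -> {target}")
```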
Jun 25 18:31:38.329306 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:31:38.329319 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:31:38.329327 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:31:38.329335 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:31:38.329343 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:31:38.329351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:31:38.329359 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:31:38.329367 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:31:38.329377 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:31:38.329407 systemd-journald[217]: Collecting audit messages is disabled. Jun 25 18:31:38.329428 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:31:38.329436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:31:38.329448 systemd-journald[217]: Journal started Jun 25 18:31:38.329467 systemd-journald[217]: Runtime Journal (/run/log/journal/7115e768633f4a979edff5d765476a55) is 8.0M, max 78.6M, 70.6M free. Jun 25 18:31:38.336039 systemd-modules-load[218]: Inserted module 'overlay' Jun 25 18:31:38.375960 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:31:38.375991 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:31:38.376005 kernel: Bridge firewalling registered Jun 25 18:31:38.378201 systemd-modules-load[218]: Inserted module 'br_netfilter' Jun 25 18:31:38.382798 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:31:38.393433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:31:38.406252 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:31:38.417898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:31:38.428666 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:31:38.452478 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:31:38.461309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:31:38.486323 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:31:38.502347 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:31:38.508903 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:31:38.516156 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:31:38.528731 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:31:38.556764 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:31:38.570344 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
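The journal started above is what produces the entries in this capture, where many entries run together on a line. A minimal sketch, assuming every entry begins with the same "Mon DD HH:MM:SS.microseconds" prefix seen here, that splits a captured stream back into one entry per line:

```python
import re
import sys

# Each journal entry in a capture like this starts with a timestamp such as
# "Jun 25 18:31:38.324772". Split on that prefix (zero-width lookahead) to
# recover one entry per output line.
STAMP = re.compile(r"(?=[A-Z][a-z]{2} \d{1,2} \d{2}:\d{2}:\d{2}\.\d{6} )")

def split_entries(text: str):
    return [chunk.strip() for chunk in STAMP.split(text) if chunk.strip()]

if __name__ == "__main__":
    for line in sys.stdin:
        for entry in split_entries(line):
            print(entry)
```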
Jun 25 18:31:38.587298 dracut-cmdline[247]: dracut-dracut-053 Jun 25 18:31:38.587298 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f Jun 25 18:31:38.595980 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:31:38.686801 kernel: SCSI subsystem initialized Jun 25 18:31:38.686831 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:31:38.686843 kernel: iscsi: registered transport (tcp) Jun 25 18:31:38.645847 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:31:38.652487 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:31:38.697744 systemd-resolved[321]: Positive Trust Anchors: Jun 25 18:31:38.697754 systemd-resolved[321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:31:38.728312 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:31:38.728338 kernel: QLogic iSCSI HBA Driver Jun 25 18:31:38.697783 systemd-resolved[321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:31:38.703086 systemd-resolved[321]: Defaulting to hostname 'linux'. Jun 25 18:31:38.706664 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:31:38.713011 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:31:38.823424 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:31:38.837579 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:31:38.869825 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:31:38.869887 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:31:38.877237 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:31:38.927148 kernel: raid6: neonx8 gen() 15729 MB/s Jun 25 18:31:38.947143 kernel: raid6: neonx4 gen() 15667 MB/s Jun 25 18:31:38.967134 kernel: raid6: neonx2 gen() 13272 MB/s Jun 25 18:31:38.988145 kernel: raid6: neonx1 gen() 10453 MB/s Jun 25 18:31:39.008139 kernel: raid6: int64x8 gen() 6960 MB/s Jun 25 18:31:39.028149 kernel: raid6: int64x4 gen() 7340 MB/s Jun 25 18:31:39.049143 kernel: raid6: int64x2 gen() 6127 MB/s Jun 25 18:31:39.072852 kernel: raid6: int64x1 gen() 5059 MB/s Jun 25 18:31:39.072891 kernel: raid6: using algorithm neonx8 gen() 15729 MB/s Jun 25 18:31:39.096228 kernel: raid6: .... 
xor() 11918 MB/s, rmw enabled Jun 25 18:31:39.096293 kernel: raid6: using neon recovery algorithm Jun 25 18:31:39.105137 kernel: xor: measuring software checksum speed Jun 25 18:31:39.109137 kernel: 8regs : 19859 MB/sec Jun 25 18:31:39.116193 kernel: 32regs : 19720 MB/sec Jun 25 18:31:39.116217 kernel: arm64_neon : 27206 MB/sec Jun 25 18:31:39.120512 kernel: xor: using function: arm64_neon (27206 MB/sec) Jun 25 18:31:39.172147 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:31:39.184261 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:31:39.200305 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:31:39.223444 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jun 25 18:31:39.229146 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:31:39.247265 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:31:39.271773 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation Jun 25 18:31:39.299456 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:31:39.313615 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:31:39.354693 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:31:39.374320 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:31:39.396029 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:31:39.411669 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:31:39.426601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:31:39.441042 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:31:39.459393 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:31:39.476661 kernel: hv_vmbus: Vmbus version:5.3 Jun 25 18:31:39.492470 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:31:39.505357 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 18:31:39.505379 kernel: hv_vmbus: registering driver hv_netvsc Jun 25 18:31:39.505389 kernel: hv_vmbus: registering driver hyperv_keyboard Jun 25 18:31:39.521385 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jun 25 18:31:39.526814 kernel: hv_vmbus: registering driver hid_hyperv Jun 25 18:31:39.529695 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:31:39.551192 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 18:31:39.551217 kernel: hv_vmbus: registering driver hv_storvsc Jun 25 18:31:39.551230 kernel: scsi host1: storvsc_host_t Jun 25 18:31:39.535298 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
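The dracut-cmdline hook earlier in this stage echoes the full kernel command line, which carries the Flatcar-specific provisioning switches (flatcar.first_boot, flatcar.oem.id=azure, verity.usrhash=...). A minimal sketch for turning such a command line into a lookup table, assuming the usual space-separated key[=value] form; the sample string below uses only parameters that appear in the logged command line:

```python
import shlex

def parse_cmdline(cmdline: str) -> dict:
    """Parse a kernel command line into {key: value}; bare flags map to None.

    shlex honours quoted values; repeated keys (e.g. console=) keep the last one.
    """
    params = {}
    for token in shlex.split(cmdline):
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

sample = "flatcar.first_boot=detected flatcar.oem.id=azure console=ttyAMA0,115200n8 rd.driver.pre=btrfs"
params = parse_cmdline(sample)
print(params["flatcar.oem.id"])  # -> azure
```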
Jun 25 18:31:39.615591 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jun 25 18:31:39.615636 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jun 25 18:31:39.615786 kernel: scsi host0: storvsc_host_t Jun 25 18:31:39.615888 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jun 25 18:31:39.615908 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jun 25 18:31:39.615923 kernel: hv_netvsc 002248bc-bea0-0022-48bc-bea0002248bc eth0: VF slot 1 added Jun 25 18:31:39.576301 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:31:39.590040 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:31:39.590314 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:31:39.620358 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:31:39.642588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:31:39.670609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:31:39.692232 kernel: PTP clock support registered Jun 25 18:31:39.692255 kernel: hv_vmbus: registering driver hv_pci Jun 25 18:31:39.697101 kernel: hv_utils: Registering HyperV Utility Driver Jun 25 18:31:39.697174 kernel: hv_pci 308caed0-8c94-4816-b09f-ae572c0d0361: PCI VMBus probing: Using version 0x10004 Jun 25 18:31:39.907826 kernel: hv_vmbus: registering driver hv_utils Jun 25 18:31:39.907845 kernel: hv_utils: Shutdown IC version 3.2 Jun 25 18:31:39.907865 kernel: hv_utils: Heartbeat IC version 3.0 Jun 25 18:31:39.907876 kernel: hv_utils: TimeSync IC version 4.0 Jun 25 18:31:39.907885 kernel: hv_pci 308caed0-8c94-4816-b09f-ae572c0d0361: PCI host bridge to bus 8c94:00 Jun 25 18:31:39.907999 kernel: pci_bus 8c94:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jun 25 18:31:39.908102 kernel: pci_bus 8c94:00: No busn resource found for root bus, will use [bus 00-ff] Jun 25 18:31:39.908199 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jun 25 18:31:39.911466 kernel: pci 8c94:00:02.0: [15b3:1018] type 00 class 0x020000 Jun 25 18:31:39.911655 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 25 18:31:39.911667 kernel: pci 8c94:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 18:31:39.911758 kernel: pci 8c94:00:02.0: enabling Extended Tags Jun 25 18:31:39.911844 kernel: pci 8c94:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 8c94:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jun 25 18:31:39.911930 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jun 25 18:31:39.929152 kernel: pci_bus 8c94:00: busn_res: [bus 00-ff] end is updated to 00 Jun 25 18:31:39.929305 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jun 25 18:31:39.929405 kernel: pci 8c94:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jun 25 18:31:39.929504 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 25 18:31:39.929587 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jun 25 18:31:39.929678 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jun 25 18:31:39.929764 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jun 25 18:31:39.929850 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:31:39.929860 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 25 18:31:39.707051 
systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:31:39.815746 systemd-resolved[321]: Clock change detected. Flushing caches. Jun 25 18:31:39.861525 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:31:39.981323 kernel: mlx5_core 8c94:00:02.0: enabling device (0000 -> 0002) Jun 25 18:31:40.199880 kernel: mlx5_core 8c94:00:02.0: firmware version: 16.30.1284 Jun 25 18:31:40.200010 kernel: hv_netvsc 002248bc-bea0-0022-48bc-bea0002248bc eth0: VF registering: eth1 Jun 25 18:31:40.200106 kernel: mlx5_core 8c94:00:02.0 eth1: joined to eth0 Jun 25 18:31:40.200227 kernel: mlx5_core 8c94:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jun 25 18:31:40.208200 kernel: mlx5_core 8c94:00:02.0 enP35988s1: renamed from eth1 Jun 25 18:31:40.574058 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jun 25 18:31:40.666815 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (482) Jun 25 18:31:40.681476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 18:31:40.754683 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jun 25 18:31:40.776314 kernel: BTRFS: device fsid c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (489) Jun 25 18:31:40.787569 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jun 25 18:31:40.794799 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jun 25 18:31:40.825486 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:31:40.847198 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:31:40.856202 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:31:40.864200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:31:41.865260 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 25 18:31:41.865788 disk-uuid[599]: The operation has completed successfully. Jun 25 18:31:41.931080 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:31:41.933262 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:31:41.972333 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:31:41.985799 sh[712]: Success Jun 25 18:31:42.018234 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 18:31:42.218644 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:31:42.236310 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:31:42.246953 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:31:42.278113 kernel: BTRFS info (device dm-0): first mount of filesystem c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 Jun 25 18:31:42.278187 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:31:42.285065 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:31:42.290191 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:31:42.294223 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:31:42.663749 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
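verity-setup.service above assembles /dev/mapper/usr from the USR-A partition and the verity.usrhash value on the kernel command line; the kernel reports sha256 running on the sha256-ce implementation. dm-verity authenticates the device through a tree of block hashes whose root must equal that usrhash. The sketch below is a simplified illustration of that idea only; the real on-disk dm-verity format adds a superblock, a salt and a fixed level layout, so it will not reproduce an actual usrhash.

```python
import hashlib

BLOCK_SIZE = 4096  # dm-verity's default data/hash block size

def hash_blocks(blocks, salt=b""):
    """SHA-256 of each block; concatenating these digests forms the next tree level."""
    return [hashlib.sha256(salt + blk).digest() for blk in blocks]

def toy_root_hash(data: bytes, salt: bytes = b"") -> bytes:
    """Toy Merkle-style root over the data: hash levels until one digest remains."""
    level = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)] or [b""]
    digests = hash_blocks(level, salt)
    while len(digests) > 1:
        packed = b"".join(digests)
        level = [packed[i:i + BLOCK_SIZE] for i in range(0, len(packed), BLOCK_SIZE)]
        digests = hash_blocks(level, salt)
    return digests[0]

print(toy_root_hash(b"example payload").hex())
```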
Jun 25 18:31:42.669319 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:31:42.688461 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:31:42.712838 kernel: BTRFS info (device sda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:31:42.712899 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:31:42.717402 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:31:42.715396 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:31:42.753250 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:31:42.764846 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:31:42.777569 kernel: BTRFS info (device sda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:31:42.786113 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:31:42.802709 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:31:42.851753 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:31:42.872344 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:31:42.899879 systemd-networkd[896]: lo: Link UP Jun 25 18:31:42.899891 systemd-networkd[896]: lo: Gained carrier Jun 25 18:31:42.901474 systemd-networkd[896]: Enumeration completed Jun 25 18:31:42.902065 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:31:42.902068 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:31:42.907130 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:31:42.913325 systemd[1]: Reached target network.target - Network. Jun 25 18:31:42.991198 kernel: mlx5_core 8c94:00:02.0 enP35988s1: Link up Jun 25 18:31:43.033212 kernel: hv_netvsc 002248bc-bea0-0022-48bc-bea0002248bc eth0: Data path switched to VF: enP35988s1 Jun 25 18:31:43.033629 systemd-networkd[896]: enP35988s1: Link UP Jun 25 18:31:43.033710 systemd-networkd[896]: eth0: Link UP Jun 25 18:31:43.033819 systemd-networkd[896]: eth0: Gained carrier Jun 25 18:31:43.033828 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:31:43.042406 systemd-networkd[896]: enP35988s1: Gained carrier Jun 25 18:31:43.065213 systemd-networkd[896]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 18:31:44.039883 ignition[839]: Ignition 2.19.0 Jun 25 18:31:44.039897 ignition[839]: Stage: fetch-offline Jun 25 18:31:44.042811 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:31:44.039934 ignition[839]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:44.039942 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:44.040040 ignition[839]: parsed url from cmdline: "" Jun 25 18:31:44.068345 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 25 18:31:44.040043 ignition[839]: no config URL provided Jun 25 18:31:44.040050 ignition[839]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:31:44.040057 ignition[839]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:31:44.040063 ignition[839]: failed to fetch config: resource requires networking Jun 25 18:31:44.040256 ignition[839]: Ignition finished successfully Jun 25 18:31:44.087632 ignition[906]: Ignition 2.19.0 Jun 25 18:31:44.087639 ignition[906]: Stage: fetch Jun 25 18:31:44.087898 ignition[906]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:44.087909 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:44.088012 ignition[906]: parsed url from cmdline: "" Jun 25 18:31:44.088016 ignition[906]: no config URL provided Jun 25 18:31:44.088028 ignition[906]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:31:44.088040 ignition[906]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:31:44.088063 ignition[906]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jun 25 18:31:44.194604 ignition[906]: GET result: OK Jun 25 18:31:44.194676 ignition[906]: config has been read from IMDS userdata Jun 25 18:31:44.194719 ignition[906]: parsing config with SHA512: 03dcb8511c4f8c8ac308520cdfa1ddc6ee4eee22ce4e14f1696bea944425f83b53e5138013990089fbd41669bfbd1fe4dba3589eaba70d5a50518880280fcc92 Jun 25 18:31:44.198446 unknown[906]: fetched base config from "system" Jun 25 18:31:44.198822 ignition[906]: fetch: fetch complete Jun 25 18:31:44.198453 unknown[906]: fetched base config from "system" Jun 25 18:31:44.198827 ignition[906]: fetch: fetch passed Jun 25 18:31:44.198458 unknown[906]: fetched user config from "azure" Jun 25 18:31:44.198867 ignition[906]: Ignition finished successfully Jun 25 18:31:44.204106 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:31:44.227325 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:31:44.228328 systemd-networkd[896]: enP35988s1: Gained IPv6LL Jun 25 18:31:44.255672 ignition[913]: Ignition 2.19.0 Jun 25 18:31:44.255679 ignition[913]: Stage: kargs Jun 25 18:31:44.265771 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:31:44.255907 ignition[913]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:44.255917 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:44.261574 ignition[913]: kargs: kargs passed Jun 25 18:31:44.289506 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 25 18:31:44.261630 ignition[913]: Ignition finished successfully Jun 25 18:31:44.292638 systemd-networkd[896]: eth0: Gained IPv6LL Jun 25 18:31:44.315116 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:31:44.310917 ignition[920]: Ignition 2.19.0 Jun 25 18:31:44.321499 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:31:44.310925 ignition[920]: Stage: disks Jun 25 18:31:44.332268 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:31:44.311260 ignition[920]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:44.351055 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:31:44.311275 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:44.362906 systemd[1]: Reached target sysinit.target - System Initialization. 
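The fetch stage above pulls the Ignition config from the Azure IMDS userData endpoint logged in the GET line. A minimal sketch of the same request: the "Metadata: true" header is required by IMDS, and user data is normally returned base64-encoded; retries, TLS handling and validation done by Ignition itself are omitted here.

    # Sketch: fetch instance userData from the IMDS endpoint seen in the log above.
    import base64
    import urllib.request

    IMDS_URL = ("http://169.254.169.254/metadata/instance/compute/userData"
                "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(IMDS_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        raw = resp.read()

    config = base64.b64decode(raw)           # typically the Ignition JSON, if userData was set
    print(config.decode("utf-8", "replace"))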
Jun 25 18:31:44.312374 ignition[920]: disks: disks passed Jun 25 18:31:44.372043 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:31:44.312431 ignition[920]: Ignition finished successfully Jun 25 18:31:44.401447 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:31:44.497708 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jun 25 18:31:44.507395 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:31:44.526435 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:31:44.584211 kernel: EXT4-fs (sda9): mounted filesystem 91548e21-ce72-437e-94b9-d3fed380163a r/w with ordered data mode. Quota mode: none. Jun 25 18:31:44.584524 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:31:44.589719 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:31:44.642259 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:31:44.652305 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:31:44.663515 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 25 18:31:44.672651 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:31:44.672693 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:31:44.685382 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:31:44.719196 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940) Jun 25 18:31:44.719240 kernel: BTRFS info (device sda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:31:44.731326 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:31:44.731923 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:31:44.747998 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:31:44.755224 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:31:44.756769 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:31:45.326830 coreos-metadata[942]: Jun 25 18:31:45.326 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:31:45.337982 coreos-metadata[942]: Jun 25 18:31:45.337 INFO Fetch successful Jun 25 18:31:45.337982 coreos-metadata[942]: Jun 25 18:31:45.337 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:31:45.358043 coreos-metadata[942]: Jun 25 18:31:45.358 INFO Fetch successful Jun 25 18:31:45.397256 coreos-metadata[942]: Jun 25 18:31:45.397 INFO wrote hostname ci-4012.0.0-a-71b05979e1 to /sysroot/etc/hostname Jun 25 18:31:45.407384 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:31:45.809917 initrd-setup-root[970]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:31:45.855387 initrd-setup-root[977]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:31:45.861925 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:31:45.868418 initrd-setup-root[991]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:31:47.095279 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
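flatcar-metadata-hostname above fetches the VM name from the IMDS compute/name endpoint and writes it to /sysroot/etc/hostname. A simplified sketch of that step, with error handling and the agent's own plumbing omitted (this is not the actual agent code):

    # Sketch of the hostname step performed by flatcar-metadata-hostname above.
    import urllib.request

    NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
                "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        hostname = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:   # initrd view of the future root fs
        f.write(hostname + "\n")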
Jun 25 18:31:47.110362 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:31:47.119397 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:31:47.141915 kernel: BTRFS info (device sda6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:31:47.137287 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:31:47.170062 ignition[1058]: INFO : Ignition 2.19.0 Jun 25 18:31:47.176302 ignition[1058]: INFO : Stage: mount Jun 25 18:31:47.176302 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:47.176302 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:47.176302 ignition[1058]: INFO : mount: mount passed Jun 25 18:31:47.176302 ignition[1058]: INFO : Ignition finished successfully Jun 25 18:31:47.174519 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:31:47.182664 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:31:47.205415 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:31:47.221394 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:31:47.264197 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1072) Jun 25 18:31:47.264243 kernel: BTRFS info (device sda6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:31:47.270163 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:31:47.274439 kernel: BTRFS info (device sda6): using free space tree Jun 25 18:31:47.281190 kernel: BTRFS info (device sda6): auto enabling async discard Jun 25 18:31:47.282727 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 18:31:47.307941 ignition[1090]: INFO : Ignition 2.19.0 Jun 25 18:31:47.307941 ignition[1090]: INFO : Stage: files Jun 25 18:31:47.315838 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:47.315838 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:47.315838 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:31:47.315838 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:31:47.315838 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:31:47.413240 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:31:47.420806 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:31:47.428283 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:31:47.427833 unknown[1090]: wrote ssh authorized keys file for user: core Jun 25 18:31:47.498491 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 18:31:47.509717 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 18:31:47.610901 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 18:31:47.804986 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 18:31:47.804986 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing 
file "/sysroot/home/core/install.sh" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:31:47.827495 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Jun 25 18:31:48.130893 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 18:31:48.334282 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:31:48.334282 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 18:31:48.354281 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:31:48.366249 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:31:48.366249 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 18:31:48.366249 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:31:48.366249 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:31:48.366249 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:31:48.366249 ignition[1090]: 
INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:31:48.366249 ignition[1090]: INFO : files: files passed Jun 25 18:31:48.366249 ignition[1090]: INFO : Ignition finished successfully Jun 25 18:31:48.366480 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:31:48.399041 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:31:48.413411 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:31:48.440166 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:31:48.494243 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:31:48.494243 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:31:48.440362 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:31:48.526756 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:31:48.469010 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:31:48.476713 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:31:48.503501 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:31:48.551576 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:31:48.551721 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:31:48.562060 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:31:48.572646 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:31:48.584544 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:31:48.603464 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:31:48.623439 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:31:48.644458 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:31:48.662637 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:31:48.669489 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:31:48.681769 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:31:48.693018 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:31:48.693218 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:31:48.709914 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:31:48.721633 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:31:48.731652 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:31:48.742192 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:31:48.754387 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:31:48.767260 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:31:48.778994 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:31:48.791378 systemd[1]: Stopped target sysinit.target - System Initialization. 
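The files stage logged above downloads the helm tarball, writes several manifests, links the kubernetes sysext image into /etc/extensions, and enables prepare-helm.service. A config of roughly the following shape would produce those entries; this is an Ignition v3-style sketch with the spec version assumed and the unit text elided, not the real config that was fetched from userData:

    # Sketch: an Ignition-style config (v3 schema assumed) matching the files-stage
    # operations above: helm download, kubernetes sysext link, prepare-helm.service preset.
    import json

    config = {
        "ignition": {"version": "3.3.0"},    # assumed spec version
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw",
            }],
        },
        "systemd": {
            "units": [{
                "name": "prepare-helm.service",
                "enabled": True,
                # "contents": unit text elided in this sketch
            }],
        },
    }

    print(json.dumps(config, indent=2))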
Jun 25 18:31:48.803458 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:31:48.814150 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:31:48.824162 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:31:48.824350 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:31:48.839366 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:31:48.846132 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:31:48.858166 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:31:48.863268 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:31:48.870293 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:31:48.870457 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:31:48.886633 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:31:48.886809 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:31:48.900918 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:31:48.901077 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:31:48.910945 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 25 18:31:48.911090 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 25 18:31:48.977816 ignition[1142]: INFO : Ignition 2.19.0 Jun 25 18:31:48.977816 ignition[1142]: INFO : Stage: umount Jun 25 18:31:48.977816 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:31:48.977816 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 25 18:31:48.977816 ignition[1142]: INFO : umount: umount passed Jun 25 18:31:48.977816 ignition[1142]: INFO : Ignition finished successfully Jun 25 18:31:48.942324 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:31:48.958072 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:31:48.970672 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:31:48.970871 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:31:48.982812 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:31:48.982934 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:31:48.999949 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:31:49.000322 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:31:49.012355 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:31:49.012467 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:31:49.022334 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:31:49.022396 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:31:49.036548 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:31:49.036614 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 18:31:49.043143 systemd[1]: Stopped target network.target - Network. Jun 25 18:31:49.052923 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:31:49.052988 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jun 25 18:31:49.059862 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:31:49.065009 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:31:49.074940 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:31:49.081572 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 18:31:49.093326 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:31:49.103958 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:31:49.104019 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:31:49.115783 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:31:49.115838 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:31:49.126277 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:31:49.126331 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:31:49.137308 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:31:49.137349 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:31:49.373301 kernel: hv_netvsc 002248bc-bea0-0022-48bc-bea0002248bc eth0: Data path switched from VF: enP35988s1 Jun 25 18:31:49.150755 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:31:49.162132 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:31:49.180779 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:31:49.181417 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:31:49.181512 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:31:49.191818 systemd-networkd[896]: eth0: DHCPv6 lease lost Jun 25 18:31:49.195100 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:31:49.195214 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:31:49.207201 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:31:49.207264 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:31:49.237401 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:31:49.246927 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:31:49.247020 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:31:49.259146 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:31:49.276126 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:31:49.276263 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:31:49.303160 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:31:49.303281 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:31:49.314047 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:31:49.314120 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:31:49.325116 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:31:49.325181 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:31:49.337112 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:31:49.337276 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jun 25 18:31:49.357426 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:31:49.357513 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:31:49.368103 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:31:49.368152 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:31:49.379020 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:31:49.379080 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:31:49.396143 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:31:49.396222 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:31:49.414562 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:31:49.414641 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:31:49.449479 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:31:49.462473 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:31:49.462549 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:31:49.476023 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:31:49.476090 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:31:49.488108 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:31:49.488157 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:31:49.501850 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:31:49.501901 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:31:49.520646 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:31:49.520779 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:31:49.532026 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:31:49.532119 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:31:49.634922 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:31:49.635091 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:31:49.644511 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:31:49.657610 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:31:49.657695 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:31:49.694495 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:31:49.713598 systemd[1]: Switching root. Jun 25 18:31:49.786515 systemd-journald[217]: Journal stopped Jun 25 18:31:55.902772 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). 
Jun 25 18:31:55.902797 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:31:55.902808 kernel: SELinux: policy capability open_perms=1 Jun 25 18:31:55.902818 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:31:55.902825 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:31:55.902835 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:31:55.902843 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:31:55.902852 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:31:55.902860 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:31:55.902868 kernel: audit: type=1403 audit(1719340311.129:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:31:55.902878 systemd[1]: Successfully loaded SELinux policy in 233.919ms. Jun 25 18:31:55.902888 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.210ms. Jun 25 18:31:55.902898 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:31:55.902907 systemd[1]: Detected virtualization microsoft. Jun 25 18:31:55.902918 systemd[1]: Detected architecture arm64. Jun 25 18:31:55.902927 systemd[1]: Detected first boot. Jun 25 18:31:55.902937 systemd[1]: Hostname set to . Jun 25 18:31:55.902946 systemd[1]: Initializing machine ID from random generator. Jun 25 18:31:55.902955 zram_generator::config[1183]: No configuration found. Jun 25 18:31:55.902965 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:31:55.902974 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 18:31:55.902985 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 18:31:55.902994 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 18:31:55.903004 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:31:55.903014 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:31:55.903024 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:31:55.903034 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:31:55.903043 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:31:55.903054 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:31:55.903064 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:31:55.903073 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:31:55.903082 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:31:55.903092 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:31:55.903101 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:31:55.903110 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:31:55.903120 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jun 25 18:31:55.903131 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:31:55.903140 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jun 25 18:31:55.903149 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:31:55.903159 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 18:31:55.903229 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 25 18:31:55.903244 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 18:31:55.903258 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:31:55.903271 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:31:55.903286 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:31:55.903299 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:31:55.903311 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:31:55.903323 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:31:55.903335 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:31:55.903349 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:31:55.903361 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:31:55.903375 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:31:55.903387 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:31:55.903399 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:31:55.903411 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:31:55.903422 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:31:55.903434 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:31:55.903446 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:31:55.903456 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:31:55.903468 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:31:55.903480 systemd[1]: Reached target machines.target - Containers. Jun 25 18:31:55.903492 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:31:55.903504 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:31:55.903516 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:31:55.903529 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:31:55.903542 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:31:55.903554 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:31:55.903566 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:31:55.903576 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:31:55.903586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jun 25 18:31:55.903597 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:31:55.903607 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 18:31:55.903617 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 18:31:55.903629 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 18:31:55.903639 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 18:31:55.903648 kernel: loop: module loaded Jun 25 18:31:55.903657 kernel: fuse: init (API version 7.39) Jun 25 18:31:55.903666 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:31:55.903675 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:31:55.903685 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:31:55.903694 kernel: ACPI: bus type drm_connector registered Jun 25 18:31:55.903723 systemd-journald[1278]: Collecting audit messages is disabled. Jun 25 18:31:55.903749 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:31:55.903760 systemd-journald[1278]: Journal started Jun 25 18:31:55.903781 systemd-journald[1278]: Runtime Journal (/run/log/journal/deec131bc21c435483596793537d3ba4) is 8.0M, max 78.6M, 70.6M free. Jun 25 18:31:54.884383 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:31:55.059999 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 25 18:31:55.060488 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 18:31:55.060821 systemd[1]: systemd-journald.service: Consumed 3.127s CPU time. Jun 25 18:31:55.940735 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:31:55.950206 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 18:31:55.950293 systemd[1]: Stopped verity-setup.service. Jun 25 18:31:55.968198 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:31:55.967759 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:31:55.973567 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:31:55.979687 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:31:55.985078 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:31:55.991561 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:31:55.997849 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:31:56.003508 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:31:56.010311 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:31:56.017785 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:31:56.017943 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:31:56.024638 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:31:56.024794 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:31:56.031794 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:31:56.031941 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:31:56.038047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 25 18:31:56.038448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:31:56.045858 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:31:56.046004 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:31:56.052249 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:31:56.052386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:31:56.059639 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:31:56.066672 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:31:56.073968 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:31:56.081387 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:31:56.099063 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:31:56.109276 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:31:56.121424 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:31:56.127600 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:31:56.127644 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:31:56.134504 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:31:56.142597 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:31:56.150200 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:31:56.155878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:31:56.286357 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:31:56.293262 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:31:56.299684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:31:56.300810 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:31:56.306718 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:31:56.309415 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:31:56.318141 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:31:56.328427 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:31:56.336546 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:31:56.344838 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:31:56.355879 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:31:56.363876 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:31:56.371230 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jun 25 18:31:56.393241 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:31:56.404416 kernel: loop0: detected capacity change from 0 to 193208 Jun 25 18:31:56.404495 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:31:56.410532 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:31:56.418516 systemd-journald[1278]: Time spent on flushing to /var/log/journal/deec131bc21c435483596793537d3ba4 is 29.532ms for 905 entries. Jun 25 18:31:56.418516 systemd-journald[1278]: System Journal (/var/log/journal/deec131bc21c435483596793537d3ba4) is 8.0M, max 2.6G, 2.6G free. Jun 25 18:31:56.464060 systemd-journald[1278]: Received client request to flush runtime journal. Jun 25 18:31:56.424515 udevadm[1319]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 18:31:56.455317 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:31:56.455949 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Jun 25 18:31:56.455960 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Jun 25 18:31:56.463050 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:31:56.471973 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:31:56.486430 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:31:56.496408 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:31:56.546468 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:31:56.547268 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:31:56.607224 kernel: loop1: detected capacity change from 0 to 59688 Jun 25 18:31:56.738263 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:31:56.753356 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:31:56.769617 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Jun 25 18:31:56.769635 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Jun 25 18:31:56.774106 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:31:57.007192 kernel: loop2: detected capacity change from 0 to 62152 Jun 25 18:31:57.427197 kernel: loop3: detected capacity change from 0 to 113712 Jun 25 18:31:57.729548 kernel: loop4: detected capacity change from 0 to 193208 Jun 25 18:31:57.736068 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:31:57.754222 kernel: loop5: detected capacity change from 0 to 59688 Jun 25 18:31:57.755406 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:31:57.775409 kernel: loop6: detected capacity change from 0 to 62152 Jun 25 18:31:57.775692 systemd-udevd[1345]: Using default interface naming scheme 'v255'. Jun 25 18:31:57.785200 kernel: loop7: detected capacity change from 0 to 113712 Jun 25 18:31:57.788797 (sd-merge)[1343]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 25 18:31:57.789321 (sd-merge)[1343]: Merged extensions into '/usr'. Jun 25 18:31:57.795005 systemd[1]: Reloading requested from client PID 1316 ('systemd-sysext') (unit systemd-sysext.service)... 
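The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-azure extension images onto /usr. A sysext is essentially a /usr tree plus an extension-release file; the sketch below builds a minimal directory-based extension, where the name "hello", the /run/extensions location and the release fields are illustrative assumptions, not taken from the log:

    # Sketch: lay out a minimal directory-based sysext named "hello".
    # systemd-sysext expects /usr/lib/extension-release.d/extension-release.<name>
    # inside the image; ID=_any skips OS matching (values here are examples only).
    from pathlib import Path

    root = Path("/run/extensions/hello")                 # hypothetical location
    rel = root / "usr/lib/extension-release.d/extension-release.hello"
    rel.parent.mkdir(parents=True, exist_ok=True)
    rel.write_text("ID=_any\n")

    bin_path = root / "usr/bin/hello"
    bin_path.parent.mkdir(parents=True, exist_ok=True)
    bin_path.write_text("#!/bin/sh\necho hello from a sysext\n")
    bin_path.chmod(0o755)
    # then merge it onto /usr with: systemd-sysext refresh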
Jun 25 18:31:57.795035 systemd[1]: Reloading... Jun 25 18:31:57.856232 zram_generator::config[1369]: No configuration found. Jun 25 18:31:58.037920 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:31:58.050311 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1434) Jun 25 18:31:58.126790 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jun 25 18:31:58.127200 systemd[1]: Reloading finished in 331 ms. Jun 25 18:31:58.142737 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 18:31:58.142838 kernel: hv_vmbus: registering driver hv_balloon Jun 25 18:31:58.142854 kernel: hv_vmbus: registering driver hyperv_fb Jun 25 18:31:58.143389 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 25 18:31:58.158135 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 25 18:31:58.158270 kernel: hv_balloon: Memory hot add disabled on ARM64 Jun 25 18:31:58.166257 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 25 18:31:58.171562 kernel: Console: switching to colour dummy device 80x25 Jun 25 18:31:58.181100 kernel: Console: switching to colour frame buffer device 128x48 Jun 25 18:31:58.185279 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:31:58.197535 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:31:58.229404 systemd[1]: Starting ensure-sysext.service... Jun 25 18:31:58.239431 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:31:58.249401 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:31:58.268208 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1420) Jun 25 18:31:58.289512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:31:58.304839 systemd-tmpfiles[1473]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:31:58.305503 systemd-tmpfiles[1473]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:31:58.306456 systemd-tmpfiles[1473]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:31:58.308531 systemd-tmpfiles[1473]: ACLs are not supported, ignoring. Jun 25 18:31:58.308591 systemd-tmpfiles[1473]: ACLs are not supported, ignoring. Jun 25 18:31:58.313189 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:31:58.319806 systemd-tmpfiles[1473]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:31:58.320790 systemd-tmpfiles[1473]: Skipping /boot Jun 25 18:31:58.324947 systemd[1]: Reloading requested from client PID 1467 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:31:58.324967 systemd[1]: Reloading... Jun 25 18:31:58.340603 systemd-tmpfiles[1473]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:31:58.343853 systemd-tmpfiles[1473]: Skipping /boot Jun 25 18:31:58.412206 zram_generator::config[1521]: No configuration found. 
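The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") refer to tmpfiles.d entries, which are one-line records of the form "Type Path Mode User Group Age Argument"; when two config lines target the same path, the later one is ignored. A small illustrative drop-in, with the path and mode chosen only as an example:

    # Sketch: a tmpfiles.d drop-in that creates a directory at boot.
    # Line format: Type Path Mode User Group Age Argument (values below are examples).
    from pathlib import Path

    line = "d /var/lib/example 0755 root root -\n"       # hypothetical path
    Path("/etc/tmpfiles.d/example.conf").write_text(line)
    # applied with: systemd-tmpfiles --create /etc/tmpfiles.d/example.conf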
Jun 25 18:31:58.567892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:31:58.617687 systemd-networkd[1472]: lo: Link UP Jun 25 18:31:58.617697 systemd-networkd[1472]: lo: Gained carrier Jun 25 18:31:58.622152 systemd-networkd[1472]: Enumeration completed Jun 25 18:31:58.622737 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:31:58.622827 systemd-networkd[1472]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:31:58.644091 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 25 18:31:58.651044 systemd[1]: Reloading finished in 325 ms. Jun 25 18:31:58.671100 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:31:58.673191 kernel: mlx5_core 8c94:00:02.0 enP35988s1: Link up Jun 25 18:31:58.677718 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:31:58.701531 kernel: hv_netvsc 002248bc-bea0-0022-48bc-bea0002248bc eth0: Data path switched to VF: enP35988s1 Jun 25 18:31:58.702217 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:31:58.702722 systemd-networkd[1472]: enP35988s1: Link UP Jun 25 18:31:58.703311 systemd-networkd[1472]: eth0: Link UP Jun 25 18:31:58.703315 systemd-networkd[1472]: eth0: Gained carrier Jun 25 18:31:58.703335 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:31:58.710053 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:31:58.710289 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:31:58.721643 systemd-networkd[1472]: enP35988s1: Gained carrier Jun 25 18:31:58.727225 systemd-networkd[1472]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 18:31:58.731858 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:31:58.750464 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:31:58.760512 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:31:58.767683 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:31:58.769596 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:31:58.778261 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:31:58.788284 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:31:58.800061 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:31:58.806459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:31:58.810329 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:31:58.829071 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jun 25 18:31:58.839570 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:31:58.848985 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:31:58.856455 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:31:58.873448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:31:58.883026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:31:58.883834 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:31:58.891725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:31:58.892014 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:31:58.901109 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:31:58.901791 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:31:58.909741 lvm[1598]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:31:58.911730 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:31:58.930952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:31:58.941314 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:31:58.950508 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:31:58.966551 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:31:58.986644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:31:58.994641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:31:58.994863 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:31:59.002064 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:31:59.009825 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:31:59.020144 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:31:59.021772 augenrules[1633]: No rules Jun 25 18:31:59.027520 systemd-resolved[1611]: Positive Trust Anchors: Jun 25 18:31:59.027535 systemd-resolved[1611]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:31:59.027566 systemd-resolved[1611]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:31:59.028055 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:31:59.035651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:31:59.035789 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:31:59.043394 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jun 25 18:31:59.043533 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:31:59.052286 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:31:59.052421 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:31:59.059823 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:31:59.059971 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:31:59.060817 systemd-resolved[1611]: Using system hostname 'ci-4012.0.0-a-71b05979e1'. Jun 25 18:31:59.066758 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:31:59.076476 systemd[1]: Finished ensure-sysext.service. Jun 25 18:31:59.088557 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:31:59.095146 systemd[1]: Reached target network.target - Network. Jun 25 18:31:59.100302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:31:59.114355 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:31:59.118900 lvm[1647]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:31:59.120606 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:31:59.120683 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:31:59.146784 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:31:59.257371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:31:59.431416 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:31:59.439306 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:32:00.097346 systemd-networkd[1472]: enP35988s1: Gained IPv6LL Jun 25 18:32:00.289369 systemd-networkd[1472]: eth0: Gained IPv6LL Jun 25 18:32:00.291206 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:32:00.300669 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:32:03.063161 ldconfig[1311]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:32:03.078833 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:32:03.092346 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:32:03.122428 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:32:03.129142 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:32:03.135110 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:32:03.142848 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:32:03.151353 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:32:03.157805 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:32:03.164548 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jun 25 18:32:03.171649 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:32:03.171688 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:32:03.176828 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:32:03.199966 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:32:03.207583 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:32:03.220168 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:32:03.226289 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:32:03.232337 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:32:03.237746 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:32:03.243008 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:32:03.243039 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:32:03.257290 systemd[1]: Starting chronyd.service - NTP client/server... Jun 25 18:32:03.265337 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:32:03.276407 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 18:32:03.285425 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:32:03.293382 (chronyd)[1659]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 25 18:32:03.294268 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:32:03.304122 jq[1665]: false Jun 25 18:32:03.305104 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:32:03.311145 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:32:03.313391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:32:03.327433 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:32:03.335979 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:32:03.348358 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jun 25 18:32:03.349704 chronyd[1674]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 25 18:32:03.354583 extend-filesystems[1666]: Found loop4 Jun 25 18:32:03.369485 extend-filesystems[1666]: Found loop5 Jun 25 18:32:03.369485 extend-filesystems[1666]: Found loop6 Jun 25 18:32:03.369485 extend-filesystems[1666]: Found loop7 Jun 25 18:32:03.369485 extend-filesystems[1666]: Found sda Jun 25 18:32:03.369485 extend-filesystems[1666]: Found sda1 Jun 25 18:32:03.369485 extend-filesystems[1666]: Found sda2 Jun 25 18:32:03.369485 extend-filesystems[1666]: Found sda3 Jun 25 18:32:03.369485 extend-filesystems[1666]: Found usr Jun 25 18:32:03.369485 extend-filesystems[1666]: Found sda4 Jun 25 18:32:03.369485 extend-filesystems[1666]: Found sda6 Jun 25 18:32:03.369485 extend-filesystems[1666]: Found sda7 Jun 25 18:32:03.369485 extend-filesystems[1666]: Found sda9 Jun 25 18:32:03.369485 extend-filesystems[1666]: Checking size of /dev/sda9 Jun 25 18:32:03.361536 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:32:03.387105 chronyd[1674]: Timezone right/UTC failed leap second check, ignoring Jun 25 18:32:03.568036 extend-filesystems[1666]: Old size kept for /dev/sda9 Jun 25 18:32:03.568036 extend-filesystems[1666]: Found sr0 Jun 25 18:32:03.377028 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:32:03.387295 chronyd[1674]: Loaded seccomp filter (level 2) Jun 25 18:32:03.390488 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:32:03.532652 dbus-daemon[1662]: [system] SELinux support is enabled Jun 25 18:32:03.633735 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1708) Jun 25 18:32:03.404116 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:32:03.404716 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:32:03.410500 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:32:03.634144 update_engine[1688]: I0625 18:32:03.492822 1688 main.cc:92] Flatcar Update Engine starting Jun 25 18:32:03.634144 update_engine[1688]: I0625 18:32:03.551784 1688 update_check_scheduler.cc:74] Next update check in 5m28s Jun 25 18:32:03.423883 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:32:03.635220 jq[1693]: true Jun 25 18:32:03.434230 systemd[1]: Started chronyd.service - NTP client/server. Jun 25 18:32:03.450762 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:32:03.450942 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:32:03.451214 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:32:03.635701 jq[1706]: true Jun 25 18:32:03.452397 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:32:03.468614 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:32:03.469327 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:32:03.487496 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:32:03.504623 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
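The extend-filesystems entries above enumerate the block devices and decide that /dev/sda9 does not need to be grown ("Old size kept for /dev/sda9"). A minimal sketch of that kind of check in Python, assuming /dev/sda9 is the mounted root filesystem; the mountpoint, the slack margin, and the comparison itself are illustrative, not taken from the Flatcar script:

```python
import os

# Illustrative only: compare the partition size with the size of the filesystem
# mounted from it, and report whether growing would gain anything.
DEVICE = "sda9"      # device name as listed by extend-filesystems above
MOUNTPOINT = "/"     # assumption: /dev/sda9 carries the ROOT filesystem

with open(f"/sys/class/block/{DEVICE}/size") as f:
    partition_bytes = int(f.read()) * 512          # /sys reports 512-byte sectors

st = os.statvfs(MOUNTPOINT)
filesystem_bytes = st.f_frsize * st.f_blocks

if filesystem_bytes >= partition_bytes * 0.99:     # allow ~1% metadata slack
    print(f"Old size kept for /dev/{DEVICE}")
else:
    print(f"/dev/{DEVICE}: filesystem could grow by "
          f"{(partition_bytes - filesystem_bytes) // 2**20} MiB")
```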
Jun 25 18:32:03.504877 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 18:32:03.533355 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:32:03.553368 (ntainerd)[1711]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:32:03.555425 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:32:03.555464 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:32:03.579821 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:32:03.579844 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:32:03.590577 systemd-logind[1685]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 18:32:03.600581 systemd-logind[1685]: New seat seat0. Jun 25 18:32:03.615965 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:32:03.653295 tar[1705]: linux-arm64/helm Jun 25 18:32:03.653616 coreos-metadata[1661]: Jun 25 18:32:03.651 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 25 18:32:03.646000 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:32:03.658772 coreos-metadata[1661]: Jun 25 18:32:03.656 INFO Fetch successful Jun 25 18:32:03.658772 coreos-metadata[1661]: Jun 25 18:32:03.656 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 25 18:32:03.664661 coreos-metadata[1661]: Jun 25 18:32:03.664 INFO Fetch successful Jun 25 18:32:03.664661 coreos-metadata[1661]: Jun 25 18:32:03.664 INFO Fetching http://168.63.129.16/machine/76919d02-5175-40b4-a99c-032c862557d0/83a82a10%2D45b0%2D44c5%2D89a6%2D763e6aa6a780.%5Fci%2D4012.0.0%2Da%2D71b05979e1?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 25 18:32:03.666796 coreos-metadata[1661]: Jun 25 18:32:03.666 INFO Fetch successful Jun 25 18:32:03.666796 coreos-metadata[1661]: Jun 25 18:32:03.666 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 25 18:32:03.682258 coreos-metadata[1661]: Jun 25 18:32:03.682 INFO Fetch successful Jun 25 18:32:03.713737 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:32:03.770618 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 18:32:03.787548 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:32:03.810027 bash[1753]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:32:03.811602 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:32:03.828951 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
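coreos-metadata above pulls the VM size from the Azure Instance Metadata Service at 169.254.169.254 and the goal state from the wire server at 168.63.129.16. A small sketch of the IMDS call, reusing the exact URL from the log; the `Metadata: true` header is the standard IMDS requirement, everything else here is illustrative:

```python
import urllib.request

# Same endpoint coreos-metadata fetches above; IMDS only answers requests that
# carry the "Metadata: true" header and originate from the VM itself.
URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())   # VM size string (the actual value is not in the log)
```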
Jun 25 18:32:04.042282 locksmithd[1745]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:32:04.432590 containerd[1711]: time="2024-06-25T18:32:04.432489880Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:32:04.444385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:32:04.458578 (kubelet)[1795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:32:04.483971 containerd[1711]: time="2024-06-25T18:32:04.483910040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:32:04.483971 containerd[1711]: time="2024-06-25T18:32:04.483973080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:32:04.487957 containerd[1711]: time="2024-06-25T18:32:04.487890320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:32:04.487957 containerd[1711]: time="2024-06-25T18:32:04.487942680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:32:04.489535 containerd[1711]: time="2024-06-25T18:32:04.488249000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:32:04.489535 containerd[1711]: time="2024-06-25T18:32:04.488275520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:32:04.489535 containerd[1711]: time="2024-06-25T18:32:04.488363160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:32:04.489535 containerd[1711]: time="2024-06-25T18:32:04.488410440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:32:04.489535 containerd[1711]: time="2024-06-25T18:32:04.488423160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:32:04.489535 containerd[1711]: time="2024-06-25T18:32:04.488479440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:32:04.489535 containerd[1711]: time="2024-06-25T18:32:04.488673040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:32:04.489535 containerd[1711]: time="2024-06-25T18:32:04.488692880Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:32:04.489535 containerd[1711]: time="2024-06-25T18:32:04.488703040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:32:04.489535 containerd[1711]: time="2024-06-25T18:32:04.488810800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:32:04.489535 containerd[1711]: time="2024-06-25T18:32:04.488824920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:32:04.489805 containerd[1711]: time="2024-06-25T18:32:04.488874040Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:32:04.489805 containerd[1711]: time="2024-06-25T18:32:04.488884120Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:32:04.496830 tar[1705]: linux-arm64/LICENSE Jun 25 18:32:04.496830 tar[1705]: linux-arm64/README.md Jun 25 18:32:04.512930 containerd[1711]: time="2024-06-25T18:32:04.512794360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:32:04.512930 containerd[1711]: time="2024-06-25T18:32:04.512841960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:32:04.512930 containerd[1711]: time="2024-06-25T18:32:04.512857240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:32:04.512930 containerd[1711]: time="2024-06-25T18:32:04.512894280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:32:04.512930 containerd[1711]: time="2024-06-25T18:32:04.512913720Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:32:04.512930 containerd[1711]: time="2024-06-25T18:32:04.512924560Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:32:04.512930 containerd[1711]: time="2024-06-25T18:32:04.512938000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:32:04.513141 containerd[1711]: time="2024-06-25T18:32:04.513088240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:32:04.513141 containerd[1711]: time="2024-06-25T18:32:04.513105240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:32:04.513141 containerd[1711]: time="2024-06-25T18:32:04.513118880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:32:04.513141 containerd[1711]: time="2024-06-25T18:32:04.513132800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:32:04.513243 containerd[1711]: time="2024-06-25T18:32:04.513146400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:32:04.513243 containerd[1711]: time="2024-06-25T18:32:04.513163320Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:32:04.513243 containerd[1711]: time="2024-06-25T18:32:04.513198480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:32:04.513243 containerd[1711]: time="2024-06-25T18:32:04.513219160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jun 25 18:32:04.513243 containerd[1711]: time="2024-06-25T18:32:04.513233440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 18:32:04.513326 containerd[1711]: time="2024-06-25T18:32:04.513246520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:32:04.513326 containerd[1711]: time="2024-06-25T18:32:04.513257920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:32:04.513326 containerd[1711]: time="2024-06-25T18:32:04.513270480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:32:04.513373 containerd[1711]: time="2024-06-25T18:32:04.513365560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:32:04.513697 containerd[1711]: time="2024-06-25T18:32:04.513628840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:32:04.513697 containerd[1711]: time="2024-06-25T18:32:04.513662080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.513697 containerd[1711]: time="2024-06-25T18:32:04.513676840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:32:04.513802 containerd[1711]: time="2024-06-25T18:32:04.513699920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:32:04.513802 containerd[1711]: time="2024-06-25T18:32:04.513748040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.513802 containerd[1711]: time="2024-06-25T18:32:04.513761560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.513802 containerd[1711]: time="2024-06-25T18:32:04.513773440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.513802 containerd[1711]: time="2024-06-25T18:32:04.513784760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.513802 containerd[1711]: time="2024-06-25T18:32:04.513797520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.513943 containerd[1711]: time="2024-06-25T18:32:04.513811800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.513943 containerd[1711]: time="2024-06-25T18:32:04.513824920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.513943 containerd[1711]: time="2024-06-25T18:32:04.513836840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.513943 containerd[1711]: time="2024-06-25T18:32:04.513849640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:32:04.514037 containerd[1711]: time="2024-06-25T18:32:04.513985600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jun 25 18:32:04.514037 containerd[1711]: time="2024-06-25T18:32:04.514006160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.514037 containerd[1711]: time="2024-06-25T18:32:04.514019920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.514037 containerd[1711]: time="2024-06-25T18:32:04.514033120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.514804 containerd[1711]: time="2024-06-25T18:32:04.514046640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.514804 containerd[1711]: time="2024-06-25T18:32:04.514060440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.514804 containerd[1711]: time="2024-06-25T18:32:04.514076080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.514804 containerd[1711]: time="2024-06-25T18:32:04.514087160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 18:32:04.514922 containerd[1711]: time="2024-06-25T18:32:04.514355640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] 
ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:32:04.514922 containerd[1711]: time="2024-06-25T18:32:04.514414560Z" level=info msg="Connect containerd service" Jun 25 18:32:04.514922 containerd[1711]: time="2024-06-25T18:32:04.514448000Z" level=info msg="using legacy CRI server" Jun 25 18:32:04.514922 containerd[1711]: time="2024-06-25T18:32:04.514454520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:32:04.514922 containerd[1711]: time="2024-06-25T18:32:04.514535360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:32:04.515440 containerd[1711]: time="2024-06-25T18:32:04.515091640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:32:04.515440 containerd[1711]: time="2024-06-25T18:32:04.515128200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:32:04.515440 containerd[1711]: time="2024-06-25T18:32:04.515148520Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:32:04.515440 containerd[1711]: time="2024-06-25T18:32:04.515158560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:32:04.516523 containerd[1711]: time="2024-06-25T18:32:04.516454880Z" level=info msg="Start subscribing containerd event" Jun 25 18:32:04.516748 containerd[1711]: time="2024-06-25T18:32:04.516605480Z" level=info msg="Start recovering state" Jun 25 18:32:04.516748 containerd[1711]: time="2024-06-25T18:32:04.516688880Z" level=info msg="Start event monitor" Jun 25 18:32:04.516748 containerd[1711]: time="2024-06-25T18:32:04.516703320Z" level=info msg="Start snapshots syncer" Jun 25 18:32:04.516748 containerd[1711]: time="2024-06-25T18:32:04.516712680Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:32:04.516748 containerd[1711]: time="2024-06-25T18:32:04.516725720Z" level=info msg="Start streaming server" Jun 25 18:32:04.520223 containerd[1711]: time="2024-06-25T18:32:04.520183560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:32:04.520424 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:32:04.528342 containerd[1711]: time="2024-06-25T18:32:04.520438520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:32:04.528342 containerd[1711]: time="2024-06-25T18:32:04.520476280Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:32:04.528342 containerd[1711]: time="2024-06-25T18:32:04.520525240Z" level=info msg="containerd successfully booted in 0.089614s" Jun 25 18:32:04.530927 systemd[1]: Started containerd.service - containerd container runtime. 
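The CRI plugin dump above shows containerd running with the overlayfs snapshotter, `SystemdCgroup:true` for the runc runtime, and `registry.k8s.io/pause:3.8` as the sandbox image. A sketch that reads those settings back out of a containerd `config.toml`, assuming the conventional `/etc/containerd/config.toml` path and TOML key layout (the log only shows the resulting in-memory config, not the file it came from):

```python
import tomllib  # stdlib in Python 3.11, which is what this host runs

# Assumed location and structure of the containerd configuration.
with open("/etc/containerd/config.toml", "rb") as f:
    cfg = tomllib.load(f)

cri = cfg["plugins"]["io.containerd.grpc.v1.cri"]
runc_opts = cri["containerd"]["runtimes"]["runc"]["options"]

print("snapshotter:   ", cri["containerd"].get("snapshotter"))
print("sandbox image: ", cri.get("sandbox_image"))
print("SystemdCgroup: ", runc_opts.get("SystemdCgroup"))
```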
Jun 25 18:32:04.612902 sshd_keygen[1701]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:32:04.635317 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:32:04.650438 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:32:04.661551 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 25 18:32:04.683814 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:32:04.685414 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:32:04.703652 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:32:04.715474 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 25 18:32:04.726213 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:32:04.741012 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:32:04.750618 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jun 25 18:32:04.759632 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:32:04.766643 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:32:04.774150 systemd[1]: Startup finished in 686ms (kernel) + 13.056s (initrd) + 13.876s (userspace) = 27.619s. Jun 25 18:32:04.975309 kubelet[1795]: E0625 18:32:04.975092 1795 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:32:04.977965 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:32:04.978116 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:32:05.107018 login[1829]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Jun 25 18:32:05.107550 login[1828]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:32:05.115384 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:32:05.123994 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:32:05.126984 systemd-logind[1685]: New session 2 of user core. Jun 25 18:32:05.136552 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:32:05.144539 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:32:05.149242 (systemd)[1839]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:32:05.303986 systemd[1839]: Queued start job for default target default.target. Jun 25 18:32:05.316202 systemd[1839]: Created slice app.slice - User Application Slice. Jun 25 18:32:05.316713 systemd[1839]: Reached target paths.target - Paths. Jun 25 18:32:05.316804 systemd[1839]: Reached target timers.target - Timers. Jun 25 18:32:05.318308 systemd[1839]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:32:05.330439 systemd[1839]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:32:05.330569 systemd[1839]: Reached target sockets.target - Sockets. Jun 25 18:32:05.330583 systemd[1839]: Reached target basic.target - Basic System. Jun 25 18:32:05.330627 systemd[1839]: Reached target default.target - Main User Target. Jun 25 18:32:05.330655 systemd[1839]: Startup finished in 174ms. 
Jun 25 18:32:05.330757 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:32:05.342502 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:32:06.108579 login[1829]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 25 18:32:06.112937 systemd-logind[1685]: New session 1 of user core. Jun 25 18:32:06.120394 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:32:06.554266 waagent[1825]: 2024-06-25T18:32:06.550629Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 25 18:32:06.556959 waagent[1825]: 2024-06-25T18:32:06.556879Z INFO Daemon Daemon OS: flatcar 4012.0.0 Jun 25 18:32:06.561734 waagent[1825]: 2024-06-25T18:32:06.561659Z INFO Daemon Daemon Python: 3.11.9 Jun 25 18:32:06.566631 waagent[1825]: 2024-06-25T18:32:06.566222Z INFO Daemon Daemon Run daemon Jun 25 18:32:06.570376 waagent[1825]: 2024-06-25T18:32:06.570313Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4012.0.0' Jun 25 18:32:06.579244 waagent[1825]: 2024-06-25T18:32:06.579148Z INFO Daemon Daemon Using waagent for provisioning Jun 25 18:32:06.584884 waagent[1825]: 2024-06-25T18:32:06.584826Z INFO Daemon Daemon Activate resource disk Jun 25 18:32:06.589540 waagent[1825]: 2024-06-25T18:32:06.589477Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 25 18:32:06.600563 waagent[1825]: 2024-06-25T18:32:06.600489Z INFO Daemon Daemon Found device: None Jun 25 18:32:06.605141 waagent[1825]: 2024-06-25T18:32:06.605080Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 25 18:32:06.613454 waagent[1825]: 2024-06-25T18:32:06.613389Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 25 18:32:06.627533 waagent[1825]: 2024-06-25T18:32:06.627445Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 18:32:06.633954 waagent[1825]: 2024-06-25T18:32:06.633855Z INFO Daemon Daemon Running default provisioning handler Jun 25 18:32:06.646685 waagent[1825]: 2024-06-25T18:32:06.646588Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 25 18:32:06.660128 waagent[1825]: 2024-06-25T18:32:06.660052Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 25 18:32:06.669656 waagent[1825]: 2024-06-25T18:32:06.669585Z INFO Daemon Daemon cloud-init is enabled: False Jun 25 18:32:06.674540 waagent[1825]: 2024-06-25T18:32:06.674479Z INFO Daemon Daemon Copying ovf-env.xml Jun 25 18:32:06.764807 waagent[1825]: 2024-06-25T18:32:06.761646Z INFO Daemon Daemon Successfully mounted dvd Jun 25 18:32:06.793270 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 25 18:32:06.796205 waagent[1825]: 2024-06-25T18:32:06.795753Z INFO Daemon Daemon Detect protocol endpoint Jun 25 18:32:06.800830 waagent[1825]: 2024-06-25T18:32:06.800745Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 25 18:32:06.806681 waagent[1825]: 2024-06-25T18:32:06.806576Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 25 18:32:06.813153 waagent[1825]: 2024-06-25T18:32:06.813081Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 25 18:32:06.818614 waagent[1825]: 2024-06-25T18:32:06.818544Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 25 18:32:06.823765 waagent[1825]: 2024-06-25T18:32:06.823702Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 25 18:32:06.941892 waagent[1825]: 2024-06-25T18:32:06.941839Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 25 18:32:06.948590 waagent[1825]: 2024-06-25T18:32:06.948553Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 25 18:32:06.954254 waagent[1825]: 2024-06-25T18:32:06.954194Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 25 18:32:07.190524 waagent[1825]: 2024-06-25T18:32:07.190359Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 25 18:32:07.197263 waagent[1825]: 2024-06-25T18:32:07.197185Z INFO Daemon Daemon Forcing an update of the goal state. Jun 25 18:32:07.206419 waagent[1825]: 2024-06-25T18:32:07.206361Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 18:32:07.253841 waagent[1825]: 2024-06-25T18:32:07.253785Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jun 25 18:32:07.259901 waagent[1825]: 2024-06-25T18:32:07.259844Z INFO Daemon Jun 25 18:32:07.262809 waagent[1825]: 2024-06-25T18:32:07.262753Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 943f593d-a23b-47d5-b5ce-ce1cf0ac9a2c eTag: 5365616198547042325 source: Fabric] Jun 25 18:32:07.274230 waagent[1825]: 2024-06-25T18:32:07.274136Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 25 18:32:07.281738 waagent[1825]: 2024-06-25T18:32:07.281672Z INFO Daemon Jun 25 18:32:07.284642 waagent[1825]: 2024-06-25T18:32:07.284582Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 25 18:32:07.295705 waagent[1825]: 2024-06-25T18:32:07.295660Z INFO Daemon Daemon Downloading artifacts profile blob Jun 25 18:32:07.387538 waagent[1825]: 2024-06-25T18:32:07.387431Z INFO Daemon Downloaded certificate {'thumbprint': '97F2736B77573F22F33245E3DD0F58AB223391FB', 'hasPrivateKey': False} Jun 25 18:32:07.397864 waagent[1825]: 2024-06-25T18:32:07.397779Z INFO Daemon Downloaded certificate {'thumbprint': '5E4D2053049AC4C54FC04533B7F6D68A42B67364', 'hasPrivateKey': True} Jun 25 18:32:07.408281 waagent[1825]: 2024-06-25T18:32:07.408212Z INFO Daemon Fetch goal state completed Jun 25 18:32:07.420040 waagent[1825]: 2024-06-25T18:32:07.419952Z INFO Daemon Daemon Starting provisioning Jun 25 18:32:07.425127 waagent[1825]: 2024-06-25T18:32:07.424968Z INFO Daemon Daemon Handle ovf-env.xml. Jun 25 18:32:07.429835 waagent[1825]: 2024-06-25T18:32:07.429774Z INFO Daemon Daemon Set hostname [ci-4012.0.0-a-71b05979e1] Jun 25 18:32:07.457530 waagent[1825]: 2024-06-25T18:32:07.457441Z INFO Daemon Daemon Publish hostname [ci-4012.0.0-a-71b05979e1] Jun 25 18:32:07.463677 waagent[1825]: 2024-06-25T18:32:07.463601Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 25 18:32:07.470480 waagent[1825]: 2024-06-25T18:32:07.470395Z INFO Daemon Daemon Primary interface is [eth0] Jun 25 18:32:07.536645 systemd-networkd[1472]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:32:07.536652 systemd-networkd[1472]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
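The daemon above negotiates the wire protocol with the Azure wire server (168.63.129.16), settling on version 2015-04-05, and then fetches the goal state. A minimal sketch of that request, reusing the goalstate URL that appears earlier in this boot log; the `x-ms-version` header value matches the negotiated protocol version, but treat the exact header set as an assumption rather than waagent's actual implementation:

```python
import urllib.request

WIRESERVER = "168.63.129.16"   # endpoint reported by waagent above

# Goal-state URL as logged by coreos-metadata earlier in this boot.
req = urllib.request.Request(
    f"http://{WIRESERVER}/machine/?comp=goalstate",
    headers={"x-ms-version": "2015-04-05"},   # protocol version negotiated above
)
with urllib.request.urlopen(req, timeout=5) as resp:
    goal_state_xml = resp.read().decode()

# The response is XML carrying the incarnation plus container/instance IDs.
print(goal_state_xml.splitlines()[0])
```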
Jun 25 18:32:07.536682 systemd-networkd[1472]: eth0: DHCP lease lost Jun 25 18:32:07.538226 waagent[1825]: 2024-06-25T18:32:07.537796Z INFO Daemon Daemon Create user account if not exists Jun 25 18:32:07.547881 waagent[1825]: 2024-06-25T18:32:07.547805Z INFO Daemon Daemon User core already exists, skip useradd Jun 25 18:32:07.553822 waagent[1825]: 2024-06-25T18:32:07.553748Z INFO Daemon Daemon Configure sudoer Jun 25 18:32:07.558680 waagent[1825]: 2024-06-25T18:32:07.558606Z INFO Daemon Daemon Configure sshd Jun 25 18:32:07.563458 waagent[1825]: 2024-06-25T18:32:07.563387Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 25 18:32:07.564301 systemd-networkd[1472]: eth0: DHCPv6 lease lost Jun 25 18:32:07.576630 waagent[1825]: 2024-06-25T18:32:07.576545Z INFO Daemon Daemon Deploy ssh public key. Jun 25 18:32:07.596262 systemd-networkd[1472]: eth0: DHCPv4 address 10.200.20.27/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jun 25 18:32:08.777214 waagent[1825]: 2024-06-25T18:32:08.776790Z INFO Daemon Daemon Provisioning complete Jun 25 18:32:08.797662 waagent[1825]: 2024-06-25T18:32:08.797602Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 25 18:32:08.803838 waagent[1825]: 2024-06-25T18:32:08.803764Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 25 18:32:08.813484 waagent[1825]: 2024-06-25T18:32:08.813413Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 25 18:32:08.956472 waagent[1888]: 2024-06-25T18:32:08.956293Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 25 18:32:08.956472 waagent[1888]: 2024-06-25T18:32:08.956465Z INFO ExtHandler ExtHandler OS: flatcar 4012.0.0 Jun 25 18:32:08.956862 waagent[1888]: 2024-06-25T18:32:08.956520Z INFO ExtHandler ExtHandler Python: 3.11.9 Jun 25 18:32:09.135409 waagent[1888]: 2024-06-25T18:32:09.135252Z INFO ExtHandler ExtHandler Distro: flatcar-4012.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 25 18:32:09.135581 waagent[1888]: 2024-06-25T18:32:09.135537Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:32:09.135651 waagent[1888]: 2024-06-25T18:32:09.135619Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:32:09.145267 waagent[1888]: 2024-06-25T18:32:09.145142Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 25 18:32:09.153394 waagent[1888]: 2024-06-25T18:32:09.153340Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jun 25 18:32:09.153968 waagent[1888]: 2024-06-25T18:32:09.153918Z INFO ExtHandler Jun 25 18:32:09.154048 waagent[1888]: 2024-06-25T18:32:09.154015Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d5f920b9-6e99-4514-a1d7-ac5300c8bf0d eTag: 5365616198547042325 source: Fabric] Jun 25 18:32:09.154385 waagent[1888]: 2024-06-25T18:32:09.154339Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 25 18:32:09.154995 waagent[1888]: 2024-06-25T18:32:09.154945Z INFO ExtHandler Jun 25 18:32:09.155060 waagent[1888]: 2024-06-25T18:32:09.155031Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 25 18:32:09.159310 waagent[1888]: 2024-06-25T18:32:09.159266Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 25 18:32:09.244797 waagent[1888]: 2024-06-25T18:32:09.244689Z INFO ExtHandler Downloaded certificate {'thumbprint': '97F2736B77573F22F33245E3DD0F58AB223391FB', 'hasPrivateKey': False} Jun 25 18:32:09.245275 waagent[1888]: 2024-06-25T18:32:09.245227Z INFO ExtHandler Downloaded certificate {'thumbprint': '5E4D2053049AC4C54FC04533B7F6D68A42B67364', 'hasPrivateKey': True} Jun 25 18:32:09.245715 waagent[1888]: 2024-06-25T18:32:09.245668Z INFO ExtHandler Fetch goal state completed Jun 25 18:32:09.263861 waagent[1888]: 2024-06-25T18:32:09.263789Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1888 Jun 25 18:32:09.264033 waagent[1888]: 2024-06-25T18:32:09.263993Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 25 18:32:09.265786 waagent[1888]: 2024-06-25T18:32:09.265728Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4012.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 25 18:32:09.266178 waagent[1888]: 2024-06-25T18:32:09.266136Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 25 18:32:09.302664 waagent[1888]: 2024-06-25T18:32:09.302615Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 25 18:32:09.302878 waagent[1888]: 2024-06-25T18:32:09.302836Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 25 18:32:09.310114 waagent[1888]: 2024-06-25T18:32:09.310054Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 25 18:32:09.317530 systemd[1]: Reloading requested from client PID 1903 ('systemctl') (unit waagent.service)... Jun 25 18:32:09.317548 systemd[1]: Reloading... Jun 25 18:32:09.398382 zram_generator::config[1940]: No configuration found. Jun 25 18:32:09.499734 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:32:09.580696 systemd[1]: Reloading finished in 262 ms. Jun 25 18:32:09.604000 waagent[1888]: 2024-06-25T18:32:09.603608Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 25 18:32:09.610317 systemd[1]: Reloading requested from client PID 1988 ('systemctl') (unit waagent.service)... Jun 25 18:32:09.610490 systemd[1]: Reloading... Jun 25 18:32:09.689354 zram_generator::config[2022]: No configuration found. Jun 25 18:32:09.795955 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:32:09.876083 systemd[1]: Reloading finished in 265 ms. 
Jun 25 18:32:09.899243 waagent[1888]: 2024-06-25T18:32:09.898534Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 25 18:32:09.899243 waagent[1888]: 2024-06-25T18:32:09.898715Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 25 18:32:10.266911 waagent[1888]: 2024-06-25T18:32:10.266777Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jun 25 18:32:10.267879 waagent[1888]: 2024-06-25T18:32:10.267820Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 25 18:32:10.268827 waagent[1888]: 2024-06-25T18:32:10.268770Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 25 18:32:10.269212 waagent[1888]: 2024-06-25T18:32:10.268996Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:32:10.269417 waagent[1888]: 2024-06-25T18:32:10.269360Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 25 18:32:10.269558 waagent[1888]: 2024-06-25T18:32:10.269476Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:32:10.269934 waagent[1888]: 2024-06-25T18:32:10.269884Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 25 18:32:10.270123 waagent[1888]: 2024-06-25T18:32:10.270084Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 25 18:32:10.270230 waagent[1888]: 2024-06-25T18:32:10.270160Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 25 18:32:10.270381 waagent[1888]: 2024-06-25T18:32:10.270338Z INFO EnvHandler ExtHandler Configure routes Jun 25 18:32:10.270451 waagent[1888]: 2024-06-25T18:32:10.270418Z INFO EnvHandler ExtHandler Gateway:None Jun 25 18:32:10.270527 waagent[1888]: 2024-06-25T18:32:10.270478Z INFO EnvHandler ExtHandler Routes:None Jun 25 18:32:10.270703 waagent[1888]: 2024-06-25T18:32:10.270574Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 25 18:32:10.271052 waagent[1888]: 2024-06-25T18:32:10.270836Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 25 18:32:10.271743 waagent[1888]: 2024-06-25T18:32:10.271678Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 25 18:32:10.271986 waagent[1888]: 2024-06-25T18:32:10.271934Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 25 18:32:10.271986 waagent[1888]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 25 18:32:10.271986 waagent[1888]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jun 25 18:32:10.271986 waagent[1888]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 25 18:32:10.271986 waagent[1888]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:32:10.271986 waagent[1888]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:32:10.271986 waagent[1888]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 25 18:32:10.272475 waagent[1888]: 2024-06-25T18:32:10.272406Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
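The MonitorHandler dump above prints `/proc/net/route` verbatim, so the Destination/Gateway/Mask columns are little-endian hex. A short decode of the values shown, just to make the table readable (the helper itself is illustrative):

```python
import socket
import struct

def decode(hex_addr: str) -> str:
    """Turn a little-endian hex address from /proc/net/route into dotted quad."""
    return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

print(decode("0114C80A"))  # 10.200.20.1     - default gateway
print(decode("0014C80A"))  # 10.200.20.0     - the local /24
print(decode("10813FA8"))  # 168.63.129.16   - Azure wire server host route
print(decode("FEA9FEA9"))  # 169.254.169.254 - IMDS host route
```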
Jun 25 18:32:10.272603 waagent[1888]: 2024-06-25T18:32:10.272512Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 25 18:32:10.279055 waagent[1888]: 2024-06-25T18:32:10.278972Z INFO ExtHandler ExtHandler Jun 25 18:32:10.279160 waagent[1888]: 2024-06-25T18:32:10.279114Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 46d0116c-613f-4bbb-862c-e3d9cd579f16 correlation bc8af67a-b048-4145-923f-909a13f501b8 created: 2024-06-25T18:30:45.443687Z] Jun 25 18:32:10.280204 waagent[1888]: 2024-06-25T18:32:10.280091Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 25 18:32:10.282572 waagent[1888]: 2024-06-25T18:32:10.282504Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Jun 25 18:32:10.326640 waagent[1888]: 2024-06-25T18:32:10.326470Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 752B1294-EEBC-4A3D-AB06-32BE452327D3;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 25 18:32:10.378940 waagent[1888]: 2024-06-25T18:32:10.378469Z INFO MonitorHandler ExtHandler Network interfaces: Jun 25 18:32:10.378940 waagent[1888]: Executing ['ip', '-a', '-o', 'link']: Jun 25 18:32:10.378940 waagent[1888]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 25 18:32:10.378940 waagent[1888]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bc:be:a0 brd ff:ff:ff:ff:ff:ff Jun 25 18:32:10.378940 waagent[1888]: 3: enP35988s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bc:be:a0 brd ff:ff:ff:ff:ff:ff\ altname enP35988p0s2 Jun 25 18:32:10.378940 waagent[1888]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 25 18:32:10.378940 waagent[1888]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 25 18:32:10.378940 waagent[1888]: 2: eth0 inet 10.200.20.27/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 25 18:32:10.378940 waagent[1888]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 25 18:32:10.378940 waagent[1888]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 25 18:32:10.378940 waagent[1888]: 2: eth0 inet6 fe80::222:48ff:febc:bea0/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 18:32:10.378940 waagent[1888]: 3: enP35988s1 inet6 fe80::222:48ff:febc:bea0/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 25 18:32:10.412557 waagent[1888]: 2024-06-25T18:32:10.412467Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jun 25 18:32:10.412557 waagent[1888]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:32:10.412557 waagent[1888]: pkts bytes target prot opt in out source destination Jun 25 18:32:10.412557 waagent[1888]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:32:10.412557 waagent[1888]: pkts bytes target prot opt in out source destination Jun 25 18:32:10.412557 waagent[1888]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:32:10.412557 waagent[1888]: pkts bytes target prot opt in out source destination Jun 25 18:32:10.412557 waagent[1888]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 18:32:10.412557 waagent[1888]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 18:32:10.412557 waagent[1888]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 18:32:10.417029 waagent[1888]: 2024-06-25T18:32:10.416956Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 25 18:32:10.417029 waagent[1888]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:32:10.417029 waagent[1888]: pkts bytes target prot opt in out source destination Jun 25 18:32:10.417029 waagent[1888]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:32:10.417029 waagent[1888]: pkts bytes target prot opt in out source destination Jun 25 18:32:10.417029 waagent[1888]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 25 18:32:10.417029 waagent[1888]: pkts bytes target prot opt in out source destination Jun 25 18:32:10.417029 waagent[1888]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 25 18:32:10.417029 waagent[1888]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 25 18:32:10.417029 waagent[1888]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 25 18:32:10.417966 waagent[1888]: 2024-06-25T18:32:10.417830Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 25 18:32:15.228736 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:32:15.239382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:32:15.340745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:32:15.348605 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:32:15.393071 kubelet[2114]: E0625 18:32:15.392991 2114 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:32:15.397459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:32:15.397741 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:32:25.503931 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:32:25.511407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:32:25.602875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
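The "Current Firewall rules" dump above (OUTPUT chain) permits DNS and root-owned TCP traffic to the wire server and drops new or invalid connections from everyone else. Roughly equivalent iptables invocations, reconstructed from the printed rule text rather than from waagent's own code, would look like this sketch:

```python
import subprocess

WIRESERVER = "168.63.129.16"

# Reconstruction of the three OUTPUT rules printed above; order matters
# (the ACCEPTs come before the catch-all DROP for new/invalid connections).
rules = [
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in rules:
    subprocess.run(["iptables", "-w", "-A", "OUTPUT", *rule], check=True)
```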
Jun 25 18:32:25.606773 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:32:25.670220 kubelet[2130]: E0625 18:32:25.670138 2130 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:32:25.673270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:32:25.673518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:32:27.177534 chronyd[1674]: Selected source PHC0 Jun 25 18:32:35.753956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 18:32:35.762340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:32:35.841937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:32:35.846319 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:32:35.887292 kubelet[2146]: E0625 18:32:35.887213 2146 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:32:35.890221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:32:35.890372 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:32:46.003984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 25 18:32:46.012455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:32:46.109321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:32:46.109596 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:32:46.148960 kubelet[2165]: E0625 18:32:46.148880 2165 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:32:46.151526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:32:46.151674 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:32:46.295894 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jun 25 18:32:49.180779 update_engine[1688]: I0625 18:32:49.180204 1688 update_attempter.cc:509] Updating boot flags... Jun 25 18:32:49.226183 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2186) Jun 25 18:32:49.312787 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2190) Jun 25 18:32:52.342447 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
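kubelet keeps exiting (restart counters 1 through 4 above) because `/var/lib/kubelet/config.yaml` does not exist yet; on a node like this it is normally written later by `kubeadm init`/`join`, which also populates the `KUBELET_KUBEADM_ARGS` environment the unit references. A trivial pre-flight check along those lines, purely illustrative:

```python
import sys
from pathlib import Path

# Path taken from the kubelet error above; the "written by kubeadm" expectation
# is an assumption about how this node gets bootstrapped.
CONFIG = Path("/var/lib/kubelet/config.yaml")

if not CONFIG.is_file():
    sys.exit(f"{CONFIG} missing - kubelet will keep crash-looping until it is written")

print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes); kubelet can start")
```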
Jun 25 18:32:52.343665 systemd[1]: Started sshd@0-10.200.20.27:22-10.200.16.10:54706.service - OpenSSH per-connection server daemon (10.200.16.10:54706). Jun 25 18:32:52.919033 sshd[2241]: Accepted publickey for core from 10.200.16.10 port 54706 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:32:52.920356 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:32:52.924077 systemd-logind[1685]: New session 3 of user core. Jun 25 18:32:52.931416 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:32:53.343678 systemd[1]: Started sshd@1-10.200.20.27:22-10.200.16.10:54708.service - OpenSSH per-connection server daemon (10.200.16.10:54708). Jun 25 18:32:53.771723 sshd[2246]: Accepted publickey for core from 10.200.16.10 port 54708 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:32:53.772987 sshd[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:32:53.777051 systemd-logind[1685]: New session 4 of user core. Jun 25 18:32:53.784322 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:32:54.085814 sshd[2246]: pam_unix(sshd:session): session closed for user core Jun 25 18:32:54.090264 systemd[1]: sshd@1-10.200.20.27:22-10.200.16.10:54708.service: Deactivated successfully. Jun 25 18:32:54.091698 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:32:54.092667 systemd-logind[1685]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:32:54.093832 systemd-logind[1685]: Removed session 4. Jun 25 18:32:54.164557 systemd[1]: Started sshd@2-10.200.20.27:22-10.200.16.10:54716.service - OpenSSH per-connection server daemon (10.200.16.10:54716). Jun 25 18:32:54.598402 sshd[2253]: Accepted publickey for core from 10.200.16.10 port 54716 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:32:54.599710 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:32:54.604304 systemd-logind[1685]: New session 5 of user core. Jun 25 18:32:54.613337 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:32:54.927529 sshd[2253]: pam_unix(sshd:session): session closed for user core Jun 25 18:32:54.931050 systemd[1]: sshd@2-10.200.20.27:22-10.200.16.10:54716.service: Deactivated successfully. Jun 25 18:32:54.932872 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:32:54.934002 systemd-logind[1685]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:32:54.934868 systemd-logind[1685]: Removed session 5. Jun 25 18:32:55.010410 systemd[1]: Started sshd@3-10.200.20.27:22-10.200.16.10:41512.service - OpenSSH per-connection server daemon (10.200.16.10:41512). Jun 25 18:32:55.438393 sshd[2260]: Accepted publickey for core from 10.200.16.10 port 41512 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:32:55.439679 sshd[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:32:55.443506 systemd-logind[1685]: New session 6 of user core. Jun 25 18:32:55.451320 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:32:55.752923 sshd[2260]: pam_unix(sshd:session): session closed for user core Jun 25 18:32:55.756343 systemd[1]: sshd@3-10.200.20.27:22-10.200.16.10:41512.service: Deactivated successfully. Jun 25 18:32:55.757819 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:32:55.758418 systemd-logind[1685]: Session 6 logged out. Waiting for processes to exit. 
Jun 25 18:32:55.759353 systemd-logind[1685]: Removed session 6. Jun 25 18:32:55.829390 systemd[1]: Started sshd@4-10.200.20.27:22-10.200.16.10:41520.service - OpenSSH per-connection server daemon (10.200.16.10:41520). Jun 25 18:32:56.179689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 25 18:32:56.192363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:32:56.255862 sshd[2267]: Accepted publickey for core from 10.200.16.10 port 41520 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:32:56.257545 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:32:56.263237 systemd-logind[1685]: New session 7 of user core. Jun 25 18:32:56.274909 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:32:56.284208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:32:56.288770 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:32:56.329821 kubelet[2278]: E0625 18:32:56.329699 2278 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:32:56.332041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:32:56.332191 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:32:56.858090 sudo[2286]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:32:56.858352 sudo[2286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:32:56.895817 sudo[2286]: pam_unix(sudo:session): session closed for user root Jun 25 18:32:56.965470 sshd[2267]: pam_unix(sshd:session): session closed for user core Jun 25 18:32:56.968557 systemd[1]: sshd@4-10.200.20.27:22-10.200.16.10:41520.service: Deactivated successfully. Jun 25 18:32:56.970373 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:32:56.971757 systemd-logind[1685]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:32:56.973167 systemd-logind[1685]: Removed session 7. Jun 25 18:32:57.052762 systemd[1]: Started sshd@5-10.200.20.27:22-10.200.16.10:41532.service - OpenSSH per-connection server daemon (10.200.16.10:41532). Jun 25 18:32:57.515983 sshd[2291]: Accepted publickey for core from 10.200.16.10 port 41532 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:32:57.517347 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:32:57.521065 systemd-logind[1685]: New session 8 of user core. Jun 25 18:32:57.528336 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 25 18:32:57.778708 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:32:57.779366 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:32:57.782398 sudo[2295]: pam_unix(sudo:session): session closed for user root Jun 25 18:32:57.786818 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:32:57.787049 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:32:57.799469 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:32:57.800792 auditctl[2298]: No rules Jun 25 18:32:57.801087 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:32:57.801270 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:32:57.811617 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:32:57.830699 augenrules[2316]: No rules Jun 25 18:32:57.833214 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:32:57.834636 sudo[2294]: pam_unix(sudo:session): session closed for user root Jun 25 18:32:57.914419 sshd[2291]: pam_unix(sshd:session): session closed for user core Jun 25 18:32:57.916942 systemd[1]: sshd@5-10.200.20.27:22-10.200.16.10:41532.service: Deactivated successfully. Jun 25 18:32:57.918541 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:32:57.919964 systemd-logind[1685]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:32:57.920832 systemd-logind[1685]: Removed session 8. Jun 25 18:32:57.997373 systemd[1]: Started sshd@6-10.200.20.27:22-10.200.16.10:41542.service - OpenSSH per-connection server daemon (10.200.16.10:41542). Jun 25 18:32:58.459364 sshd[2324]: Accepted publickey for core from 10.200.16.10 port 41542 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:32:58.460628 sshd[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:32:58.465166 systemd-logind[1685]: New session 9 of user core. Jun 25 18:32:58.471409 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:32:58.721475 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:32:58.721698 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:32:59.270394 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:32:59.271815 (dockerd)[2336]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:33:00.333113 dockerd[2336]: time="2024-06-25T18:33:00.332881025Z" level=info msg="Starting up" Jun 25 18:33:00.366272 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1139390021-merged.mount: Deactivated successfully. Jun 25 18:33:00.553343 dockerd[2336]: time="2024-06-25T18:33:00.553115029Z" level=info msg="Loading containers: start." Jun 25 18:33:00.766219 kernel: Initializing XFRM netlink socket Jun 25 18:33:00.906500 systemd-networkd[1472]: docker0: Link UP Jun 25 18:33:00.933539 dockerd[2336]: time="2024-06-25T18:33:00.933493756Z" level=info msg="Loading containers: done." 
Jun 25 18:33:01.257768 dockerd[2336]: time="2024-06-25T18:33:01.257694034Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:33:01.257961 dockerd[2336]: time="2024-06-25T18:33:01.257936874Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:33:01.258092 dockerd[2336]: time="2024-06-25T18:33:01.258066433Z" level=info msg="Daemon has completed initialization" Jun 25 18:33:01.308955 dockerd[2336]: time="2024-06-25T18:33:01.308461054Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:33:01.309271 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:33:02.944893 containerd[1711]: time="2024-06-25T18:33:02.944798734Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 18:33:04.046781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1232950632.mount: Deactivated successfully. Jun 25 18:33:05.510218 containerd[1711]: time="2024-06-25T18:33:05.509449177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:05.512699 containerd[1711]: time="2024-06-25T18:33:05.512465971Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671538" Jun 25 18:33:05.517854 containerd[1711]: time="2024-06-25T18:33:05.517797280Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:05.523901 containerd[1711]: time="2024-06-25T18:33:05.523856948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:05.524877 containerd[1711]: time="2024-06-25T18:33:05.524848586Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 2.580010572s" Jun 25 18:33:05.525118 containerd[1711]: time="2024-06-25T18:33:05.524970546Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jun 25 18:33:05.545642 containerd[1711]: time="2024-06-25T18:33:05.545561905Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 18:33:06.503869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jun 25 18:33:06.513443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:33:06.599372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
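A few entries up, dockerd finished initialization and reported "API listen on /run/docker.sock", after which systemd marked docker.service as started. One way to confirm the daemon is actually answering on that socket is the Engine API's /_ping endpoint, which returns OK when the daemon is healthy; a minimal probe using only the standard library (socket path copied from the log line) might look like:

import socket

def docker_ping(sock_path: str = "/run/docker.sock") -> str:
    # Plain HTTP/1.0 over the Unix socket; GET /_ping answers "OK" when the
    # daemon is up.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    reply = b"".join(chunks).decode("utf-8", "replace")
    return reply.splitlines()[-1]  # response body, expected to be "OK"

if __name__ == "__main__":
    print(docker_ping())

The docker CLI talks to the same socket; the raw request here only makes the API endpoint named in the log concrete.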
Jun 25 18:33:06.599947 (kubelet)[2531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:33:06.638923 kubelet[2531]: E0625 18:33:06.638816 2531 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:33:06.641220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:33:06.641355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:33:07.675214 containerd[1711]: time="2024-06-25T18:33:07.674781392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:07.677303 containerd[1711]: time="2024-06-25T18:33:07.677270427Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893118" Jun 25 18:33:07.681161 containerd[1711]: time="2024-06-25T18:33:07.681095059Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:07.686830 containerd[1711]: time="2024-06-25T18:33:07.686769686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:07.687946 containerd[1711]: time="2024-06-25T18:33:07.687799044Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 2.142200739s" Jun 25 18:33:07.687946 containerd[1711]: time="2024-06-25T18:33:07.687833284Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jun 25 18:33:07.707865 containerd[1711]: time="2024-06-25T18:33:07.707774440Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 18:33:09.645323 containerd[1711]: time="2024-06-25T18:33:09.645267656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:09.650704 containerd[1711]: time="2024-06-25T18:33:09.650661684Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358438" Jun 25 18:33:09.657507 containerd[1711]: time="2024-06-25T18:33:09.657437109Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:09.663144 containerd[1711]: time="2024-06-25T18:33:09.663046257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 25 18:33:09.664240 containerd[1711]: time="2024-06-25T18:33:09.664095414Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 1.956286334s" Jun 25 18:33:09.664240 containerd[1711]: time="2024-06-25T18:33:09.664130134Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jun 25 18:33:09.683598 containerd[1711]: time="2024-06-25T18:33:09.683541252Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 18:33:10.838314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359313469.mount: Deactivated successfully. Jun 25 18:33:11.193099 containerd[1711]: time="2024-06-25T18:33:11.192960330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:11.195796 containerd[1711]: time="2024-06-25T18:33:11.195643964Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772461" Jun 25 18:33:11.200597 containerd[1711]: time="2024-06-25T18:33:11.200536193Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:11.211106 containerd[1711]: time="2024-06-25T18:33:11.211057810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:11.211753 containerd[1711]: time="2024-06-25T18:33:11.211584369Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 1.527990757s" Jun 25 18:33:11.211753 containerd[1711]: time="2024-06-25T18:33:11.211621129Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jun 25 18:33:11.231380 containerd[1711]: time="2024-06-25T18:33:11.231317045Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:33:11.887876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3436778627.mount: Deactivated successfully. 
Jun 25 18:33:11.923409 containerd[1711]: time="2024-06-25T18:33:11.923349482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:11.934887 containerd[1711]: time="2024-06-25T18:33:11.934852777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jun 25 18:33:11.942691 containerd[1711]: time="2024-06-25T18:33:11.942644080Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:11.948665 containerd[1711]: time="2024-06-25T18:33:11.948612227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:11.949393 containerd[1711]: time="2024-06-25T18:33:11.949278025Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 717.71294ms" Jun 25 18:33:11.949393 containerd[1711]: time="2024-06-25T18:33:11.949310185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 18:33:11.968273 containerd[1711]: time="2024-06-25T18:33:11.968035704Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 18:33:12.704092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2657885691.mount: Deactivated successfully. 
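The image pulls in this stretch (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy and pause above; etcd and coredns just below) all log completion in the same containerd message shape: Pulled image "..." with image id "sha256:...", ..., size "..." in <duration>. A small parser over that exact pattern is enough to summarize which image took how long; it assumes each journal entry sits on a single line and only handles the ms/s durations that actually appear in these entries:

import re
import sys

# Matches the escaped quoting used in the journal lines above, e.g.
#   Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:...\",
#   ..., size \"268051\" in 717.71294ms
PULLED_RE = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\" with image id \\"(?P<image_id>[^"\\]+)\\"'
    r'.*?size \\"(?P<size>\d+)\\" in (?P<duration>[\d.]+(?:ms|s))'
)

def summarize(journal_text: str):
    # Yield (image, reported size in bytes, pull duration) per completed pull.
    for m in PULLED_RE.finditer(journal_text):
        yield m.group("image"), int(m.group("size")), m.group("duration")

if __name__ == "__main__":
    for image, size, duration in summarize(sys.stdin.read()):
        print(f"{image}: {size} bytes reported, pulled in {duration}")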
Jun 25 18:33:15.655220 containerd[1711]: time="2024-06-25T18:33:15.654944255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:15.658295 containerd[1711]: time="2024-06-25T18:33:15.658262334Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jun 25 18:33:15.664570 containerd[1711]: time="2024-06-25T18:33:15.664507972Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:15.688056 containerd[1711]: time="2024-06-25T18:33:15.688007563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:15.689444 containerd[1711]: time="2024-06-25T18:33:15.689321802Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.721251018s" Jun 25 18:33:15.689444 containerd[1711]: time="2024-06-25T18:33:15.689356122Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jun 25 18:33:15.709635 containerd[1711]: time="2024-06-25T18:33:15.709362034Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 18:33:16.582979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869321776.mount: Deactivated successfully. Jun 25 18:33:16.753796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jun 25 18:33:16.764679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:33:16.863819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:33:16.872411 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:33:16.916805 kubelet[2646]: E0625 18:33:16.916744 2646 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:33:16.919422 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:33:16.919680 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 18:33:17.327205 containerd[1711]: time="2024-06-25T18:33:17.327093378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:17.330783 containerd[1711]: time="2024-06-25T18:33:17.330747937Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462" Jun 25 18:33:17.336804 containerd[1711]: time="2024-06-25T18:33:17.336773415Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:17.342117 containerd[1711]: time="2024-06-25T18:33:17.342072053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:17.343074 containerd[1711]: time="2024-06-25T18:33:17.342696692Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.633297618s" Jun 25 18:33:17.343074 containerd[1711]: time="2024-06-25T18:33:17.342730172Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jun 25 18:33:22.468569 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:33:22.482792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:33:22.503498 systemd[1]: Reloading requested from client PID 2714 ('systemctl') (unit session-9.scope)... Jun 25 18:33:22.503518 systemd[1]: Reloading... Jun 25 18:33:22.599205 zram_generator::config[2751]: No configuration found. Jun 25 18:33:22.696891 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:33:22.773741 systemd[1]: Reloading finished in 269 ms. Jun 25 18:33:22.820845 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:33:22.824104 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:33:22.824325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:33:22.828466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:33:22.984524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:33:22.989687 (kubelet)[2820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:33:23.049654 kubelet[2820]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:33:23.049654 kubelet[2820]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jun 25 18:33:23.049654 kubelet[2820]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:33:23.049654 kubelet[2820]: I0625 18:33:23.049624 2820 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:33:23.732194 kubelet[2820]: I0625 18:33:23.732109 2820 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:33:23.732194 kubelet[2820]: I0625 18:33:23.732135 2820 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:33:23.732370 kubelet[2820]: I0625 18:33:23.732348 2820 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:33:23.745735 kubelet[2820]: I0625 18:33:23.745706 2820 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:33:23.749771 kubelet[2820]: E0625 18:33:23.749741 2820 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:23.755832 kubelet[2820]: W0625 18:33:23.755742 2820 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 18:33:23.757073 kubelet[2820]: I0625 18:33:23.757034 2820 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:33:23.757268 kubelet[2820]: I0625 18:33:23.757251 2820 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:33:23.757426 kubelet[2820]: I0625 18:33:23.757407 2820 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:33:23.757514 kubelet[2820]: I0625 18:33:23.757446 2820 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 
18:33:23.757514 kubelet[2820]: I0625 18:33:23.757455 2820 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:33:23.757559 kubelet[2820]: I0625 18:33:23.757550 2820 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:33:23.759163 kubelet[2820]: I0625 18:33:23.759143 2820 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:33:23.760646 kubelet[2820]: I0625 18:33:23.759168 2820 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:33:23.760646 kubelet[2820]: I0625 18:33:23.759503 2820 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:33:23.760646 kubelet[2820]: I0625 18:33:23.759517 2820 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:33:23.760646 kubelet[2820]: W0625 18:33:23.759572 2820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-71b05979e1&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:23.760646 kubelet[2820]: E0625 18:33:23.759624 2820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-71b05979e1&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:23.760646 kubelet[2820]: W0625 18:33:23.760544 2820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:23.760646 kubelet[2820]: E0625 18:33:23.760576 2820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:23.761345 kubelet[2820]: I0625 18:33:23.761073 2820 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:33:23.762433 kubelet[2820]: W0625 18:33:23.762420 2820 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 25 18:33:23.762932 kubelet[2820]: I0625 18:33:23.762917 2820 server.go:1232] "Started kubelet" Jun 25 18:33:23.763704 kubelet[2820]: I0625 18:33:23.763687 2820 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:33:23.764476 kubelet[2820]: I0625 18:33:23.764461 2820 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:33:23.765497 kubelet[2820]: I0625 18:33:23.764497 2820 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:33:23.769750 kubelet[2820]: E0625 18:33:23.769661 2820 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4012.0.0-a-71b05979e1.17dc52fcde439f68", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012.0.0-a-71b05979e1", UID:"ci-4012.0.0-a-71b05979e1", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012.0.0-a-71b05979e1"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 33, 23, 762896744, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 33, 23, 762896744, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012.0.0-a-71b05979e1"}': 'Post "https://10.200.20.27:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.27:6443: connect: connection refused'(may retry after sleeping) Jun 25 18:33:23.769887 kubelet[2820]: E0625 18:33:23.769829 2820 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:33:23.769887 kubelet[2820]: E0625 18:33:23.769845 2820 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:33:23.770751 kubelet[2820]: I0625 18:33:23.770297 2820 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:33:23.770751 kubelet[2820]: I0625 18:33:23.770490 2820 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:33:23.770751 kubelet[2820]: I0625 18:33:23.770588 2820 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:33:23.770751 kubelet[2820]: I0625 18:33:23.770651 2820 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:33:23.770751 kubelet[2820]: I0625 18:33:23.770686 2820 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:33:23.770957 kubelet[2820]: W0625 18:33:23.770909 2820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:23.770957 kubelet[2820]: E0625 18:33:23.770956 2820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:23.771409 kubelet[2820]: E0625 18:33:23.771395 2820 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-71b05979e1?timeout=10s\": dial tcp 10.200.20.27:6443: connect: connection refused" interval="200ms" Jun 25 18:33:23.801542 kubelet[2820]: I0625 18:33:23.801511 2820 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:33:23.802654 kubelet[2820]: I0625 18:33:23.802585 2820 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:33:23.802654 kubelet[2820]: I0625 18:33:23.802617 2820 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:33:23.802654 kubelet[2820]: I0625 18:33:23.802634 2820 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:33:23.802798 kubelet[2820]: E0625 18:33:23.802773 2820 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:33:23.803671 kubelet[2820]: W0625 18:33:23.803535 2820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:23.803671 kubelet[2820]: E0625 18:33:23.803574 2820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:23.903761 kubelet[2820]: E0625 18:33:23.903731 2820 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:33:23.911550 kubelet[2820]: I0625 18:33:23.911520 2820 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:23.911856 kubelet[2820]: E0625 18:33:23.911828 2820 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.27:6443/api/v1/nodes\": dial tcp 10.200.20.27:6443: connect: connection refused" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:23.912373 kubelet[2820]: I0625 18:33:23.912350 2820 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:33:23.912373 kubelet[2820]: I0625 18:33:23.912371 2820 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:33:23.912472 kubelet[2820]: I0625 18:33:23.912390 2820 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:33:23.917754 kubelet[2820]: I0625 18:33:23.917729 2820 policy_none.go:49] "None policy: Start" Jun 25 18:33:23.918347 kubelet[2820]: I0625 18:33:23.918321 2820 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:33:23.918418 kubelet[2820]: I0625 18:33:23.918353 2820 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:33:23.928716 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 18:33:23.938626 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 18:33:23.943328 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 25 18:33:23.950988 kubelet[2820]: I0625 18:33:23.950803 2820 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:33:23.951074 kubelet[2820]: I0625 18:33:23.951061 2820 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:33:23.952405 kubelet[2820]: E0625 18:33:23.952275 2820 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:23.972471 kubelet[2820]: E0625 18:33:23.972447 2820 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-71b05979e1?timeout=10s\": dial tcp 10.200.20.27:6443: connect: connection refused" interval="400ms" Jun 25 18:33:24.104930 kubelet[2820]: I0625 18:33:24.104888 2820 topology_manager.go:215] "Topology Admit Handler" podUID="b7580d4be9d5ba8b785defc1121c4ddb" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.106604 kubelet[2820]: I0625 18:33:24.106575 2820 topology_manager.go:215] "Topology Admit Handler" podUID="a78f59d8085d8bb586ef5bc9c9f4427d" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.108242 kubelet[2820]: I0625 18:33:24.108047 2820 topology_manager.go:215] "Topology Admit Handler" podUID="9f24703a8cae2164979154f65b2e74a3" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.113975 kubelet[2820]: I0625 18:33:24.113956 2820 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.114782 kubelet[2820]: E0625 18:33:24.114598 2820 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.27:6443/api/v1/nodes\": dial tcp 10.200.20.27:6443: connect: connection refused" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.115465 systemd[1]: Created slice kubepods-burstable-podb7580d4be9d5ba8b785defc1121c4ddb.slice - libcontainer container kubepods-burstable-podb7580d4be9d5ba8b785defc1121c4ddb.slice. Jun 25 18:33:24.125817 systemd[1]: Created slice kubepods-burstable-poda78f59d8085d8bb586ef5bc9c9f4427d.slice - libcontainer container kubepods-burstable-poda78f59d8085d8bb586ef5bc9c9f4427d.slice. Jun 25 18:33:24.130642 systemd[1]: Created slice kubepods-burstable-pod9f24703a8cae2164979154f65b2e74a3.slice - libcontainer container kubepods-burstable-pod9f24703a8cae2164979154f65b2e74a3.slice. 
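The three "Topology Admit Handler" entries and the kubepods-burstable-pod<uid>.slice units above are the control-plane static pods: the kubelet reads their manifests from /etc/kubernetes/manifests (the static pod path it logged when it started) and runs them even while the API server is still unreachable. As a sketch of the mechanism rather than of this node's real manifests, dropping one more file into that directory is all it takes to get another static pod; the pod below is a made-up placeholder:

from pathlib import Path

MANIFEST_DIR = Path("/etc/kubernetes/manifests")

# Hypothetical example; the real manifests on this node (kube-apiserver,
# kube-controller-manager, kube-scheduler) are generated by kubeadm.
EXAMPLE_STATIC_POD = """\
apiVersion: v1
kind: Pod
metadata:
  name: hello-static
  namespace: kube-system
spec:
  containers:
  - name: hello
    image: registry.k8s.io/pause:3.9
"""

def add_static_pod(filename: str = "hello-static.yaml") -> Path:
    # Write the placeholder manifest into the kubelet's static pod directory.
    MANIFEST_DIR.mkdir(parents=True, exist_ok=True)
    target = MANIFEST_DIR / filename
    target.write_text(EXAMPLE_STATIC_POD)
    return target

if __name__ == "__main__":
    print(f"wrote {add_static_pod()}")

The kubelet picks the file up on its own; once the API server is reachable it typically also registers a read-only mirror pod named <pod-name>-<node-name> so the static pod becomes visible through the API.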
Jun 25 18:33:24.172901 kubelet[2820]: I0625 18:33:24.172871 2820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a78f59d8085d8bb586ef5bc9c9f4427d-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-71b05979e1\" (UID: \"a78f59d8085d8bb586ef5bc9c9f4427d\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.173082 kubelet[2820]: I0625 18:33:24.173071 2820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a78f59d8085d8bb586ef5bc9c9f4427d-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-a-71b05979e1\" (UID: \"a78f59d8085d8bb586ef5bc9c9f4427d\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.173168 kubelet[2820]: I0625 18:33:24.173158 2820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a78f59d8085d8bb586ef5bc9c9f4427d-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-a-71b05979e1\" (UID: \"a78f59d8085d8bb586ef5bc9c9f4427d\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.173271 kubelet[2820]: I0625 18:33:24.173262 2820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f24703a8cae2164979154f65b2e74a3-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-a-71b05979e1\" (UID: \"9f24703a8cae2164979154f65b2e74a3\") " pod="kube-system/kube-scheduler-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.173343 kubelet[2820]: I0625 18:33:24.173334 2820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7580d4be9d5ba8b785defc1121c4ddb-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-a-71b05979e1\" (UID: \"b7580d4be9d5ba8b785defc1121c4ddb\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.173493 kubelet[2820]: I0625 18:33:24.173403 2820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a78f59d8085d8bb586ef5bc9c9f4427d-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-71b05979e1\" (UID: \"a78f59d8085d8bb586ef5bc9c9f4427d\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.173493 kubelet[2820]: I0625 18:33:24.173431 2820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a78f59d8085d8bb586ef5bc9c9f4427d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-a-71b05979e1\" (UID: \"a78f59d8085d8bb586ef5bc9c9f4427d\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.173493 kubelet[2820]: I0625 18:33:24.173450 2820 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7580d4be9d5ba8b785defc1121c4ddb-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-a-71b05979e1\" (UID: \"b7580d4be9d5ba8b785defc1121c4ddb\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.173493 kubelet[2820]: I0625 18:33:24.173470 2820 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7580d4be9d5ba8b785defc1121c4ddb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-a-71b05979e1\" (UID: \"b7580d4be9d5ba8b785defc1121c4ddb\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.373902 kubelet[2820]: E0625 18:33:24.373796 2820 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-71b05979e1?timeout=10s\": dial tcp 10.200.20.27:6443: connect: connection refused" interval="800ms" Jun 25 18:33:24.425021 containerd[1711]: time="2024-06-25T18:33:24.424762848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-a-71b05979e1,Uid:b7580d4be9d5ba8b785defc1121c4ddb,Namespace:kube-system,Attempt:0,}" Jun 25 18:33:24.429021 containerd[1711]: time="2024-06-25T18:33:24.428985880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-a-71b05979e1,Uid:a78f59d8085d8bb586ef5bc9c9f4427d,Namespace:kube-system,Attempt:0,}" Jun 25 18:33:24.433710 containerd[1711]: time="2024-06-25T18:33:24.433676231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-a-71b05979e1,Uid:9f24703a8cae2164979154f65b2e74a3,Namespace:kube-system,Attempt:0,}" Jun 25 18:33:24.519465 kubelet[2820]: I0625 18:33:24.519376 2820 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.519788 kubelet[2820]: E0625 18:33:24.519698 2820 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.27:6443/api/v1/nodes\": dial tcp 10.200.20.27:6443: connect: connection refused" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:24.691421 kubelet[2820]: W0625 18:33:24.691309 2820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:24.691421 kubelet[2820]: E0625 18:33:24.691350 2820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:24.730899 kubelet[2820]: W0625 18:33:24.730818 2820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:24.730899 kubelet[2820]: E0625 18:33:24.730874 2820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:24.860246 kubelet[2820]: W0625 18:33:24.860166 2820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-71b05979e1&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused 
Jun 25 18:33:24.860246 kubelet[2820]: E0625 18:33:24.860250 2820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-71b05979e1&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:25.403349 kubelet[2820]: W0625 18:33:25.162982 2820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:25.403349 kubelet[2820]: E0625 18:33:25.163037 2820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:25.403349 kubelet[2820]: E0625 18:33:25.174520 2820 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-71b05979e1?timeout=10s\": dial tcp 10.200.20.27:6443: connect: connection refused" interval="1.6s" Jun 25 18:33:25.403349 kubelet[2820]: I0625 18:33:25.321660 2820 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:25.403349 kubelet[2820]: E0625 18:33:25.321941 2820 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.27:6443/api/v1/nodes\": dial tcp 10.200.20.27:6443: connect: connection refused" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:25.822840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033916315.mount: Deactivated successfully. 
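Every "connection refused" above is the same chicken-and-egg moment: nothing is listening on 10.200.20.27:6443 until the kube-apiserver static pod (started further down) comes up, so node registration, lease creation, and the informer reflectors all fail and retry. Below is a rough readiness probe against that endpoint, with the address taken from these log lines; it skips certificate verification because the cluster CA is not assumed to be at hand, and it assumes the cluster allows unauthenticated access to /readyz, which is common but not guaranteed:

import ssl
import urllib.error
import urllib.request

def apiserver_ready(endpoint: str = "https://10.200.20.27:6443/readyz") -> bool:
    # The API server presents a cluster-CA-signed certificate, so this probe
    # disables verification; do not reuse this context for anything sensitive.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with urllib.request.urlopen(endpoint, context=ctx, timeout=3) as resp:
            return resp.status == 200 and resp.read().strip() == b"ok"
    except (urllib.error.URLError, OSError):
        # Covers the "connect: connection refused" state logged above.
        return False

if __name__ == "__main__":
    print("ready" if apiserver_ready() else "not ready")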
Jun 25 18:33:25.831783 kubelet[2820]: E0625 18:33:25.831744 2820 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:25.861700 containerd[1711]: time="2024-06-25T18:33:25.861643841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:33:25.864295 containerd[1711]: time="2024-06-25T18:33:25.864251596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jun 25 18:33:25.870665 containerd[1711]: time="2024-06-25T18:33:25.870622184Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:33:25.874879 containerd[1711]: time="2024-06-25T18:33:25.874144937Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:33:25.880854 containerd[1711]: time="2024-06-25T18:33:25.880825004Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:33:25.887586 containerd[1711]: time="2024-06-25T18:33:25.886788793Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:33:25.892190 containerd[1711]: time="2024-06-25T18:33:25.892112983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:33:25.897859 containerd[1711]: time="2024-06-25T18:33:25.897814572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:33:25.898777 containerd[1711]: time="2024-06-25T18:33:25.898561851Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.469496091s" Jun 25 18:33:25.900962 containerd[1711]: time="2024-06-25T18:33:25.900888446Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.467139135s" Jun 25 18:33:25.901539 containerd[1711]: time="2024-06-25T18:33:25.901512445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.476660917s" Jun 25 18:33:26.753321 containerd[1711]: time="2024-06-25T18:33:26.753085549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:33:26.755658 containerd[1711]: time="2024-06-25T18:33:26.755197985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:26.755658 containerd[1711]: time="2024-06-25T18:33:26.755219945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:33:26.755658 containerd[1711]: time="2024-06-25T18:33:26.755230145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:26.756804 containerd[1711]: time="2024-06-25T18:33:26.756652022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:33:26.757329 containerd[1711]: time="2024-06-25T18:33:26.756709902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:26.757329 containerd[1711]: time="2024-06-25T18:33:26.757276341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:33:26.757329 containerd[1711]: time="2024-06-25T18:33:26.757300781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:26.758805 containerd[1711]: time="2024-06-25T18:33:26.758675258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:33:26.759991 containerd[1711]: time="2024-06-25T18:33:26.758725938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:26.759991 containerd[1711]: time="2024-06-25T18:33:26.758988538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:33:26.759991 containerd[1711]: time="2024-06-25T18:33:26.759002618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:26.774990 kubelet[2820]: E0625 18:33:26.774934 2820 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.0.0-a-71b05979e1?timeout=10s\": dial tcp 10.200.20.27:6443: connect: connection refused" interval="3.2s" Jun 25 18:33:26.796895 systemd[1]: Started cri-containerd-b6088bfdbee0ad1395de1303d50364950e893bb5b541fcaa315c53797b80286d.scope - libcontainer container b6088bfdbee0ad1395de1303d50364950e893bb5b541fcaa315c53797b80286d. Jun 25 18:33:26.802006 systemd[1]: Started cri-containerd-0a5e30aca5f25d6481d5939efd77c4eaced0d1f1d1624d4a9b798642a9da6c3f.scope - libcontainer container 0a5e30aca5f25d6481d5939efd77c4eaced0d1f1d1624d4a9b798642a9da6c3f. 
Jun 25 18:33:26.803485 systemd[1]: Started cri-containerd-4b675beea0a87ab9cce5fa5b4ea8f244097fab2485eeab3230e7e8eb9a3eb707.scope - libcontainer container 4b675beea0a87ab9cce5fa5b4ea8f244097fab2485eeab3230e7e8eb9a3eb707. Jun 25 18:33:26.851718 containerd[1711]: time="2024-06-25T18:33:26.851676722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.0.0-a-71b05979e1,Uid:9f24703a8cae2164979154f65b2e74a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b675beea0a87ab9cce5fa5b4ea8f244097fab2485eeab3230e7e8eb9a3eb707\"" Jun 25 18:33:26.856399 containerd[1711]: time="2024-06-25T18:33:26.856365513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.0.0-a-71b05979e1,Uid:a78f59d8085d8bb586ef5bc9c9f4427d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6088bfdbee0ad1395de1303d50364950e893bb5b541fcaa315c53797b80286d\"" Jun 25 18:33:26.859487 containerd[1711]: time="2024-06-25T18:33:26.859456267Z" level=info msg="CreateContainer within sandbox \"4b675beea0a87ab9cce5fa5b4ea8f244097fab2485eeab3230e7e8eb9a3eb707\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:33:26.859862 containerd[1711]: time="2024-06-25T18:33:26.859816026Z" level=info msg="CreateContainer within sandbox \"b6088bfdbee0ad1395de1303d50364950e893bb5b541fcaa315c53797b80286d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:33:26.864846 containerd[1711]: time="2024-06-25T18:33:26.864701097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.0.0-a-71b05979e1,Uid:b7580d4be9d5ba8b785defc1121c4ddb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a5e30aca5f25d6481d5939efd77c4eaced0d1f1d1624d4a9b798642a9da6c3f\"" Jun 25 18:33:26.868444 containerd[1711]: time="2024-06-25T18:33:26.868326450Z" level=info msg="CreateContainer within sandbox \"0a5e30aca5f25d6481d5939efd77c4eaced0d1f1d1624d4a9b798642a9da6c3f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:33:26.924072 kubelet[2820]: I0625 18:33:26.924041 2820 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:26.924438 kubelet[2820]: E0625 18:33:26.924415 2820 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.27:6443/api/v1/nodes\": dial tcp 10.200.20.27:6443: connect: connection refused" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:26.943471 containerd[1711]: time="2024-06-25T18:33:26.943418588Z" level=info msg="CreateContainer within sandbox \"4b675beea0a87ab9cce5fa5b4ea8f244097fab2485eeab3230e7e8eb9a3eb707\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"876b626dd9ec7a4e0ae6a0396fe1356e42c6c0e673c707d7bbe7673ab935c3ac\"" Jun 25 18:33:26.944261 containerd[1711]: time="2024-06-25T18:33:26.943987867Z" level=info msg="StartContainer for \"876b626dd9ec7a4e0ae6a0396fe1356e42c6c0e673c707d7bbe7673ab935c3ac\"" Jun 25 18:33:26.964739 containerd[1711]: time="2024-06-25T18:33:26.964695267Z" level=info msg="CreateContainer within sandbox \"b6088bfdbee0ad1395de1303d50364950e893bb5b541fcaa315c53797b80286d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cbd10c77306290252fdbd85356ba7b7521ac34faa6bff6a771eb20f6cd88bace\"" Jun 25 18:33:26.965437 containerd[1711]: time="2024-06-25T18:33:26.965142586Z" level=info msg="StartContainer for \"cbd10c77306290252fdbd85356ba7b7521ac34faa6bff6a771eb20f6cd88bace\"" Jun 25 18:33:26.965884 
containerd[1711]: time="2024-06-25T18:33:26.965850025Z" level=info msg="CreateContainer within sandbox \"0a5e30aca5f25d6481d5939efd77c4eaced0d1f1d1624d4a9b798642a9da6c3f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5194c9594f385e1947f5555c346c1a3683050053527c54042fc335e2994a32f5\"" Jun 25 18:33:26.967617 containerd[1711]: time="2024-06-25T18:33:26.966195224Z" level=info msg="StartContainer for \"5194c9594f385e1947f5555c346c1a3683050053527c54042fc335e2994a32f5\"" Jun 25 18:33:26.967355 systemd[1]: Started cri-containerd-876b626dd9ec7a4e0ae6a0396fe1356e42c6c0e673c707d7bbe7673ab935c3ac.scope - libcontainer container 876b626dd9ec7a4e0ae6a0396fe1356e42c6c0e673c707d7bbe7673ab935c3ac. Jun 25 18:33:27.002458 systemd[1]: Started cri-containerd-cbd10c77306290252fdbd85356ba7b7521ac34faa6bff6a771eb20f6cd88bace.scope - libcontainer container cbd10c77306290252fdbd85356ba7b7521ac34faa6bff6a771eb20f6cd88bace. Jun 25 18:33:27.011509 systemd[1]: Started cri-containerd-5194c9594f385e1947f5555c346c1a3683050053527c54042fc335e2994a32f5.scope - libcontainer container 5194c9594f385e1947f5555c346c1a3683050053527c54042fc335e2994a32f5. Jun 25 18:33:27.025115 containerd[1711]: time="2024-06-25T18:33:27.025077793Z" level=info msg="StartContainer for \"876b626dd9ec7a4e0ae6a0396fe1356e42c6c0e673c707d7bbe7673ab935c3ac\" returns successfully" Jun 25 18:33:27.059418 containerd[1711]: time="2024-06-25T18:33:27.059372568Z" level=info msg="StartContainer for \"cbd10c77306290252fdbd85356ba7b7521ac34faa6bff6a771eb20f6cd88bace\" returns successfully" Jun 25 18:33:27.067471 containerd[1711]: time="2024-06-25T18:33:27.067426432Z" level=info msg="StartContainer for \"5194c9594f385e1947f5555c346c1a3683050053527c54042fc335e2994a32f5\" returns successfully" Jun 25 18:33:27.133721 kubelet[2820]: W0625 18:33:27.133662 2820 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-71b05979e1&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:27.133721 kubelet[2820]: E0625 18:33:27.133701 2820 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.0.0-a-71b05979e1&limit=500&resourceVersion=0": dial tcp 10.200.20.27:6443: connect: connection refused Jun 25 18:33:29.597591 kubelet[2820]: E0625 18:33:29.597559 2820 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4012.0.0-a-71b05979e1" not found Jun 25 18:33:29.959261 kubelet[2820]: E0625 18:33:29.958929 2820 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4012.0.0-a-71b05979e1" not found Jun 25 18:33:29.979378 kubelet[2820]: E0625 18:33:29.979338 2820 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012.0.0-a-71b05979e1\" not found" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:30.126365 kubelet[2820]: I0625 18:33:30.126314 2820 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:30.132052 kubelet[2820]: I0625 18:33:30.132011 2820 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:30.142021 kubelet[2820]: E0625 
18:33:30.141481 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:30.242912 kubelet[2820]: E0625 18:33:30.242597 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:30.343729 kubelet[2820]: E0625 18:33:30.343689 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:30.444525 kubelet[2820]: E0625 18:33:30.444473 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:30.545843 kubelet[2820]: E0625 18:33:30.545632 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:30.646078 kubelet[2820]: E0625 18:33:30.646042 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:30.746542 kubelet[2820]: E0625 18:33:30.746504 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:30.847392 kubelet[2820]: E0625 18:33:30.847287 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:30.947863 kubelet[2820]: E0625 18:33:30.947821 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:31.048593 kubelet[2820]: E0625 18:33:31.048560 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:31.149134 kubelet[2820]: E0625 18:33:31.149024 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:31.249643 kubelet[2820]: E0625 18:33:31.249599 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:31.350110 kubelet[2820]: E0625 18:33:31.350069 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:31.450868 kubelet[2820]: E0625 18:33:31.450621 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:31.551052 kubelet[2820]: E0625 18:33:31.551015 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:31.617103 systemd[1]: Reloading requested from client PID 3092 ('systemctl') (unit session-9.scope)... Jun 25 18:33:31.617434 systemd[1]: Reloading... Jun 25 18:33:31.651487 kubelet[2820]: E0625 18:33:31.651451 2820 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.0.0-a-71b05979e1\" not found" Jun 25 18:33:31.693217 zram_generator::config[3129]: No configuration found. 
Jun 25 18:33:31.764751 kubelet[2820]: I0625 18:33:31.764672 2820 apiserver.go:52] "Watching apiserver" Jun 25 18:33:31.771364 kubelet[2820]: I0625 18:33:31.771312 2820 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:33:31.814246 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:33:31.904869 systemd[1]: Reloading finished in 287 ms. Jun 25 18:33:31.944930 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:33:31.960051 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:33:31.960325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:33:31.960379 systemd[1]: kubelet.service: Consumed 1.045s CPU time, 117.1M memory peak, 0B memory swap peak. Jun 25 18:33:31.965391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:33:32.054521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:33:32.063458 (kubelet)[3193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:33:32.128597 kubelet[3193]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:33:32.128597 kubelet[3193]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:33:32.128597 kubelet[3193]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:33:32.128597 kubelet[3193]: I0625 18:33:32.125165 3193 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:33:32.433198 kubelet[3193]: I0625 18:33:32.143319 3193 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:33:32.433198 kubelet[3193]: I0625 18:33:32.143341 3193 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:33:32.433198 kubelet[3193]: I0625 18:33:32.143546 3193 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:33:32.436374 kubelet[3193]: I0625 18:33:32.436351 3193 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:33:32.437536 kubelet[3193]: I0625 18:33:32.437502 3193 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:33:32.442419 kubelet[3193]: W0625 18:33:32.442399 3193 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 18:33:32.442937 kubelet[3193]: I0625 18:33:32.442921 3193 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:33:32.443118 kubelet[3193]: I0625 18:33:32.443101 3193 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:33:32.443293 kubelet[3193]: I0625 18:33:32.443276 3193 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:33:32.443388 kubelet[3193]: I0625 18:33:32.443307 3193 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:33:32.443388 kubelet[3193]: I0625 18:33:32.443315 3193 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:33:32.443388 kubelet[3193]: I0625 18:33:32.443349 3193 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:33:32.443480 kubelet[3193]: I0625 18:33:32.443432 3193 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:33:32.443480 kubelet[3193]: I0625 18:33:32.443446 3193 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:33:32.443480 kubelet[3193]: I0625 18:33:32.443468 3193 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:33:32.443480 kubelet[3193]: I0625 18:33:32.443477 3193 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:33:32.444826 kubelet[3193]: I0625 18:33:32.444808 3193 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:33:32.445363 kubelet[3193]: I0625 18:33:32.445344 3193 server.go:1232] "Started kubelet" Jun 25 18:33:32.448821 kubelet[3193]: I0625 18:33:32.448796 3193 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:33:32.460225 kubelet[3193]: I0625 18:33:32.460169 3193 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:33:32.460662 kubelet[3193]: I0625 18:33:32.460634 3193 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:33:32.460804 kubelet[3193]: I0625 18:33:32.460778 3193 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:33:32.463240 kubelet[3193]: I0625 18:33:32.462717 3193 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:33:32.464132 
kubelet[3193]: I0625 18:33:32.463776 3193 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:33:32.467158 kubelet[3193]: I0625 18:33:32.466826 3193 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:33:32.468902 kubelet[3193]: I0625 18:33:32.468158 3193 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:33:32.468990 kubelet[3193]: I0625 18:33:32.468967 3193 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:33:32.469031 kubelet[3193]: I0625 18:33:32.468994 3193 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:33:32.469031 kubelet[3193]: I0625 18:33:32.469009 3193 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:33:32.469073 kubelet[3193]: E0625 18:33:32.469058 3193 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:33:32.469143 kubelet[3193]: I0625 18:33:32.469131 3193 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:33:32.472621 kubelet[3193]: E0625 18:33:32.472601 3193 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:33:32.473368 kubelet[3193]: E0625 18:33:32.473354 3193 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:33:32.564486 kubelet[3193]: I0625 18:33:32.564453 3193 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.570495 kubelet[3193]: E0625 18:33:32.570416 3193 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:33:32.583160 kubelet[3193]: I0625 18:33:32.583139 3193 kubelet_node_status.go:108] "Node was previously registered" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.583333 kubelet[3193]: I0625 18:33:32.583317 3193 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.585518 kubelet[3193]: I0625 18:33:32.585395 3193 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:33:32.585635 kubelet[3193]: I0625 18:33:32.585624 3193 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:33:32.586373 kubelet[3193]: I0625 18:33:32.585683 3193 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:33:32.586591 kubelet[3193]: I0625 18:33:32.586483 3193 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:33:32.586591 kubelet[3193]: I0625 18:33:32.586511 3193 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:33:32.586591 kubelet[3193]: I0625 18:33:32.586537 3193 policy_none.go:49] "None policy: Start" Jun 25 18:33:32.588448 kubelet[3193]: I0625 18:33:32.588430 3193 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:33:32.589604 kubelet[3193]: I0625 18:33:32.588633 3193 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:33:32.589604 kubelet[3193]: I0625 18:33:32.588819 3193 state_mem.go:75] "Updated machine memory state" Jun 25 18:33:32.597343 kubelet[3193]: I0625 18:33:32.597324 3193 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" 
err="checkpoint is not found" Jun 25 18:33:32.598336 kubelet[3193]: I0625 18:33:32.597890 3193 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:33:32.770891 kubelet[3193]: I0625 18:33:32.770847 3193 topology_manager.go:215] "Topology Admit Handler" podUID="b7580d4be9d5ba8b785defc1121c4ddb" podNamespace="kube-system" podName="kube-apiserver-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.771020 kubelet[3193]: I0625 18:33:32.770956 3193 topology_manager.go:215] "Topology Admit Handler" podUID="a78f59d8085d8bb586ef5bc9c9f4427d" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.771020 kubelet[3193]: I0625 18:33:32.771008 3193 topology_manager.go:215] "Topology Admit Handler" podUID="9f24703a8cae2164979154f65b2e74a3" podNamespace="kube-system" podName="kube-scheduler-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.781206 kubelet[3193]: W0625 18:33:32.780979 3193 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:33:32.782358 kubelet[3193]: W0625 18:33:32.782329 3193 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:33:32.782854 kubelet[3193]: W0625 18:33:32.782832 3193 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:33:32.862927 kubelet[3193]: I0625 18:33:32.862886 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7580d4be9d5ba8b785defc1121c4ddb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.0.0-a-71b05979e1\" (UID: \"b7580d4be9d5ba8b785defc1121c4ddb\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.862927 kubelet[3193]: I0625 18:33:32.862930 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a78f59d8085d8bb586ef5bc9c9f4427d-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.0.0-a-71b05979e1\" (UID: \"a78f59d8085d8bb586ef5bc9c9f4427d\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.863095 kubelet[3193]: I0625 18:33:32.862952 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f24703a8cae2164979154f65b2e74a3-kubeconfig\") pod \"kube-scheduler-ci-4012.0.0-a-71b05979e1\" (UID: \"9f24703a8cae2164979154f65b2e74a3\") " pod="kube-system/kube-scheduler-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.863095 kubelet[3193]: I0625 18:33:32.862984 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7580d4be9d5ba8b785defc1121c4ddb-ca-certs\") pod \"kube-apiserver-ci-4012.0.0-a-71b05979e1\" (UID: \"b7580d4be9d5ba8b785defc1121c4ddb\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.863095 kubelet[3193]: I0625 18:33:32.863002 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7580d4be9d5ba8b785defc1121c4ddb-k8s-certs\") pod \"kube-apiserver-ci-4012.0.0-a-71b05979e1\" (UID: 
\"b7580d4be9d5ba8b785defc1121c4ddb\") " pod="kube-system/kube-apiserver-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.863095 kubelet[3193]: I0625 18:33:32.863019 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a78f59d8085d8bb586ef5bc9c9f4427d-ca-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-71b05979e1\" (UID: \"a78f59d8085d8bb586ef5bc9c9f4427d\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.863095 kubelet[3193]: I0625 18:33:32.863039 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a78f59d8085d8bb586ef5bc9c9f4427d-k8s-certs\") pod \"kube-controller-manager-ci-4012.0.0-a-71b05979e1\" (UID: \"a78f59d8085d8bb586ef5bc9c9f4427d\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.863236 kubelet[3193]: I0625 18:33:32.863070 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a78f59d8085d8bb586ef5bc9c9f4427d-kubeconfig\") pod \"kube-controller-manager-ci-4012.0.0-a-71b05979e1\" (UID: \"a78f59d8085d8bb586ef5bc9c9f4427d\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:32.863236 kubelet[3193]: I0625 18:33:32.863096 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a78f59d8085d8bb586ef5bc9c9f4427d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.0.0-a-71b05979e1\" (UID: \"a78f59d8085d8bb586ef5bc9c9f4427d\") " pod="kube-system/kube-controller-manager-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:33.444652 kubelet[3193]: I0625 18:33:33.444587 3193 apiserver.go:52] "Watching apiserver" Jun 25 18:33:33.461754 kubelet[3193]: I0625 18:33:33.461713 3193 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:33:33.555706 kubelet[3193]: W0625 18:33:33.555666 3193 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 25 18:33:33.555847 kubelet[3193]: E0625 18:33:33.555731 3193 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012.0.0-a-71b05979e1\" already exists" pod="kube-system/kube-apiserver-ci-4012.0.0-a-71b05979e1" Jun 25 18:33:33.616865 kubelet[3193]: I0625 18:33:33.616826 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012.0.0-a-71b05979e1" podStartSLOduration=1.616765403 podCreationTimestamp="2024-06-25 18:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:33:33.596531125 +0000 UTC m=+1.529029250" watchObservedRunningTime="2024-06-25 18:33:33.616765403 +0000 UTC m=+1.549263488" Jun 25 18:33:33.636288 kubelet[3193]: I0625 18:33:33.636246 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012.0.0-a-71b05979e1" podStartSLOduration=1.636205884 podCreationTimestamp="2024-06-25 18:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:33:33.620798915 +0000 
UTC m=+1.553297040" watchObservedRunningTime="2024-06-25 18:33:33.636205884 +0000 UTC m=+1.568704009" Jun 25 18:33:37.634910 kubelet[3193]: I0625 18:33:37.634772 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.0.0-a-71b05979e1" podStartSLOduration=5.634734067 podCreationTimestamp="2024-06-25 18:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:33:33.639014158 +0000 UTC m=+1.571512283" watchObservedRunningTime="2024-06-25 18:33:37.634734067 +0000 UTC m=+5.567232192" Jun 25 18:33:37.901337 sudo[2327]: pam_unix(sudo:session): session closed for user root Jun 25 18:33:37.985721 sshd[2324]: pam_unix(sshd:session): session closed for user core Jun 25 18:33:37.988770 systemd-logind[1685]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:33:37.989001 systemd[1]: sshd@6-10.200.20.27:22-10.200.16.10:41542.service: Deactivated successfully. Jun 25 18:33:37.990698 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:33:37.990971 systemd[1]: session-9.scope: Consumed 6.492s CPU time, 133.7M memory peak, 0B memory swap peak. Jun 25 18:33:37.993633 systemd-logind[1685]: Removed session 9. Jun 25 18:33:45.248768 kubelet[3193]: I0625 18:33:45.248731 3193 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:33:45.249196 containerd[1711]: time="2024-06-25T18:33:45.249106515Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:33:45.249555 kubelet[3193]: I0625 18:33:45.249529 3193 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:33:45.748659 kubelet[3193]: I0625 18:33:45.747966 3193 topology_manager.go:215] "Topology Admit Handler" podUID="4b723059-4f8e-4d72-a5c9-97ff7b2a688e" podNamespace="kube-system" podName="kube-proxy-ssh8p" Jun 25 18:33:45.756496 systemd[1]: Created slice kubepods-besteffort-pod4b723059_4f8e_4d72_a5c9_97ff7b2a688e.slice - libcontainer container kubepods-besteffort-pod4b723059_4f8e_4d72_a5c9_97ff7b2a688e.slice. 
Jun 25 18:33:45.836087 kubelet[3193]: I0625 18:33:45.836059 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mhc5\" (UniqueName: \"kubernetes.io/projected/4b723059-4f8e-4d72-a5c9-97ff7b2a688e-kube-api-access-7mhc5\") pod \"kube-proxy-ssh8p\" (UID: \"4b723059-4f8e-4d72-a5c9-97ff7b2a688e\") " pod="kube-system/kube-proxy-ssh8p" Jun 25 18:33:45.836291 kubelet[3193]: I0625 18:33:45.836279 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b723059-4f8e-4d72-a5c9-97ff7b2a688e-lib-modules\") pod \"kube-proxy-ssh8p\" (UID: \"4b723059-4f8e-4d72-a5c9-97ff7b2a688e\") " pod="kube-system/kube-proxy-ssh8p" Jun 25 18:33:45.836441 kubelet[3193]: I0625 18:33:45.836360 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4b723059-4f8e-4d72-a5c9-97ff7b2a688e-kube-proxy\") pod \"kube-proxy-ssh8p\" (UID: \"4b723059-4f8e-4d72-a5c9-97ff7b2a688e\") " pod="kube-system/kube-proxy-ssh8p" Jun 25 18:33:45.836441 kubelet[3193]: I0625 18:33:45.836388 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b723059-4f8e-4d72-a5c9-97ff7b2a688e-xtables-lock\") pod \"kube-proxy-ssh8p\" (UID: \"4b723059-4f8e-4d72-a5c9-97ff7b2a688e\") " pod="kube-system/kube-proxy-ssh8p" Jun 25 18:33:46.065799 containerd[1711]: time="2024-06-25T18:33:46.065749414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ssh8p,Uid:4b723059-4f8e-4d72-a5c9-97ff7b2a688e,Namespace:kube-system,Attempt:0,}" Jun 25 18:33:46.113683 containerd[1711]: time="2024-06-25T18:33:46.113350360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:33:46.113683 containerd[1711]: time="2024-06-25T18:33:46.113399680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:46.113683 containerd[1711]: time="2024-06-25T18:33:46.113417200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:33:46.113683 containerd[1711]: time="2024-06-25T18:33:46.113430199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:46.128490 systemd[1]: run-containerd-runc-k8s.io-b8b676a50715a626acf4efb8c77d9f8257c45c02946f8ff2baf8de4d39aea8d8-runc.RV7Dg9.mount: Deactivated successfully. Jun 25 18:33:46.136397 systemd[1]: Started cri-containerd-b8b676a50715a626acf4efb8c77d9f8257c45c02946f8ff2baf8de4d39aea8d8.scope - libcontainer container b8b676a50715a626acf4efb8c77d9f8257c45c02946f8ff2baf8de4d39aea8d8. Jun 25 18:33:46.160205 kubelet[3193]: I0625 18:33:46.158679 3193 topology_manager.go:215] "Topology Admit Handler" podUID="ced136da-6eb7-442b-8303-e9432a3a831c" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-85vtm" Jun 25 18:33:46.170460 systemd[1]: Created slice kubepods-besteffort-podced136da_6eb7_442b_8303_e9432a3a831c.slice - libcontainer container kubepods-besteffort-podced136da_6eb7_442b_8303_e9432a3a831c.slice. 
Jun 25 18:33:46.203741 containerd[1711]: time="2024-06-25T18:33:46.203328901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ssh8p,Uid:4b723059-4f8e-4d72-a5c9-97ff7b2a688e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8b676a50715a626acf4efb8c77d9f8257c45c02946f8ff2baf8de4d39aea8d8\"" Jun 25 18:33:46.211569 containerd[1711]: time="2024-06-25T18:33:46.211243725Z" level=info msg="CreateContainer within sandbox \"b8b676a50715a626acf4efb8c77d9f8257c45c02946f8ff2baf8de4d39aea8d8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:33:46.240104 kubelet[3193]: I0625 18:33:46.240065 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ced136da-6eb7-442b-8303-e9432a3a831c-var-lib-calico\") pod \"tigera-operator-76c4974c85-85vtm\" (UID: \"ced136da-6eb7-442b-8303-e9432a3a831c\") " pod="tigera-operator/tigera-operator-76c4974c85-85vtm" Jun 25 18:33:46.240104 kubelet[3193]: I0625 18:33:46.240113 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdwd7\" (UniqueName: \"kubernetes.io/projected/ced136da-6eb7-442b-8303-e9432a3a831c-kube-api-access-kdwd7\") pod \"tigera-operator-76c4974c85-85vtm\" (UID: \"ced136da-6eb7-442b-8303-e9432a3a831c\") " pod="tigera-operator/tigera-operator-76c4974c85-85vtm" Jun 25 18:33:46.249167 containerd[1711]: time="2024-06-25T18:33:46.249123490Z" level=info msg="CreateContainer within sandbox \"b8b676a50715a626acf4efb8c77d9f8257c45c02946f8ff2baf8de4d39aea8d8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"538fb66763b4b19674c158c983206cce5850f2b0f8865d35e0034effc6f78032\"" Jun 25 18:33:46.251166 containerd[1711]: time="2024-06-25T18:33:46.251132846Z" level=info msg="StartContainer for \"538fb66763b4b19674c158c983206cce5850f2b0f8865d35e0034effc6f78032\"" Jun 25 18:33:46.273359 systemd[1]: Started cri-containerd-538fb66763b4b19674c158c983206cce5850f2b0f8865d35e0034effc6f78032.scope - libcontainer container 538fb66763b4b19674c158c983206cce5850f2b0f8865d35e0034effc6f78032. Jun 25 18:33:46.304104 containerd[1711]: time="2024-06-25T18:33:46.304056261Z" level=info msg="StartContainer for \"538fb66763b4b19674c158c983206cce5850f2b0f8865d35e0034effc6f78032\" returns successfully" Jun 25 18:33:46.474107 containerd[1711]: time="2024-06-25T18:33:46.473999644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-85vtm,Uid:ced136da-6eb7-442b-8303-e9432a3a831c,Namespace:tigera-operator,Attempt:0,}" Jun 25 18:33:46.525211 containerd[1711]: time="2024-06-25T18:33:46.525070983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:33:46.526207 containerd[1711]: time="2024-06-25T18:33:46.525519382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:46.526290 containerd[1711]: time="2024-06-25T18:33:46.526214700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:33:46.526290 containerd[1711]: time="2024-06-25T18:33:46.526231140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:46.543335 systemd[1]: Started cri-containerd-cb51ec02a6f253554b81c1a090d0271fab7631e564a64946dae3db77c89ed2ed.scope - libcontainer container cb51ec02a6f253554b81c1a090d0271fab7631e564a64946dae3db77c89ed2ed. Jun 25 18:33:46.576355 containerd[1711]: time="2024-06-25T18:33:46.576297641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-85vtm,Uid:ced136da-6eb7-442b-8303-e9432a3a831c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cb51ec02a6f253554b81c1a090d0271fab7631e564a64946dae3db77c89ed2ed\"" Jun 25 18:33:46.579606 containerd[1711]: time="2024-06-25T18:33:46.579414155Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 18:33:48.109940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2040676522.mount: Deactivated successfully. Jun 25 18:33:48.741065 containerd[1711]: time="2024-06-25T18:33:48.740301789Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:48.744372 containerd[1711]: time="2024-06-25T18:33:48.744332060Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473594" Jun 25 18:33:48.749084 containerd[1711]: time="2024-06-25T18:33:48.749027651Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:48.755006 containerd[1711]: time="2024-06-25T18:33:48.754953758Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:48.756588 containerd[1711]: time="2024-06-25T18:33:48.755985876Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.176537521s" Jun 25 18:33:48.756588 containerd[1711]: time="2024-06-25T18:33:48.756023036Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jun 25 18:33:48.759228 containerd[1711]: time="2024-06-25T18:33:48.758775150Z" level=info msg="CreateContainer within sandbox \"cb51ec02a6f253554b81c1a090d0271fab7631e564a64946dae3db77c89ed2ed\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 18:33:48.802681 containerd[1711]: time="2024-06-25T18:33:48.802636660Z" level=info msg="CreateContainer within sandbox \"cb51ec02a6f253554b81c1a090d0271fab7631e564a64946dae3db77c89ed2ed\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"acd6b727b5fd20c7a1fdf1c7449d93d8a01e4f57fd1bc5a7bd440c8151380432\"" Jun 25 18:33:48.803526 containerd[1711]: time="2024-06-25T18:33:48.803495978Z" level=info msg="StartContainer for \"acd6b727b5fd20c7a1fdf1c7449d93d8a01e4f57fd1bc5a7bd440c8151380432\"" Jun 25 18:33:48.829335 systemd[1]: Started cri-containerd-acd6b727b5fd20c7a1fdf1c7449d93d8a01e4f57fd1bc5a7bd440c8151380432.scope - libcontainer container acd6b727b5fd20c7a1fdf1c7449d93d8a01e4f57fd1bc5a7bd440c8151380432. 
Jun 25 18:33:48.856025 containerd[1711]: time="2024-06-25T18:33:48.855970550Z" level=info msg="StartContainer for \"acd6b727b5fd20c7a1fdf1c7449d93d8a01e4f57fd1bc5a7bd440c8151380432\" returns successfully" Jun 25 18:33:49.586759 kubelet[3193]: I0625 18:33:49.586546 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ssh8p" podStartSLOduration=4.58651296 podCreationTimestamp="2024-06-25 18:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:33:46.581395871 +0000 UTC m=+14.513893996" watchObservedRunningTime="2024-06-25 18:33:49.58651296 +0000 UTC m=+17.519011085" Jun 25 18:33:49.586759 kubelet[3193]: I0625 18:33:49.586625 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-85vtm" podStartSLOduration=1.408643562 podCreationTimestamp="2024-06-25 18:33:46 +0000 UTC" firstStartedPulling="2024-06-25 18:33:46.578391517 +0000 UTC m=+14.510889642" lastFinishedPulling="2024-06-25 18:33:48.756357555 +0000 UTC m=+16.688855680" observedRunningTime="2024-06-25 18:33:49.586458681 +0000 UTC m=+17.518956766" watchObservedRunningTime="2024-06-25 18:33:49.5866096 +0000 UTC m=+17.519107725" Jun 25 18:33:53.419430 kubelet[3193]: I0625 18:33:53.419379 3193 topology_manager.go:215] "Topology Admit Handler" podUID="3f73f535-2c81-4c68-b3d3-81f0aed1692a" podNamespace="calico-system" podName="calico-typha-56db76dcf6-vpl7x" Jun 25 18:33:53.428033 systemd[1]: Created slice kubepods-besteffort-pod3f73f535_2c81_4c68_b3d3_81f0aed1692a.slice - libcontainer container kubepods-besteffort-pod3f73f535_2c81_4c68_b3d3_81f0aed1692a.slice. Jun 25 18:33:53.483186 kubelet[3193]: I0625 18:33:53.483130 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3f73f535-2c81-4c68-b3d3-81f0aed1692a-typha-certs\") pod \"calico-typha-56db76dcf6-vpl7x\" (UID: \"3f73f535-2c81-4c68-b3d3-81f0aed1692a\") " pod="calico-system/calico-typha-56db76dcf6-vpl7x" Jun 25 18:33:53.483934 kubelet[3193]: I0625 18:33:53.483730 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4krv\" (UniqueName: \"kubernetes.io/projected/3f73f535-2c81-4c68-b3d3-81f0aed1692a-kube-api-access-v4krv\") pod \"calico-typha-56db76dcf6-vpl7x\" (UID: \"3f73f535-2c81-4c68-b3d3-81f0aed1692a\") " pod="calico-system/calico-typha-56db76dcf6-vpl7x" Jun 25 18:33:53.483934 kubelet[3193]: I0625 18:33:53.483783 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f73f535-2c81-4c68-b3d3-81f0aed1692a-tigera-ca-bundle\") pod \"calico-typha-56db76dcf6-vpl7x\" (UID: \"3f73f535-2c81-4c68-b3d3-81f0aed1692a\") " pod="calico-system/calico-typha-56db76dcf6-vpl7x" Jun 25 18:33:53.505832 kubelet[3193]: I0625 18:33:53.505548 3193 topology_manager.go:215] "Topology Admit Handler" podUID="30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" podNamespace="calico-system" podName="calico-node-z8bsw" Jun 25 18:33:53.518019 systemd[1]: Created slice kubepods-besteffort-pod30e0d1d4_d9e6_48b9_8af6_c5f7621de9a7.slice - libcontainer container kubepods-besteffort-pod30e0d1d4_d9e6_48b9_8af6_c5f7621de9a7.slice. 
Jun 25 18:33:53.584738 kubelet[3193]: I0625 18:33:53.584026 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-cni-net-dir\") pod \"calico-node-z8bsw\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " pod="calico-system/calico-node-z8bsw" Jun 25 18:33:53.584738 kubelet[3193]: I0625 18:33:53.584070 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-tigera-ca-bundle\") pod \"calico-node-z8bsw\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " pod="calico-system/calico-node-z8bsw" Jun 25 18:33:53.584738 kubelet[3193]: I0625 18:33:53.584092 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-flexvol-driver-host\") pod \"calico-node-z8bsw\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " pod="calico-system/calico-node-z8bsw" Jun 25 18:33:53.584738 kubelet[3193]: I0625 18:33:53.584122 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-policysync\") pod \"calico-node-z8bsw\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " pod="calico-system/calico-node-z8bsw" Jun 25 18:33:53.584738 kubelet[3193]: I0625 18:33:53.584142 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-var-lib-calico\") pod \"calico-node-z8bsw\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " pod="calico-system/calico-node-z8bsw" Jun 25 18:33:53.584968 kubelet[3193]: I0625 18:33:53.584186 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-cni-bin-dir\") pod \"calico-node-z8bsw\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " pod="calico-system/calico-node-z8bsw" Jun 25 18:33:53.584968 kubelet[3193]: I0625 18:33:53.584210 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-cni-log-dir\") pod \"calico-node-z8bsw\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " pod="calico-system/calico-node-z8bsw" Jun 25 18:33:53.584968 kubelet[3193]: I0625 18:33:53.584230 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-node-certs\") pod \"calico-node-z8bsw\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " pod="calico-system/calico-node-z8bsw" Jun 25 18:33:53.584968 kubelet[3193]: I0625 18:33:53.584249 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-var-run-calico\") pod \"calico-node-z8bsw\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " pod="calico-system/calico-node-z8bsw" Jun 25 18:33:53.584968 kubelet[3193]: I0625 18:33:53.584279 3193 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-lib-modules\") pod \"calico-node-z8bsw\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " pod="calico-system/calico-node-z8bsw" Jun 25 18:33:53.585074 kubelet[3193]: I0625 18:33:53.584297 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-xtables-lock\") pod \"calico-node-z8bsw\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " pod="calico-system/calico-node-z8bsw" Jun 25 18:33:53.585074 kubelet[3193]: I0625 18:33:53.584317 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz8hv\" (UniqueName: \"kubernetes.io/projected/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-kube-api-access-mz8hv\") pod \"calico-node-z8bsw\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " pod="calico-system/calico-node-z8bsw" Jun 25 18:33:53.628853 kubelet[3193]: I0625 18:33:53.628819 3193 topology_manager.go:215] "Topology Admit Handler" podUID="13f88024-04f7-4d51-8fb3-1cee9d125eda" podNamespace="calico-system" podName="csi-node-driver-5d2z5" Jun 25 18:33:53.629490 kubelet[3193]: E0625 18:33:53.629345 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5d2z5" podUID="13f88024-04f7-4d51-8fb3-1cee9d125eda" Jun 25 18:33:53.689373 kubelet[3193]: I0625 18:33:53.686234 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/13f88024-04f7-4d51-8fb3-1cee9d125eda-socket-dir\") pod \"csi-node-driver-5d2z5\" (UID: \"13f88024-04f7-4d51-8fb3-1cee9d125eda\") " pod="calico-system/csi-node-driver-5d2z5" Jun 25 18:33:53.689373 kubelet[3193]: I0625 18:33:53.686291 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/13f88024-04f7-4d51-8fb3-1cee9d125eda-registration-dir\") pod \"csi-node-driver-5d2z5\" (UID: \"13f88024-04f7-4d51-8fb3-1cee9d125eda\") " pod="calico-system/csi-node-driver-5d2z5" Jun 25 18:33:53.689373 kubelet[3193]: I0625 18:33:53.686350 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/13f88024-04f7-4d51-8fb3-1cee9d125eda-varrun\") pod \"csi-node-driver-5d2z5\" (UID: \"13f88024-04f7-4d51-8fb3-1cee9d125eda\") " pod="calico-system/csi-node-driver-5d2z5" Jun 25 18:33:53.689373 kubelet[3193]: I0625 18:33:53.686371 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/13f88024-04f7-4d51-8fb3-1cee9d125eda-kubelet-dir\") pod \"csi-node-driver-5d2z5\" (UID: \"13f88024-04f7-4d51-8fb3-1cee9d125eda\") " pod="calico-system/csi-node-driver-5d2z5" Jun 25 18:33:53.689373 kubelet[3193]: I0625 18:33:53.686431 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpbsp\" (UniqueName: \"kubernetes.io/projected/13f88024-04f7-4d51-8fb3-1cee9d125eda-kube-api-access-gpbsp\") pod \"csi-node-driver-5d2z5\" (UID: 
\"13f88024-04f7-4d51-8fb3-1cee9d125eda\") " pod="calico-system/csi-node-driver-5d2z5" Jun 25 18:33:53.709302 kubelet[3193]: E0625 18:33:53.709264 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.709302 kubelet[3193]: W0625 18:33:53.709292 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.709441 kubelet[3193]: E0625 18:33:53.709323 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.732128 containerd[1711]: time="2024-06-25T18:33:53.732082476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56db76dcf6-vpl7x,Uid:3f73f535-2c81-4c68-b3d3-81f0aed1692a,Namespace:calico-system,Attempt:0,}" Jun 25 18:33:53.776514 containerd[1711]: time="2024-06-25T18:33:53.776046465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:33:53.776903 containerd[1711]: time="2024-06-25T18:33:53.776663784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:53.776903 containerd[1711]: time="2024-06-25T18:33:53.776685064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:33:53.776903 containerd[1711]: time="2024-06-25T18:33:53.776709144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:53.788331 kubelet[3193]: E0625 18:33:53.788143 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.788331 kubelet[3193]: W0625 18:33:53.788167 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.788331 kubelet[3193]: E0625 18:33:53.788205 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.790337 kubelet[3193]: E0625 18:33:53.790198 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.790337 kubelet[3193]: W0625 18:33:53.790216 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.790337 kubelet[3193]: E0625 18:33:53.790234 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:53.791123 kubelet[3193]: E0625 18:33:53.790764 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.791123 kubelet[3193]: W0625 18:33:53.790779 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.791123 kubelet[3193]: E0625 18:33:53.790798 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.791123 kubelet[3193]: E0625 18:33:53.791043 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.791123 kubelet[3193]: W0625 18:33:53.791051 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.791123 kubelet[3193]: E0625 18:33:53.791064 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.792758 kubelet[3193]: E0625 18:33:53.792563 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.792758 kubelet[3193]: W0625 18:33:53.792582 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.792758 kubelet[3193]: E0625 18:33:53.792618 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:53.793154 kubelet[3193]: E0625 18:33:53.793065 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.793154 kubelet[3193]: W0625 18:33:53.793080 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.793471 kubelet[3193]: E0625 18:33:53.793457 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.793666 kubelet[3193]: W0625 18:33:53.793556 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.793986 kubelet[3193]: E0625 18:33:53.793826 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.793986 kubelet[3193]: W0625 18:33:53.793839 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.793986 kubelet[3193]: E0625 18:33:53.793853 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.793986 kubelet[3193]: E0625 18:33:53.793880 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.794209 kubelet[3193]: E0625 18:33:53.794161 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.794264 kubelet[3193]: W0625 18:33:53.794253 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.794326 kubelet[3193]: E0625 18:33:53.794317 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.794561 kubelet[3193]: E0625 18:33:53.794549 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.794742 kubelet[3193]: W0625 18:33:53.794624 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.794742 kubelet[3193]: E0625 18:33:53.794642 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.795037 kubelet[3193]: E0625 18:33:53.794845 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:53.795160 kubelet[3193]: E0625 18:33:53.795146 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.795486 systemd[1]: Started cri-containerd-7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec.scope - libcontainer container 7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec. Jun 25 18:33:53.795940 kubelet[3193]: W0625 18:33:53.795651 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.795940 kubelet[3193]: E0625 18:33:53.795679 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.796720 kubelet[3193]: E0625 18:33:53.796591 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.796720 kubelet[3193]: W0625 18:33:53.796605 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.796720 kubelet[3193]: E0625 18:33:53.796628 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.797001 kubelet[3193]: E0625 18:33:53.796907 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.797001 kubelet[3193]: W0625 18:33:53.796922 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.797001 kubelet[3193]: E0625 18:33:53.796957 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.797576 kubelet[3193]: E0625 18:33:53.797398 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.797576 kubelet[3193]: W0625 18:33:53.797412 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.797576 kubelet[3193]: E0625 18:33:53.797445 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:53.798085 kubelet[3193]: E0625 18:33:53.797888 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.798085 kubelet[3193]: W0625 18:33:53.797902 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.798085 kubelet[3193]: E0625 18:33:53.797940 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.798456 kubelet[3193]: E0625 18:33:53.798283 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.798456 kubelet[3193]: W0625 18:33:53.798295 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.798456 kubelet[3193]: E0625 18:33:53.798331 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.798776 kubelet[3193]: E0625 18:33:53.798683 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.798776 kubelet[3193]: W0625 18:33:53.798696 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.798776 kubelet[3193]: E0625 18:33:53.798731 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.799515 kubelet[3193]: E0625 18:33:53.799383 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.799515 kubelet[3193]: W0625 18:33:53.799398 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.799515 kubelet[3193]: E0625 18:33:53.799448 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.799894 kubelet[3193]: E0625 18:33:53.799720 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.799894 kubelet[3193]: W0625 18:33:53.799733 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.799894 kubelet[3193]: E0625 18:33:53.799767 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:53.800280 kubelet[3193]: E0625 18:33:53.800097 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.800280 kubelet[3193]: W0625 18:33:53.800110 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.800280 kubelet[3193]: E0625 18:33:53.800142 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.800700 kubelet[3193]: E0625 18:33:53.800535 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.800700 kubelet[3193]: W0625 18:33:53.800549 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.800700 kubelet[3193]: E0625 18:33:53.800598 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.801026 kubelet[3193]: E0625 18:33:53.800964 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.801026 kubelet[3193]: W0625 18:33:53.800977 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.801026 kubelet[3193]: E0625 18:33:53.800990 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.801771 kubelet[3193]: E0625 18:33:53.801594 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.801771 kubelet[3193]: W0625 18:33:53.801612 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.801771 kubelet[3193]: E0625 18:33:53.801629 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.802138 kubelet[3193]: E0625 18:33:53.802016 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.802138 kubelet[3193]: W0625 18:33:53.802029 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.802138 kubelet[3193]: E0625 18:33:53.802062 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:53.802501 kubelet[3193]: E0625 18:33:53.802488 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.802661 kubelet[3193]: W0625 18:33:53.802602 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.802661 kubelet[3193]: E0625 18:33:53.802622 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.818187 kubelet[3193]: E0625 18:33:53.817986 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:53.818187 kubelet[3193]: W0625 18:33:53.818009 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:53.818187 kubelet[3193]: E0625 18:33:53.818029 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:53.825844 containerd[1711]: time="2024-06-25T18:33:53.825796082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z8bsw,Uid:30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7,Namespace:calico-system,Attempt:0,}" Jun 25 18:33:53.846069 containerd[1711]: time="2024-06-25T18:33:53.845965361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56db76dcf6-vpl7x,Uid:3f73f535-2c81-4c68-b3d3-81f0aed1692a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\"" Jun 25 18:33:53.848441 containerd[1711]: time="2024-06-25T18:33:53.848384956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 18:33:53.881252 containerd[1711]: time="2024-06-25T18:33:53.880237770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:33:53.881252 containerd[1711]: time="2024-06-25T18:33:53.881000448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:53.881252 containerd[1711]: time="2024-06-25T18:33:53.881019408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:33:53.881252 containerd[1711]: time="2024-06-25T18:33:53.881029128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:53.897353 systemd[1]: Started cri-containerd-3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151.scope - libcontainer container 3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151. 
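The block of kubelet messages above is one failure mode repeating: on each plugin probe, kubelet walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the vendor~driver directory nodeagent~uds, and tries to execute the uds binary inside it with the argument "init". The binary is not present yet, so the call produces no output and the subsequent JSON unmarshal fails with "unexpected end of JSON input". A FlexVolume driver is just an executable that answers each verb with a JSON status object on stdout; as a rough sketch only (not the real uds driver, which the Calico flexvol-driver init container appearing further down in this log is responsible for installing), a driver that would satisfy the init probe looks like this:

    #!/bin/sh
    # Hypothetical stand-in for .../volume/exec/nodeagent~uds/uds (illustration only).
    # kubelet invokes the driver as: <driver> init
    case "$1" in
      init)
        # "attach": false tells kubelet this driver has no attach/detach phase.
        echo '{"status": "Success", "capabilities": {"attach": false}}'
        ;;
      *)
        # Decline any verb the driver does not implement.
        echo '{"status": "Not supported"}'
        exit 1
        ;;
    esac

Until the real driver binary lands in that directory, the driver-call.go and plugins.go messages are repetitive but harmless probe noise.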
Jun 25 18:33:53.923353 containerd[1711]: time="2024-06-25T18:33:53.923191281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z8bsw,Uid:30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\"" Jun 25 18:33:55.469719 kubelet[3193]: E0625 18:33:55.469663 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5d2z5" podUID="13f88024-04f7-4d51-8fb3-1cee9d125eda" Jun 25 18:33:55.719209 containerd[1711]: time="2024-06-25T18:33:55.718864394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:55.722730 containerd[1711]: time="2024-06-25T18:33:55.722626867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jun 25 18:33:55.727660 containerd[1711]: time="2024-06-25T18:33:55.727604178Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:55.733907 containerd[1711]: time="2024-06-25T18:33:55.733844846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:55.734958 containerd[1711]: time="2024-06-25T18:33:55.734422245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 1.885974769s" Jun 25 18:33:55.734958 containerd[1711]: time="2024-06-25T18:33:55.734461085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jun 25 18:33:55.737836 containerd[1711]: time="2024-06-25T18:33:55.736842841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:33:55.750740 containerd[1711]: time="2024-06-25T18:33:55.750698656Z" level=info msg="CreateContainer within sandbox \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:33:55.784315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3985894972.mount: Deactivated successfully. 
Jun 25 18:33:55.802861 containerd[1711]: time="2024-06-25T18:33:55.802339722Z" level=info msg="CreateContainer within sandbox \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\"" Jun 25 18:33:55.805371 containerd[1711]: time="2024-06-25T18:33:55.803156320Z" level=info msg="StartContainer for \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\"" Jun 25 18:33:55.833341 systemd[1]: Started cri-containerd-b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926.scope - libcontainer container b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926. Jun 25 18:33:55.867152 containerd[1711]: time="2024-06-25T18:33:55.867105564Z" level=info msg="StartContainer for \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\" returns successfully" Jun 25 18:33:56.598715 containerd[1711]: time="2024-06-25T18:33:56.598667272Z" level=info msg="StopContainer for \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\" with timeout 300 (s)" Jun 25 18:33:56.599210 containerd[1711]: time="2024-06-25T18:33:56.599126511Z" level=info msg="Stop container \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\" with signal terminated" Jun 25 18:33:56.615589 kubelet[3193]: I0625 18:33:56.615544 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-56db76dcf6-vpl7x" podStartSLOduration=1.726312197 podCreationTimestamp="2024-06-25 18:33:53 +0000 UTC" firstStartedPulling="2024-06-25 18:33:53.847334478 +0000 UTC m=+21.779832603" lastFinishedPulling="2024-06-25 18:33:55.735318924 +0000 UTC m=+23.667817049" observedRunningTime="2024-06-25 18:33:56.612128767 +0000 UTC m=+24.544626852" watchObservedRunningTime="2024-06-25 18:33:56.614296643 +0000 UTC m=+24.546794728" Jun 25 18:33:56.617022 systemd[1]: cri-containerd-b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926.scope: Deactivated successfully. Jun 25 18:33:56.740742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926-rootfs.mount: Deactivated successfully. 
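The sandbox and container IDs logged in this sequence can be cross-checked against containerd through the CRI on the node itself. A plausible spot-check, reusing IDs from the log above (crictl may need -r unix:///run/containerd/containerd.sock depending on local configuration), would be:

    # Sandboxes for the typha pod named in the RunPodSandbox messages.
    crictl pods --name calico-typha-56db76dcf6-vpl7x

    # All containers known to the runtime; the calico-typha container shows up
    # here until it is removed at 18:33:57.
    crictl ps -a | grep b618e175c59c

    # Full CRI status of the container as JSON; after the RemoveContainer call
    # later in the log this returns the same NotFound error recorded at 18:33:57.
    crictl inspect b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926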
Jun 25 18:33:57.470065 kubelet[3193]: E0625 18:33:57.469992 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5d2z5" podUID="13f88024-04f7-4d51-8fb3-1cee9d125eda" Jun 25 18:33:57.509275 containerd[1711]: time="2024-06-25T18:33:57.509212694Z" level=info msg="shim disconnected" id=b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926 namespace=k8s.io Jun 25 18:33:57.509769 containerd[1711]: time="2024-06-25T18:33:57.509633333Z" level=warning msg="cleaning up after shim disconnected" id=b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926 namespace=k8s.io Jun 25 18:33:57.509769 containerd[1711]: time="2024-06-25T18:33:57.509655293Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:33:57.525821 containerd[1711]: time="2024-06-25T18:33:57.525686584Z" level=info msg="StopContainer for \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\" returns successfully" Jun 25 18:33:57.528194 containerd[1711]: time="2024-06-25T18:33:57.526383943Z" level=info msg="StopPodSandbox for \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\"" Jun 25 18:33:57.528194 containerd[1711]: time="2024-06-25T18:33:57.526419023Z" level=info msg="Container to stop \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:33:57.531047 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec-shm.mount: Deactivated successfully. Jun 25 18:33:57.544640 systemd[1]: cri-containerd-7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec.scope: Deactivated successfully. Jun 25 18:33:57.564775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec-rootfs.mount: Deactivated successfully. 
Jun 25 18:33:57.578080 containerd[1711]: time="2024-06-25T18:33:57.577865009Z" level=info msg="shim disconnected" id=7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec namespace=k8s.io Jun 25 18:33:57.578080 containerd[1711]: time="2024-06-25T18:33:57.577914289Z" level=warning msg="cleaning up after shim disconnected" id=7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec namespace=k8s.io Jun 25 18:33:57.578080 containerd[1711]: time="2024-06-25T18:33:57.577924129Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:33:57.593811 containerd[1711]: time="2024-06-25T18:33:57.593662620Z" level=info msg="TearDown network for sandbox \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\" successfully" Jun 25 18:33:57.593811 containerd[1711]: time="2024-06-25T18:33:57.593693100Z" level=info msg="StopPodSandbox for \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\" returns successfully" Jun 25 18:33:57.602681 kubelet[3193]: I0625 18:33:57.602295 3193 scope.go:117] "RemoveContainer" containerID="b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926" Jun 25 18:33:57.605843 containerd[1711]: time="2024-06-25T18:33:57.605388839Z" level=info msg="RemoveContainer for \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\"" Jun 25 18:33:57.619676 containerd[1711]: time="2024-06-25T18:33:57.619631173Z" level=info msg="RemoveContainer for \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\" returns successfully" Jun 25 18:33:57.619958 kubelet[3193]: E0625 18:33:57.619922 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.619958 kubelet[3193]: W0625 18:33:57.619944 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.621821 kubelet[3193]: E0625 18:33:57.619963 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.621821 kubelet[3193]: I0625 18:33:57.619989 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3f73f535-2c81-4c68-b3d3-81f0aed1692a-typha-certs\") pod \"3f73f535-2c81-4c68-b3d3-81f0aed1692a\" (UID: \"3f73f535-2c81-4c68-b3d3-81f0aed1692a\") " Jun 25 18:33:57.621821 kubelet[3193]: E0625 18:33:57.620122 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.621821 kubelet[3193]: W0625 18:33:57.620128 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.621821 kubelet[3193]: E0625 18:33:57.620140 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:57.621821 kubelet[3193]: I0625 18:33:57.620159 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f73f535-2c81-4c68-b3d3-81f0aed1692a-tigera-ca-bundle\") pod \"3f73f535-2c81-4c68-b3d3-81f0aed1692a\" (UID: \"3f73f535-2c81-4c68-b3d3-81f0aed1692a\") " Jun 25 18:33:57.621821 kubelet[3193]: E0625 18:33:57.620309 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.621821 kubelet[3193]: W0625 18:33:57.620318 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.621821 kubelet[3193]: E0625 18:33:57.620328 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.622070 kubelet[3193]: I0625 18:33:57.620347 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4krv\" (UniqueName: \"kubernetes.io/projected/3f73f535-2c81-4c68-b3d3-81f0aed1692a-kube-api-access-v4krv\") pod \"3f73f535-2c81-4c68-b3d3-81f0aed1692a\" (UID: \"3f73f535-2c81-4c68-b3d3-81f0aed1692a\") " Jun 25 18:33:57.622070 kubelet[3193]: E0625 18:33:57.620569 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.622070 kubelet[3193]: W0625 18:33:57.620578 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.622070 kubelet[3193]: E0625 18:33:57.620589 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.624574 systemd[1]: var-lib-kubelet-pods-3f73f535\x2d2c81\x2d4c68\x2db3d3\x2d81f0aed1692a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv4krv.mount: Deactivated successfully. Jun 25 18:33:57.626698 kubelet[3193]: I0625 18:33:57.626322 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f73f535-2c81-4c68-b3d3-81f0aed1692a-kube-api-access-v4krv" (OuterVolumeSpecName: "kube-api-access-v4krv") pod "3f73f535-2c81-4c68-b3d3-81f0aed1692a" (UID: "3f73f535-2c81-4c68-b3d3-81f0aed1692a"). InnerVolumeSpecName "kube-api-access-v4krv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:33:57.627061 kubelet[3193]: E0625 18:33:57.626967 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.629045 kubelet[3193]: W0625 18:33:57.626997 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.629045 kubelet[3193]: E0625 18:33:57.627343 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:57.629045 kubelet[3193]: I0625 18:33:57.627404 3193 scope.go:117] "RemoveContainer" containerID="b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926" Jun 25 18:33:57.630700 kubelet[3193]: I0625 18:33:57.629770 3193 topology_manager.go:215] "Topology Admit Handler" podUID="f5203218-9faa-4454-9eea-92e349493be5" podNamespace="calico-system" podName="calico-typha-86f54f78c-5ftvk" Jun 25 18:33:57.630700 kubelet[3193]: E0625 18:33:57.629820 3193 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3f73f535-2c81-4c68-b3d3-81f0aed1692a" containerName="calico-typha" Jun 25 18:33:57.630700 kubelet[3193]: I0625 18:33:57.629845 3193 memory_manager.go:346] "RemoveStaleState removing state" podUID="3f73f535-2c81-4c68-b3d3-81f0aed1692a" containerName="calico-typha" Jun 25 18:33:57.636215 systemd[1]: var-lib-kubelet-pods-3f73f535\x2d2c81\x2d4c68\x2db3d3\x2d81f0aed1692a-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jun 25 18:33:57.637534 kubelet[3193]: E0625 18:33:57.636530 3193 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\": not found" containerID="b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926" Jun 25 18:33:57.637534 kubelet[3193]: I0625 18:33:57.636581 3193 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926"} err="failed to get container status \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\": rpc error: code = NotFound desc = an error occurred when try to find container \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\": not found" Jun 25 18:33:57.637594 containerd[1711]: time="2024-06-25T18:33:57.636321942Z" level=error msg="ContainerStatus for \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b618e175c59c0caed1623e245ac9a9aa655d6b1d7d1e355c96d00fdb58e08926\": not found" Jun 25 18:33:57.642107 kubelet[3193]: I0625 18:33:57.640351 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f73f535-2c81-4c68-b3d3-81f0aed1692a-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "3f73f535-2c81-4c68-b3d3-81f0aed1692a" (UID: "3f73f535-2c81-4c68-b3d3-81f0aed1692a"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:33:57.648397 kubelet[3193]: E0625 18:33:57.648377 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.648528 kubelet[3193]: W0625 18:33:57.648513 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.648980 kubelet[3193]: E0625 18:33:57.648578 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:57.652258 kubelet[3193]: I0625 18:33:57.649958 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f73f535-2c81-4c68-b3d3-81f0aed1692a-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "3f73f535-2c81-4c68-b3d3-81f0aed1692a" (UID: "3f73f535-2c81-4c68-b3d3-81f0aed1692a"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:33:57.650501 systemd[1]: Created slice kubepods-besteffort-podf5203218_9faa_4454_9eea_92e349493be5.slice - libcontainer container kubepods-besteffort-podf5203218_9faa_4454_9eea_92e349493be5.slice. Jun 25 18:33:57.702446 kubelet[3193]: E0625 18:33:57.702260 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.702446 kubelet[3193]: W0625 18:33:57.702440 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.702609 kubelet[3193]: E0625 18:33:57.702469 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.702707 kubelet[3193]: E0625 18:33:57.702671 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.702707 kubelet[3193]: W0625 18:33:57.702684 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.702707 kubelet[3193]: E0625 18:33:57.702697 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.702911 kubelet[3193]: E0625 18:33:57.702875 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.702911 kubelet[3193]: W0625 18:33:57.702890 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.702911 kubelet[3193]: E0625 18:33:57.702902 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.703873 kubelet[3193]: E0625 18:33:57.703848 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.703873 kubelet[3193]: W0625 18:33:57.703865 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.703979 kubelet[3193]: E0625 18:33:57.703893 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:57.704113 kubelet[3193]: E0625 18:33:57.704099 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.704155 kubelet[3193]: W0625 18:33:57.704111 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.704155 kubelet[3193]: E0625 18:33:57.704131 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.704433 kubelet[3193]: E0625 18:33:57.704365 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.704433 kubelet[3193]: W0625 18:33:57.704379 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.704433 kubelet[3193]: E0625 18:33:57.704392 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.704684 kubelet[3193]: E0625 18:33:57.704662 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.704684 kubelet[3193]: W0625 18:33:57.704677 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.704767 kubelet[3193]: E0625 18:33:57.704691 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.705444 kubelet[3193]: E0625 18:33:57.705330 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.705444 kubelet[3193]: W0625 18:33:57.705352 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.705444 kubelet[3193]: E0625 18:33:57.705367 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.705584 kubelet[3193]: E0625 18:33:57.705572 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.705584 kubelet[3193]: W0625 18:33:57.705580 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.705628 kubelet[3193]: E0625 18:33:57.705591 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:57.705751 kubelet[3193]: E0625 18:33:57.705737 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.705751 kubelet[3193]: W0625 18:33:57.705748 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.705822 kubelet[3193]: E0625 18:33:57.705759 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.705968 kubelet[3193]: E0625 18:33:57.705903 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.705968 kubelet[3193]: W0625 18:33:57.705914 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.705968 kubelet[3193]: E0625 18:33:57.705925 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.706219 kubelet[3193]: E0625 18:33:57.706061 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.706219 kubelet[3193]: W0625 18:33:57.706073 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.706219 kubelet[3193]: E0625 18:33:57.706083 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.723646 kubelet[3193]: E0625 18:33:57.722071 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.723646 kubelet[3193]: W0625 18:33:57.722086 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.723646 kubelet[3193]: E0625 18:33:57.722103 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:57.723646 kubelet[3193]: I0625 18:33:57.722135 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f5203218-9faa-4454-9eea-92e349493be5-typha-certs\") pod \"calico-typha-86f54f78c-5ftvk\" (UID: \"f5203218-9faa-4454-9eea-92e349493be5\") " pod="calico-system/calico-typha-86f54f78c-5ftvk" Jun 25 18:33:57.723646 kubelet[3193]: E0625 18:33:57.722417 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.723646 kubelet[3193]: W0625 18:33:57.722430 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.723646 kubelet[3193]: E0625 18:33:57.722455 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.723646 kubelet[3193]: E0625 18:33:57.722602 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.723646 kubelet[3193]: W0625 18:33:57.722609 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.723879 kubelet[3193]: E0625 18:33:57.722620 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.723879 kubelet[3193]: E0625 18:33:57.722742 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.723879 kubelet[3193]: W0625 18:33:57.722748 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.723879 kubelet[3193]: E0625 18:33:57.722757 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.723879 kubelet[3193]: I0625 18:33:57.722776 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5203218-9faa-4454-9eea-92e349493be5-tigera-ca-bundle\") pod \"calico-typha-86f54f78c-5ftvk\" (UID: \"f5203218-9faa-4454-9eea-92e349493be5\") " pod="calico-system/calico-typha-86f54f78c-5ftvk" Jun 25 18:33:57.723879 kubelet[3193]: E0625 18:33:57.722896 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.723879 kubelet[3193]: W0625 18:33:57.722903 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.723879 kubelet[3193]: E0625 18:33:57.722913 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:57.724045 kubelet[3193]: I0625 18:33:57.722932 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnl5v\" (UniqueName: \"kubernetes.io/projected/f5203218-9faa-4454-9eea-92e349493be5-kube-api-access-jnl5v\") pod \"calico-typha-86f54f78c-5ftvk\" (UID: \"f5203218-9faa-4454-9eea-92e349493be5\") " pod="calico-system/calico-typha-86f54f78c-5ftvk" Jun 25 18:33:57.724045 kubelet[3193]: I0625 18:33:57.722956 3193 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-v4krv\" (UniqueName: \"kubernetes.io/projected/3f73f535-2c81-4c68-b3d3-81f0aed1692a-kube-api-access-v4krv\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:57.724045 kubelet[3193]: I0625 18:33:57.722966 3193 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3f73f535-2c81-4c68-b3d3-81f0aed1692a-typha-certs\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:57.724045 kubelet[3193]: I0625 18:33:57.722978 3193 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f73f535-2c81-4c68-b3d3-81f0aed1692a-tigera-ca-bundle\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:57.724045 kubelet[3193]: E0625 18:33:57.723108 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.724045 kubelet[3193]: W0625 18:33:57.723115 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.724045 kubelet[3193]: E0625 18:33:57.723125 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.724045 kubelet[3193]: E0625 18:33:57.723259 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.724229 kubelet[3193]: W0625 18:33:57.723267 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.724229 kubelet[3193]: E0625 18:33:57.723277 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.724229 kubelet[3193]: E0625 18:33:57.723405 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.724229 kubelet[3193]: W0625 18:33:57.723412 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.724229 kubelet[3193]: E0625 18:33:57.723422 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:57.724229 kubelet[3193]: E0625 18:33:57.723524 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.724229 kubelet[3193]: W0625 18:33:57.723531 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.724229 kubelet[3193]: E0625 18:33:57.723541 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.740534 systemd[1]: var-lib-kubelet-pods-3f73f535\x2d2c81\x2d4c68\x2db3d3\x2d81f0aed1692a-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jun 25 18:33:57.823155 containerd[1711]: time="2024-06-25T18:33:57.821842005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:57.823554 kubelet[3193]: E0625 18:33:57.823522 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.823554 kubelet[3193]: W0625 18:33:57.823546 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.823646 kubelet[3193]: E0625 18:33:57.823573 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.823784 kubelet[3193]: E0625 18:33:57.823736 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.823784 kubelet[3193]: W0625 18:33:57.823750 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.823784 kubelet[3193]: E0625 18:33:57.823762 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.824490 kubelet[3193]: E0625 18:33:57.823922 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.824490 kubelet[3193]: W0625 18:33:57.823935 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.824490 kubelet[3193]: E0625 18:33:57.823947 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
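The mount units being deactivated throughout this section carry systemd-escaped kubelet volume paths ("-" stands for "/", "\x2d" for a literal dash, "\x7e" for "~"), which is why the pod UID and volume names look mangled. The escaped names can be translated back on the host with systemd-escape; for example, for the typha-certs unit above:

    # Recover the filesystem path behind the escaped mount unit name.
    systemd-escape --unescape --path \
      'var-lib-kubelet-pods-3f73f535\x2d2c81\x2d4c68\x2db3d3\x2d81f0aed1692a-volumes-kubernetes.io\x7esecret-typha\x2dcerts'
    # -> /var/lib/kubelet/pods/3f73f535-2c81-4c68-b3d3-81f0aed1692a/volumes/kubernetes.io~secret/typha-certs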
Error: unexpected end of JSON input" Jun 25 18:33:57.824490 kubelet[3193]: E0625 18:33:57.824134 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.824490 kubelet[3193]: W0625 18:33:57.824142 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.824490 kubelet[3193]: E0625 18:33:57.824162 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.824490 kubelet[3193]: E0625 18:33:57.824316 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.824490 kubelet[3193]: W0625 18:33:57.824324 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.824490 kubelet[3193]: E0625 18:33:57.824335 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.824850 kubelet[3193]: E0625 18:33:57.824820 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.824850 kubelet[3193]: W0625 18:33:57.824831 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.824850 kubelet[3193]: E0625 18:33:57.824851 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.824914 containerd[1711]: time="2024-06-25T18:33:57.824655159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jun 25 18:33:57.825092 kubelet[3193]: E0625 18:33:57.825053 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.825092 kubelet[3193]: W0625 18:33:57.825070 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.825092 kubelet[3193]: E0625 18:33:57.825090 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:57.825314 kubelet[3193]: E0625 18:33:57.825293 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.825314 kubelet[3193]: W0625 18:33:57.825308 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.825387 kubelet[3193]: E0625 18:33:57.825325 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.825733 kubelet[3193]: E0625 18:33:57.825707 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.825733 kubelet[3193]: W0625 18:33:57.825725 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.825951 kubelet[3193]: E0625 18:33:57.825741 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.826476 kubelet[3193]: E0625 18:33:57.826239 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.826476 kubelet[3193]: W0625 18:33:57.826252 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.826476 kubelet[3193]: E0625 18:33:57.826265 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.827495 kubelet[3193]: E0625 18:33:57.827463 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.827495 kubelet[3193]: W0625 18:33:57.827484 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.827606 kubelet[3193]: E0625 18:33:57.827499 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:57.833184 containerd[1711]: time="2024-06-25T18:33:57.829317671Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:57.833242 kubelet[3193]: E0625 18:33:57.833123 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.833242 kubelet[3193]: W0625 18:33:57.833135 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.833242 kubelet[3193]: E0625 18:33:57.833149 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.833426 kubelet[3193]: E0625 18:33:57.833379 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.833426 kubelet[3193]: W0625 18:33:57.833396 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.833426 kubelet[3193]: E0625 18:33:57.833408 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.833568 kubelet[3193]: E0625 18:33:57.833538 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.833568 kubelet[3193]: W0625 18:33:57.833545 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.833568 kubelet[3193]: E0625 18:33:57.833557 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.835193 kubelet[3193]: E0625 18:33:57.833743 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.835193 kubelet[3193]: W0625 18:33:57.833759 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.835193 kubelet[3193]: E0625 18:33:57.833770 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:57.837476 kubelet[3193]: E0625 18:33:57.837342 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.837476 kubelet[3193]: W0625 18:33:57.837362 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.837476 kubelet[3193]: E0625 18:33:57.837387 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.838091 kubelet[3193]: E0625 18:33:57.838053 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.838091 kubelet[3193]: W0625 18:33:57.838080 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.838091 kubelet[3193]: E0625 18:33:57.838095 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:33:57.842767 containerd[1711]: time="2024-06-25T18:33:57.842711487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:33:57.843415 containerd[1711]: time="2024-06-25T18:33:57.843357765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 2.106469684s" Jun 25 18:33:57.843415 containerd[1711]: time="2024-06-25T18:33:57.843396365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jun 25 18:33:57.848420 kubelet[3193]: E0625 18:33:57.848086 3193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:33:57.848420 kubelet[3193]: W0625 18:33:57.848105 3193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:33:57.848420 kubelet[3193]: E0625 18:33:57.848121 3193 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:33:57.848639 containerd[1711]: time="2024-06-25T18:33:57.848136517Z" level=info msg="CreateContainer within sandbox \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:33:57.896940 containerd[1711]: time="2024-06-25T18:33:57.896885348Z" level=info msg="CreateContainer within sandbox \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ac40e3a1d3b653cf7c98b82e064b2a1e78bb8ef9c331f838fde2080d059dba52\"" Jun 25 18:33:57.897451 containerd[1711]: time="2024-06-25T18:33:57.897429107Z" level=info msg="StartContainer for \"ac40e3a1d3b653cf7c98b82e064b2a1e78bb8ef9c331f838fde2080d059dba52\"" Jun 25 18:33:57.909157 systemd[1]: Removed slice kubepods-besteffort-pod3f73f535_2c81_4c68_b3d3_81f0aed1692a.slice - libcontainer container kubepods-besteffort-pod3f73f535_2c81_4c68_b3d3_81f0aed1692a.slice. Jun 25 18:33:57.938473 systemd[1]: Started cri-containerd-ac40e3a1d3b653cf7c98b82e064b2a1e78bb8ef9c331f838fde2080d059dba52.scope - libcontainer container ac40e3a1d3b653cf7c98b82e064b2a1e78bb8ef9c331f838fde2080d059dba52. Jun 25 18:33:57.959207 containerd[1711]: time="2024-06-25T18:33:57.959044315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86f54f78c-5ftvk,Uid:f5203218-9faa-4454-9eea-92e349493be5,Namespace:calico-system,Attempt:0,}" Jun 25 18:33:57.970953 containerd[1711]: time="2024-06-25T18:33:57.970701014Z" level=info msg="StartContainer for \"ac40e3a1d3b653cf7c98b82e064b2a1e78bb8ef9c331f838fde2080d059dba52\" returns successfully" Jun 25 18:33:58.002996 systemd[1]: cri-containerd-ac40e3a1d3b653cf7c98b82e064b2a1e78bb8ef9c331f838fde2080d059dba52.scope: Deactivated successfully. Jun 25 18:33:58.031690 containerd[1711]: time="2024-06-25T18:33:58.030966504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:33:58.031690 containerd[1711]: time="2024-06-25T18:33:58.031462983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:58.031690 containerd[1711]: time="2024-06-25T18:33:58.031524423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:33:58.031690 containerd[1711]: time="2024-06-25T18:33:58.031554983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:33:58.050357 systemd[1]: Started cri-containerd-c314e2127611910c63bf09d7f4049538f056a9e9b43ba1233b63d49ecf4c820f.scope - libcontainer container c314e2127611910c63bf09d7f4049538f056a9e9b43ba1233b63d49ecf4c820f. 
Jun 25 18:33:58.160935 containerd[1711]: time="2024-06-25T18:33:58.160564788Z" level=info msg="shim disconnected" id=ac40e3a1d3b653cf7c98b82e064b2a1e78bb8ef9c331f838fde2080d059dba52 namespace=k8s.io Jun 25 18:33:58.160935 containerd[1711]: time="2024-06-25T18:33:58.160728988Z" level=warning msg="cleaning up after shim disconnected" id=ac40e3a1d3b653cf7c98b82e064b2a1e78bb8ef9c331f838fde2080d059dba52 namespace=k8s.io Jun 25 18:33:58.161380 containerd[1711]: time="2024-06-25T18:33:58.160740428Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:33:58.162289 containerd[1711]: time="2024-06-25T18:33:58.162162025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86f54f78c-5ftvk,Uid:f5203218-9faa-4454-9eea-92e349493be5,Namespace:calico-system,Attempt:0,} returns sandbox id \"c314e2127611910c63bf09d7f4049538f056a9e9b43ba1233b63d49ecf4c820f\"" Jun 25 18:33:58.174881 containerd[1711]: time="2024-06-25T18:33:58.174754602Z" level=info msg="CreateContainer within sandbox \"c314e2127611910c63bf09d7f4049538f056a9e9b43ba1233b63d49ecf4c820f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:33:58.185237 containerd[1711]: time="2024-06-25T18:33:58.184774064Z" level=warning msg="cleanup warnings time=\"2024-06-25T18:33:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 18:33:58.244334 containerd[1711]: time="2024-06-25T18:33:58.244271675Z" level=info msg="CreateContainer within sandbox \"c314e2127611910c63bf09d7f4049538f056a9e9b43ba1233b63d49ecf4c820f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b7efd1d586c854b55823d631ead406663c1be2cef4dc8ebb8dd165ddabc37338\"" Jun 25 18:33:58.245207 containerd[1711]: time="2024-06-25T18:33:58.245159794Z" level=info msg="StartContainer for \"b7efd1d586c854b55823d631ead406663c1be2cef4dc8ebb8dd165ddabc37338\"" Jun 25 18:33:58.267860 systemd[1]: Started cri-containerd-b7efd1d586c854b55823d631ead406663c1be2cef4dc8ebb8dd165ddabc37338.scope - libcontainer container b7efd1d586c854b55823d631ead406663c1be2cef4dc8ebb8dd165ddabc37338. Jun 25 18:33:58.313639 containerd[1711]: time="2024-06-25T18:33:58.312722951Z" level=info msg="StartContainer for \"b7efd1d586c854b55823d631ead406663c1be2cef4dc8ebb8dd165ddabc37338\" returns successfully" Jun 25 18:33:58.472905 kubelet[3193]: I0625 18:33:58.472871 3193 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3f73f535-2c81-4c68-b3d3-81f0aed1692a" path="/var/lib/kubelet/pods/3f73f535-2c81-4c68-b3d3-81f0aed1692a/volumes" Jun 25 18:33:58.612438 containerd[1711]: time="2024-06-25T18:33:58.612079086Z" level=info msg="StopPodSandbox for \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\"" Jun 25 18:33:58.612438 containerd[1711]: time="2024-06-25T18:33:58.612136126Z" level=info msg="Container to stop \"ac40e3a1d3b653cf7c98b82e064b2a1e78bb8ef9c331f838fde2080d059dba52\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:33:58.625436 systemd[1]: cri-containerd-3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151.scope: Deactivated successfully. 
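The flexvol-driver container exiting right after "StartContainer ... returns successfully" is expected, which is why the "shim disconnected" and runc cleanup-warning lines above are harmless: in the calico-node pod it is an init container whose only job is to drop the uds binary into the host plugin directory kubelet probes, then exit. A rough sketch of that step, with the source path inside the image assumed and the destination taken from the kubelet errors earlier:

```go
// Rough sketch of the short-lived flexvol-driver init container: copy the driver
// binary onto the host and exit. Not the actual Calico implementation.
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// installDriver copies the FlexVolume binary into the kubelet plugin directory,
// writing to a temp file first and renaming so kubelet never runs a partial binary.
func installDriver(src, dstDir string) error {
	if err := os.MkdirAll(dstDir, 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	tmp := filepath.Join(dstDir, "uds.tmp")
	out, err := os.OpenFile(tmp, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	if _, err := io.Copy(out, in); err != nil {
		out.Close()
		return err
	}
	if err := out.Close(); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dstDir, "uds"))
}

func main() {
	// Source path inside the image is an assumption; the destination matches the
	// nodeagent~uds path kubelet probes in the errors above.
	src := "/usr/local/bin/flexvol"
	dst := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"
	if err := installDriver(src, dst); err != nil {
		fmt.Fprintln(os.Stderr, "install failed:", err)
		os.Exit(1)
	}
	fmt.Println("flexvol driver installed; container can exit")
}
```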
Jun 25 18:33:58.635200 kubelet[3193]: I0625 18:33:58.634714 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-86f54f78c-5ftvk" podStartSLOduration=4.634675125 podCreationTimestamp="2024-06-25 18:33:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:33:58.632125169 +0000 UTC m=+26.564623374" watchObservedRunningTime="2024-06-25 18:33:58.634675125 +0000 UTC m=+26.567173250" Jun 25 18:33:58.675688 containerd[1711]: time="2024-06-25T18:33:58.674902011Z" level=info msg="shim disconnected" id=3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151 namespace=k8s.io Jun 25 18:33:58.675688 containerd[1711]: time="2024-06-25T18:33:58.674960011Z" level=warning msg="cleaning up after shim disconnected" id=3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151 namespace=k8s.io Jun 25 18:33:58.675688 containerd[1711]: time="2024-06-25T18:33:58.674968131Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:33:58.688626 containerd[1711]: time="2024-06-25T18:33:58.688472987Z" level=info msg="TearDown network for sandbox \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\" successfully" Jun 25 18:33:58.688626 containerd[1711]: time="2024-06-25T18:33:58.688508227Z" level=info msg="StopPodSandbox for \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\" returns successfully" Jun 25 18:33:58.732164 kubelet[3193]: I0625 18:33:58.731822 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-lib-modules\") pod \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " Jun 25 18:33:58.732417 kubelet[3193]: I0625 18:33:58.732249 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-cni-bin-dir\") pod \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " Jun 25 18:33:58.732417 kubelet[3193]: I0625 18:33:58.732284 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-tigera-ca-bundle\") pod \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " Jun 25 18:33:58.732417 kubelet[3193]: I0625 18:33:58.732305 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-node-certs\") pod \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " Jun 25 18:33:58.732417 kubelet[3193]: I0625 18:33:58.732335 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-policysync\") pod \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " Jun 25 18:33:58.732417 kubelet[3193]: I0625 18:33:58.732352 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-var-lib-calico\") pod \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\" (UID: 
\"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " Jun 25 18:33:58.732417 kubelet[3193]: I0625 18:33:58.732369 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-xtables-lock\") pod \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " Jun 25 18:33:58.732576 kubelet[3193]: I0625 18:33:58.732389 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-var-run-calico\") pod \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " Jun 25 18:33:58.732576 kubelet[3193]: I0625 18:33:58.732426 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz8hv\" (UniqueName: \"kubernetes.io/projected/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-kube-api-access-mz8hv\") pod \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " Jun 25 18:33:58.732576 kubelet[3193]: I0625 18:33:58.732449 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-cni-log-dir\") pod \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " Jun 25 18:33:58.732576 kubelet[3193]: I0625 18:33:58.732471 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-flexvol-driver-host\") pod \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " Jun 25 18:33:58.732667 kubelet[3193]: I0625 18:33:58.732612 3193 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-cni-net-dir\") pod \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\" (UID: \"30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7\") " Jun 25 18:33:58.733291 kubelet[3193]: I0625 18:33:58.732743 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" (UID: "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:33:58.733291 kubelet[3193]: I0625 18:33:58.732754 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" (UID: "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:33:58.733291 kubelet[3193]: I0625 18:33:58.732114 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" (UID: "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:33:58.733291 kubelet[3193]: I0625 18:33:58.732794 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" (UID: "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:33:58.733291 kubelet[3193]: I0625 18:33:58.732988 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" (UID: "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:33:58.733478 kubelet[3193]: I0625 18:33:58.733013 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" (UID: "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:33:58.733478 kubelet[3193]: I0625 18:33:58.733131 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" (UID: "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:33:58.733478 kubelet[3193]: I0625 18:33:58.733452 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-policysync" (OuterVolumeSpecName: "policysync") pod "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" (UID: "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:33:58.735253 kubelet[3193]: I0625 18:33:58.733482 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" (UID: "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:33:58.735253 kubelet[3193]: I0625 18:33:58.733590 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" (UID: "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:33:58.735989 kubelet[3193]: I0625 18:33:58.735964 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-node-certs" (OuterVolumeSpecName: "node-certs") pod "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" (UID: "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:33:58.738900 kubelet[3193]: I0625 18:33:58.738873 3193 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-kube-api-access-mz8hv" (OuterVolumeSpecName: "kube-api-access-mz8hv") pod "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" (UID: "30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7"). InnerVolumeSpecName "kube-api-access-mz8hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:33:58.746611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151-rootfs.mount: Deactivated successfully. Jun 25 18:33:58.746698 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151-shm.mount: Deactivated successfully. Jun 25 18:33:58.746753 systemd[1]: var-lib-kubelet-pods-30e0d1d4\x2dd9e6\x2d48b9\x2d8af6\x2dc5f7621de9a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmz8hv.mount: Deactivated successfully. Jun 25 18:33:58.746804 systemd[1]: var-lib-kubelet-pods-30e0d1d4\x2dd9e6\x2d48b9\x2d8af6\x2dc5f7621de9a7-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jun 25 18:33:58.833136 kubelet[3193]: I0625 18:33:58.833086 3193 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-node-certs\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:58.833447 kubelet[3193]: I0625 18:33:58.833307 3193 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-policysync\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:58.833447 kubelet[3193]: I0625 18:33:58.833327 3193 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-var-lib-calico\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:58.833447 kubelet[3193]: I0625 18:33:58.833342 3193 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-var-run-calico\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:58.833447 kubelet[3193]: I0625 18:33:58.833353 3193 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-xtables-lock\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:58.833447 kubelet[3193]: I0625 18:33:58.833374 3193 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mz8hv\" (UniqueName: \"kubernetes.io/projected/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-kube-api-access-mz8hv\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:58.833447 kubelet[3193]: I0625 18:33:58.833386 3193 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-cni-log-dir\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:58.833447 kubelet[3193]: I0625 18:33:58.833397 3193 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-flexvol-driver-host\") on node \"ci-4012.0.0-a-71b05979e1\" 
DevicePath \"\"" Jun 25 18:33:58.833447 kubelet[3193]: I0625 18:33:58.833406 3193 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-cni-net-dir\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:58.833652 kubelet[3193]: I0625 18:33:58.833415 3193 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-lib-modules\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:58.833652 kubelet[3193]: I0625 18:33:58.833424 3193 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-cni-bin-dir\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:58.833652 kubelet[3193]: I0625 18:33:58.833432 3193 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7-tigera-ca-bundle\") on node \"ci-4012.0.0-a-71b05979e1\" DevicePath \"\"" Jun 25 18:33:59.470743 kubelet[3193]: E0625 18:33:59.470707 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5d2z5" podUID="13f88024-04f7-4d51-8fb3-1cee9d125eda" Jun 25 18:33:59.629595 kubelet[3193]: I0625 18:33:59.629557 3193 scope.go:117] "RemoveContainer" containerID="ac40e3a1d3b653cf7c98b82e064b2a1e78bb8ef9c331f838fde2080d059dba52" Jun 25 18:33:59.632204 containerd[1711]: time="2024-06-25T18:33:59.632133229Z" level=info msg="RemoveContainer for \"ac40e3a1d3b653cf7c98b82e064b2a1e78bb8ef9c331f838fde2080d059dba52\"" Jun 25 18:33:59.639598 systemd[1]: Removed slice kubepods-besteffort-pod30e0d1d4_d9e6_48b9_8af6_c5f7621de9a7.slice - libcontainer container kubepods-besteffort-pod30e0d1d4_d9e6_48b9_8af6_c5f7621de9a7.slice. Jun 25 18:33:59.642885 containerd[1711]: time="2024-06-25T18:33:59.642613689Z" level=info msg="RemoveContainer for \"ac40e3a1d3b653cf7c98b82e064b2a1e78bb8ef9c331f838fde2080d059dba52\" returns successfully" Jun 25 18:33:59.672645 kubelet[3193]: I0625 18:33:59.672603 3193 topology_manager.go:215] "Topology Admit Handler" podUID="1db70406-0b82-4911-a9e3-ed7c58183d63" podNamespace="calico-system" podName="calico-node-j59f2" Jun 25 18:33:59.672989 kubelet[3193]: E0625 18:33:59.672696 3193 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" containerName="flexvol-driver" Jun 25 18:33:59.672989 kubelet[3193]: I0625 18:33:59.672723 3193 memory_manager.go:346] "RemoveStaleState removing state" podUID="30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" containerName="flexvol-driver" Jun 25 18:33:59.681801 systemd[1]: Created slice kubepods-besteffort-pod1db70406_0b82_4911_a9e3_ed7c58183d63.slice - libcontainer container kubepods-besteffort-pod1db70406_0b82_4911_a9e3_ed7c58183d63.slice. 
Jun 25 18:33:59.738507 kubelet[3193]: I0625 18:33:59.738295 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1db70406-0b82-4911-a9e3-ed7c58183d63-node-certs\") pod \"calico-node-j59f2\" (UID: \"1db70406-0b82-4911-a9e3-ed7c58183d63\") " pod="calico-system/calico-node-j59f2" Jun 25 18:33:59.738507 kubelet[3193]: I0625 18:33:59.738331 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1db70406-0b82-4911-a9e3-ed7c58183d63-var-run-calico\") pod \"calico-node-j59f2\" (UID: \"1db70406-0b82-4911-a9e3-ed7c58183d63\") " pod="calico-system/calico-node-j59f2" Jun 25 18:33:59.738507 kubelet[3193]: I0625 18:33:59.738365 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1db70406-0b82-4911-a9e3-ed7c58183d63-tigera-ca-bundle\") pod \"calico-node-j59f2\" (UID: \"1db70406-0b82-4911-a9e3-ed7c58183d63\") " pod="calico-system/calico-node-j59f2" Jun 25 18:33:59.738507 kubelet[3193]: I0625 18:33:59.738387 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1db70406-0b82-4911-a9e3-ed7c58183d63-var-lib-calico\") pod \"calico-node-j59f2\" (UID: \"1db70406-0b82-4911-a9e3-ed7c58183d63\") " pod="calico-system/calico-node-j59f2" Jun 25 18:33:59.738507 kubelet[3193]: I0625 18:33:59.738410 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1db70406-0b82-4911-a9e3-ed7c58183d63-cni-log-dir\") pod \"calico-node-j59f2\" (UID: \"1db70406-0b82-4911-a9e3-ed7c58183d63\") " pod="calico-system/calico-node-j59f2" Jun 25 18:33:59.738739 kubelet[3193]: I0625 18:33:59.738430 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1db70406-0b82-4911-a9e3-ed7c58183d63-flexvol-driver-host\") pod \"calico-node-j59f2\" (UID: \"1db70406-0b82-4911-a9e3-ed7c58183d63\") " pod="calico-system/calico-node-j59f2" Jun 25 18:33:59.738739 kubelet[3193]: I0625 18:33:59.738453 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1db70406-0b82-4911-a9e3-ed7c58183d63-policysync\") pod \"calico-node-j59f2\" (UID: \"1db70406-0b82-4911-a9e3-ed7c58183d63\") " pod="calico-system/calico-node-j59f2" Jun 25 18:33:59.738739 kubelet[3193]: I0625 18:33:59.738472 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1db70406-0b82-4911-a9e3-ed7c58183d63-lib-modules\") pod \"calico-node-j59f2\" (UID: \"1db70406-0b82-4911-a9e3-ed7c58183d63\") " pod="calico-system/calico-node-j59f2" Jun 25 18:33:59.738739 kubelet[3193]: I0625 18:33:59.738490 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1db70406-0b82-4911-a9e3-ed7c58183d63-xtables-lock\") pod \"calico-node-j59f2\" (UID: \"1db70406-0b82-4911-a9e3-ed7c58183d63\") " pod="calico-system/calico-node-j59f2" Jun 25 18:33:59.738739 kubelet[3193]: I0625 18:33:59.738511 3193 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkvlm\" (UniqueName: \"kubernetes.io/projected/1db70406-0b82-4911-a9e3-ed7c58183d63-kube-api-access-qkvlm\") pod \"calico-node-j59f2\" (UID: \"1db70406-0b82-4911-a9e3-ed7c58183d63\") " pod="calico-system/calico-node-j59f2" Jun 25 18:33:59.738853 kubelet[3193]: I0625 18:33:59.738530 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1db70406-0b82-4911-a9e3-ed7c58183d63-cni-bin-dir\") pod \"calico-node-j59f2\" (UID: \"1db70406-0b82-4911-a9e3-ed7c58183d63\") " pod="calico-system/calico-node-j59f2" Jun 25 18:33:59.738853 kubelet[3193]: I0625 18:33:59.738550 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1db70406-0b82-4911-a9e3-ed7c58183d63-cni-net-dir\") pod \"calico-node-j59f2\" (UID: \"1db70406-0b82-4911-a9e3-ed7c58183d63\") " pod="calico-system/calico-node-j59f2" Jun 25 18:33:59.986513 containerd[1711]: time="2024-06-25T18:33:59.986349904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j59f2,Uid:1db70406-0b82-4911-a9e3-ed7c58183d63,Namespace:calico-system,Attempt:0,}" Jun 25 18:34:00.030497 containerd[1711]: time="2024-06-25T18:34:00.030399703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:00.030497 containerd[1711]: time="2024-06-25T18:34:00.030450703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:00.030497 containerd[1711]: time="2024-06-25T18:34:00.030467543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:00.030792 containerd[1711]: time="2024-06-25T18:34:00.030480703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:00.056372 systemd[1]: Started cri-containerd-18f84b66b8777053da9072bd1ec204d805baa27c8355c3ff3048afb530038f83.scope - libcontainer container 18f84b66b8777053da9072bd1ec204d805baa27c8355c3ff3048afb530038f83. 
Jun 25 18:34:00.076247 containerd[1711]: time="2024-06-25T18:34:00.076155820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j59f2,Uid:1db70406-0b82-4911-a9e3-ed7c58183d63,Namespace:calico-system,Attempt:0,} returns sandbox id \"18f84b66b8777053da9072bd1ec204d805baa27c8355c3ff3048afb530038f83\"" Jun 25 18:34:00.080217 containerd[1711]: time="2024-06-25T18:34:00.079882133Z" level=info msg="CreateContainer within sandbox \"18f84b66b8777053da9072bd1ec204d805baa27c8355c3ff3048afb530038f83\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:34:00.122121 containerd[1711]: time="2024-06-25T18:34:00.122076776Z" level=info msg="CreateContainer within sandbox \"18f84b66b8777053da9072bd1ec204d805baa27c8355c3ff3048afb530038f83\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ed3e910b0f0830e943e24a64e7512711f9aaac906ab36f4166aa3274bd16a502\"" Jun 25 18:34:00.123579 containerd[1711]: time="2024-06-25T18:34:00.123449734Z" level=info msg="StartContainer for \"ed3e910b0f0830e943e24a64e7512711f9aaac906ab36f4166aa3274bd16a502\"" Jun 25 18:34:00.147330 systemd[1]: Started cri-containerd-ed3e910b0f0830e943e24a64e7512711f9aaac906ab36f4166aa3274bd16a502.scope - libcontainer container ed3e910b0f0830e943e24a64e7512711f9aaac906ab36f4166aa3274bd16a502. Jun 25 18:34:00.179150 containerd[1711]: time="2024-06-25T18:34:00.178960553Z" level=info msg="StartContainer for \"ed3e910b0f0830e943e24a64e7512711f9aaac906ab36f4166aa3274bd16a502\" returns successfully" Jun 25 18:34:00.186316 systemd[1]: cri-containerd-ed3e910b0f0830e943e24a64e7512711f9aaac906ab36f4166aa3274bd16a502.scope: Deactivated successfully. Jun 25 18:34:00.244724 containerd[1711]: time="2024-06-25T18:34:00.244652273Z" level=info msg="shim disconnected" id=ed3e910b0f0830e943e24a64e7512711f9aaac906ab36f4166aa3274bd16a502 namespace=k8s.io Jun 25 18:34:00.244724 containerd[1711]: time="2024-06-25T18:34:00.244717633Z" level=warning msg="cleaning up after shim disconnected" id=ed3e910b0f0830e943e24a64e7512711f9aaac906ab36f4166aa3274bd16a502 namespace=k8s.io Jun 25 18:34:00.244724 containerd[1711]: time="2024-06-25T18:34:00.244726353Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:34:00.472744 kubelet[3193]: I0625 18:34:00.472650 3193 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7" path="/var/lib/kubelet/pods/30e0d1d4-d9e6-48b9-8af6-c5f7621de9a7/volumes" Jun 25 18:34:00.635391 containerd[1711]: time="2024-06-25T18:34:00.634904843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:34:00.845547 systemd[1]: run-containerd-runc-k8s.io-18f84b66b8777053da9072bd1ec204d805baa27c8355c3ff3048afb530038f83-runc.qH73HL.mount: Deactivated successfully. 
Jun 25 18:34:01.470340 kubelet[3193]: E0625 18:34:01.469744 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5d2z5" podUID="13f88024-04f7-4d51-8fb3-1cee9d125eda" Jun 25 18:34:01.921799 kubelet[3193]: I0625 18:34:01.921761 3193 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:34:03.367611 containerd[1711]: time="2024-06-25T18:34:03.367557575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:03.373131 containerd[1711]: time="2024-06-25T18:34:03.373010051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jun 25 18:34:03.378448 containerd[1711]: time="2024-06-25T18:34:03.378140926Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:03.383061 containerd[1711]: time="2024-06-25T18:34:03.383007722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:03.383895 containerd[1711]: time="2024-06-25T18:34:03.383860241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 2.748918158s" Jun 25 18:34:03.383895 containerd[1711]: time="2024-06-25T18:34:03.383892201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jun 25 18:34:03.387180 containerd[1711]: time="2024-06-25T18:34:03.386922039Z" level=info msg="CreateContainer within sandbox \"18f84b66b8777053da9072bd1ec204d805baa27c8355c3ff3048afb530038f83\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:34:03.435047 containerd[1711]: time="2024-06-25T18:34:03.434997877Z" level=info msg="CreateContainer within sandbox \"18f84b66b8777053da9072bd1ec204d805baa27c8355c3ff3048afb530038f83\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"90ab4696b0dfdd7ecfc049afa0f63a6dd145436fb610b21131e190212f451e64\"" Jun 25 18:34:03.437234 containerd[1711]: time="2024-06-25T18:34:03.435779997Z" level=info msg="StartContainer for \"90ab4696b0dfdd7ecfc049afa0f63a6dd145436fb610b21131e190212f451e64\"" Jun 25 18:34:03.467438 systemd[1]: Started cri-containerd-90ab4696b0dfdd7ecfc049afa0f63a6dd145436fb610b21131e190212f451e64.scope - libcontainer container 90ab4696b0dfdd7ecfc049afa0f63a6dd145436fb610b21131e190212f451e64. 
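The "cni plugin not initialized" / NetworkReady=false messages around these entries persist until a CNI network configuration exists on the host; writing that configuration and the CNI binaries (via the cni-net-dir and cni-bin-dir hostPath mounts listed earlier) is the job of the install-cni container that just started. A small sketch of the readiness condition, using the conventional default paths (/etc/cni/net.d, /opt/cni/bin) as assumptions, since the log only shows the volume names:

```go
// Sketch of the condition behind "cni plugin not initialized": the runtime reports
// the network as ready only once a CNI config file is present in the CNI config
// directory. Paths are assumed defaults, not taken from the log.
package main

import (
	"fmt"
	"path/filepath"
	"time"
)

func cniConfigPresent(confDir string) bool {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err == nil && len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	confDir := "/etc/cni/net.d" // assumed default; binaries would live under /opt/cni/bin
	for !cniConfigPresent(confDir) {
		fmt.Println("network not ready: no CNI config in", confDir)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("CNI config present; NetworkReady should flip to true")
}
```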
Jun 25 18:34:03.469521 kubelet[3193]: E0625 18:34:03.469494 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5d2z5" podUID="13f88024-04f7-4d51-8fb3-1cee9d125eda" Jun 25 18:34:03.502386 containerd[1711]: time="2024-06-25T18:34:03.502150739Z" level=info msg="StartContainer for \"90ab4696b0dfdd7ecfc049afa0f63a6dd145436fb610b21131e190212f451e64\" returns successfully" Jun 25 18:34:04.555345 systemd[1]: cri-containerd-90ab4696b0dfdd7ecfc049afa0f63a6dd145436fb610b21131e190212f451e64.scope: Deactivated successfully. Jun 25 18:34:04.575788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90ab4696b0dfdd7ecfc049afa0f63a6dd145436fb610b21131e190212f451e64-rootfs.mount: Deactivated successfully. Jun 25 18:34:04.609470 kubelet[3193]: I0625 18:34:04.608236 3193 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 18:34:04.893226 kubelet[3193]: I0625 18:34:04.627815 3193 topology_manager.go:215] "Topology Admit Handler" podUID="97580023-6067-45ba-b88a-4e958a1b396d" podNamespace="kube-system" podName="coredns-5dd5756b68-jg64n" Jun 25 18:34:04.893226 kubelet[3193]: I0625 18:34:04.628033 3193 topology_manager.go:215] "Topology Admit Handler" podUID="ca321c87-846f-4ad0-9416-c23d29b7c862" podNamespace="kube-system" podName="coredns-5dd5756b68-bwjhk" Jun 25 18:34:04.893226 kubelet[3193]: I0625 18:34:04.635337 3193 topology_manager.go:215] "Topology Admit Handler" podUID="05bc352c-3c9e-4252-a67d-1f6ac75aea93" podNamespace="calico-system" podName="calico-kube-controllers-bc465bdb8-f2lb6" Jun 25 18:34:04.893226 kubelet[3193]: I0625 18:34:04.670492 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05bc352c-3c9e-4252-a67d-1f6ac75aea93-tigera-ca-bundle\") pod \"calico-kube-controllers-bc465bdb8-f2lb6\" (UID: \"05bc352c-3c9e-4252-a67d-1f6ac75aea93\") " pod="calico-system/calico-kube-controllers-bc465bdb8-f2lb6" Jun 25 18:34:04.893226 kubelet[3193]: I0625 18:34:04.670530 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvd2z\" (UniqueName: \"kubernetes.io/projected/97580023-6067-45ba-b88a-4e958a1b396d-kube-api-access-nvd2z\") pod \"coredns-5dd5756b68-jg64n\" (UID: \"97580023-6067-45ba-b88a-4e958a1b396d\") " pod="kube-system/coredns-5dd5756b68-jg64n" Jun 25 18:34:04.893226 kubelet[3193]: I0625 18:34:04.670555 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsdp2\" (UniqueName: \"kubernetes.io/projected/ca321c87-846f-4ad0-9416-c23d29b7c862-kube-api-access-rsdp2\") pod \"coredns-5dd5756b68-bwjhk\" (UID: \"ca321c87-846f-4ad0-9416-c23d29b7c862\") " pod="kube-system/coredns-5dd5756b68-bwjhk" Jun 25 18:34:04.640746 systemd[1]: Created slice kubepods-burstable-pod97580023_6067_45ba_b88a_4e958a1b396d.slice - libcontainer container kubepods-burstable-pod97580023_6067_45ba_b88a_4e958a1b396d.slice. 
Jun 25 18:34:04.893539 kubelet[3193]: I0625 18:34:04.670644 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc6zh\" (UniqueName: \"kubernetes.io/projected/05bc352c-3c9e-4252-a67d-1f6ac75aea93-kube-api-access-sc6zh\") pod \"calico-kube-controllers-bc465bdb8-f2lb6\" (UID: \"05bc352c-3c9e-4252-a67d-1f6ac75aea93\") " pod="calico-system/calico-kube-controllers-bc465bdb8-f2lb6" Jun 25 18:34:04.893539 kubelet[3193]: I0625 18:34:04.670787 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca321c87-846f-4ad0-9416-c23d29b7c862-config-volume\") pod \"coredns-5dd5756b68-bwjhk\" (UID: \"ca321c87-846f-4ad0-9416-c23d29b7c862\") " pod="kube-system/coredns-5dd5756b68-bwjhk" Jun 25 18:34:04.893539 kubelet[3193]: I0625 18:34:04.670815 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97580023-6067-45ba-b88a-4e958a1b396d-config-volume\") pod \"coredns-5dd5756b68-jg64n\" (UID: \"97580023-6067-45ba-b88a-4e958a1b396d\") " pod="kube-system/coredns-5dd5756b68-jg64n" Jun 25 18:34:04.651112 systemd[1]: Created slice kubepods-burstable-podca321c87_846f_4ad0_9416_c23d29b7c862.slice - libcontainer container kubepods-burstable-podca321c87_846f_4ad0_9416_c23d29b7c862.slice. Jun 25 18:34:04.661549 systemd[1]: Created slice kubepods-besteffort-pod05bc352c_3c9e_4252_a67d_1f6ac75aea93.slice - libcontainer container kubepods-besteffort-pod05bc352c_3c9e_4252_a67d_1f6ac75aea93.slice. Jun 25 18:34:05.476422 systemd[1]: Created slice kubepods-besteffort-pod13f88024_04f7_4d51_8fb3_1cee9d125eda.slice - libcontainer container kubepods-besteffort-pod13f88024_04f7_4d51_8fb3_1cee9d125eda.slice. 
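Every RunPodSandbox attempt recorded next (csi-node-driver, calico-kube-controllers, and both coredns pods) fails the same way: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, a file the calico/node container writes once it is running, and because calico-node-j59f2 has not reached that point yet, the stat fails and kubelet keeps retrying the pods. A sketch of that check (illustrative, not Calico's actual code), with the path taken from the error messages below:

```go
// Sketch of the lookup behind the "stat /var/lib/calico/nodename: no such file or
// directory" failures that follow. Illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // path as it appears in the errors below

// calicoNodeName reads the node name the calico/node container maintains; until that
// container is running the file does not exist and pod networking cannot be set up.
func calicoNodeName() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// This is the condition containerd surfaces as "check that the calico/node
		// container is running and has mounted /var/lib/calico/".
		return "", fmt.Errorf("reading %s: %w", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodeName()
	if err != nil {
		fmt.Println("cannot set up pod network yet:", err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", name)
}
```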
Jun 25 18:34:05.478590 containerd[1711]: time="2024-06-25T18:34:05.478551392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5d2z5,Uid:13f88024-04f7-4d51-8fb3-1cee9d125eda,Namespace:calico-system,Attempt:0,}" Jun 25 18:34:05.740329 containerd[1711]: time="2024-06-25T18:34:05.739713646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc465bdb8-f2lb6,Uid:05bc352c-3c9e-4252-a67d-1f6ac75aea93,Namespace:calico-system,Attempt:0,}" Jun 25 18:34:05.740329 containerd[1711]: time="2024-06-25T18:34:05.739907086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jg64n,Uid:97580023-6067-45ba-b88a-4e958a1b396d,Namespace:kube-system,Attempt:0,}" Jun 25 18:34:05.743151 containerd[1711]: time="2024-06-25T18:34:05.743044803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bwjhk,Uid:ca321c87-846f-4ad0-9416-c23d29b7c862,Namespace:kube-system,Attempt:0,}" Jun 25 18:34:05.797429 containerd[1711]: time="2024-06-25T18:34:05.797291396Z" level=info msg="shim disconnected" id=90ab4696b0dfdd7ecfc049afa0f63a6dd145436fb610b21131e190212f451e64 namespace=k8s.io Jun 25 18:34:05.797429 containerd[1711]: time="2024-06-25T18:34:05.797342676Z" level=warning msg="cleaning up after shim disconnected" id=90ab4696b0dfdd7ecfc049afa0f63a6dd145436fb610b21131e190212f451e64 namespace=k8s.io Jun 25 18:34:05.797751 containerd[1711]: time="2024-06-25T18:34:05.797351436Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:34:05.982694 containerd[1711]: time="2024-06-25T18:34:05.982574156Z" level=error msg="Failed to destroy network for sandbox \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:05.984610 containerd[1711]: time="2024-06-25T18:34:05.984370435Z" level=error msg="encountered an error cleaning up failed sandbox \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:05.984610 containerd[1711]: time="2024-06-25T18:34:05.984433235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5d2z5,Uid:13f88024-04f7-4d51-8fb3-1cee9d125eda,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:05.985254 kubelet[3193]: E0625 18:34:05.984983 3193 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:05.985254 kubelet[3193]: E0625 18:34:05.985068 3193 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5d2z5" Jun 25 18:34:05.985254 kubelet[3193]: E0625 18:34:05.985092 3193 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5d2z5" Jun 25 18:34:05.985578 kubelet[3193]: E0625 18:34:05.985154 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5d2z5_calico-system(13f88024-04f7-4d51-8fb3-1cee9d125eda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5d2z5_calico-system(13f88024-04f7-4d51-8fb3-1cee9d125eda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5d2z5" podUID="13f88024-04f7-4d51-8fb3-1cee9d125eda" Jun 25 18:34:06.014971 containerd[1711]: time="2024-06-25T18:34:06.014917648Z" level=error msg="Failed to destroy network for sandbox \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.015571 containerd[1711]: time="2024-06-25T18:34:06.015462248Z" level=error msg="encountered an error cleaning up failed sandbox \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.015571 containerd[1711]: time="2024-06-25T18:34:06.015526688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc465bdb8-f2lb6,Uid:05bc352c-3c9e-4252-a67d-1f6ac75aea93,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.016089 kubelet[3193]: E0625 18:34:06.016053 3193 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.016208 kubelet[3193]: E0625 18:34:06.016121 3193 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc465bdb8-f2lb6" Jun 25 18:34:06.016208 kubelet[3193]: E0625 18:34:06.016144 3193 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc465bdb8-f2lb6" Jun 25 18:34:06.016325 kubelet[3193]: E0625 18:34:06.016220 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bc465bdb8-f2lb6_calico-system(05bc352c-3c9e-4252-a67d-1f6ac75aea93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bc465bdb8-f2lb6_calico-system(05bc352c-3c9e-4252-a67d-1f6ac75aea93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bc465bdb8-f2lb6" podUID="05bc352c-3c9e-4252-a67d-1f6ac75aea93" Jun 25 18:34:06.020470 containerd[1711]: time="2024-06-25T18:34:06.020439123Z" level=error msg="Failed to destroy network for sandbox \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.021055 containerd[1711]: time="2024-06-25T18:34:06.021028003Z" level=error msg="encountered an error cleaning up failed sandbox \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.021251 containerd[1711]: time="2024-06-25T18:34:06.021162403Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bwjhk,Uid:ca321c87-846f-4ad0-9416-c23d29b7c862,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.021555 kubelet[3193]: E0625 18:34:06.021524 3193 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jun 25 18:34:06.021618 kubelet[3193]: E0625 18:34:06.021580 3193 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-bwjhk" Jun 25 18:34:06.021618 kubelet[3193]: E0625 18:34:06.021600 3193 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-bwjhk" Jun 25 18:34:06.021712 kubelet[3193]: E0625 18:34:06.021656 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-bwjhk_kube-system(ca321c87-846f-4ad0-9416-c23d29b7c862)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-bwjhk_kube-system(ca321c87-846f-4ad0-9416-c23d29b7c862)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-bwjhk" podUID="ca321c87-846f-4ad0-9416-c23d29b7c862" Jun 25 18:34:06.023264 containerd[1711]: time="2024-06-25T18:34:06.022581962Z" level=error msg="Failed to destroy network for sandbox \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.024157 containerd[1711]: time="2024-06-25T18:34:06.023558041Z" level=error msg="encountered an error cleaning up failed sandbox \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.024157 containerd[1711]: time="2024-06-25T18:34:06.023646801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jg64n,Uid:97580023-6067-45ba-b88a-4e958a1b396d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.024390 kubelet[3193]: E0625 18:34:06.023850 3193 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.024390 kubelet[3193]: E0625 18:34:06.023885 3193 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-jg64n" Jun 25 18:34:06.024390 kubelet[3193]: E0625 18:34:06.023902 3193 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-jg64n" Jun 25 18:34:06.024469 kubelet[3193]: E0625 18:34:06.023941 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-jg64n_kube-system(97580023-6067-45ba-b88a-4e958a1b396d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-jg64n_kube-system(97580023-6067-45ba-b88a-4e958a1b396d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-jg64n" podUID="97580023-6067-45ba-b88a-4e958a1b396d" Jun 25 18:34:06.651463 kubelet[3193]: I0625 18:34:06.651432 3193 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:06.653350 containerd[1711]: time="2024-06-25T18:34:06.653207084Z" level=info msg="StopPodSandbox for \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\"" Jun 25 18:34:06.654196 containerd[1711]: time="2024-06-25T18:34:06.653805323Z" level=info msg="Ensure that sandbox 47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a in task-service has been cleanup successfully" Jun 25 18:34:06.655152 kubelet[3193]: I0625 18:34:06.654489 3193 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:06.655472 containerd[1711]: time="2024-06-25T18:34:06.655331440Z" level=info msg="StopPodSandbox for \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\"" Jun 25 18:34:06.656547 kubelet[3193]: I0625 18:34:06.656530 3193 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:06.656776 containerd[1711]: time="2024-06-25T18:34:06.656727197Z" level=info msg="Ensure that sandbox a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0 in task-service has been cleanup successfully" Jun 25 18:34:06.658432 containerd[1711]: time="2024-06-25T18:34:06.658022115Z" level=info msg="StopPodSandbox for \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\"" Jun 25 18:34:06.658432 containerd[1711]: 
time="2024-06-25T18:34:06.658207594Z" level=info msg="Ensure that sandbox 8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e in task-service has been cleanup successfully" Jun 25 18:34:06.667750 containerd[1711]: time="2024-06-25T18:34:06.667209096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 18:34:06.669655 kubelet[3193]: I0625 18:34:06.669522 3193 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:06.673898 containerd[1711]: time="2024-06-25T18:34:06.673541483Z" level=info msg="StopPodSandbox for \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\"" Jun 25 18:34:06.673898 containerd[1711]: time="2024-06-25T18:34:06.673715683Z" level=info msg="Ensure that sandbox 534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31 in task-service has been cleanup successfully" Jun 25 18:34:06.731373 containerd[1711]: time="2024-06-25T18:34:06.731321205Z" level=error msg="StopPodSandbox for \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\" failed" error="failed to destroy network for sandbox \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.731632 kubelet[3193]: E0625 18:34:06.731581 3193 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:06.731632 kubelet[3193]: E0625 18:34:06.731631 3193 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31"} Jun 25 18:34:06.731713 kubelet[3193]: E0625 18:34:06.731667 3193 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca321c87-846f-4ad0-9416-c23d29b7c862\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:34:06.731713 kubelet[3193]: E0625 18:34:06.731694 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca321c87-846f-4ad0-9416-c23d29b7c862\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-bwjhk" podUID="ca321c87-846f-4ad0-9416-c23d29b7c862" Jun 25 18:34:06.732263 containerd[1711]: time="2024-06-25T18:34:06.732054883Z" level=error msg="StopPodSandbox for 
\"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\" failed" error="failed to destroy network for sandbox \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.732486 kubelet[3193]: E0625 18:34:06.732455 3193 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:06.732486 kubelet[3193]: E0625 18:34:06.732479 3193 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a"} Jun 25 18:34:06.732744 kubelet[3193]: E0625 18:34:06.732508 3193 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"13f88024-04f7-4d51-8fb3-1cee9d125eda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:34:06.732744 kubelet[3193]: E0625 18:34:06.732530 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"13f88024-04f7-4d51-8fb3-1cee9d125eda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5d2z5" podUID="13f88024-04f7-4d51-8fb3-1cee9d125eda" Jun 25 18:34:06.735590 containerd[1711]: time="2024-06-25T18:34:06.735533636Z" level=error msg="StopPodSandbox for \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\" failed" error="failed to destroy network for sandbox \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.735903 kubelet[3193]: E0625 18:34:06.735887 3193 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:06.736058 kubelet[3193]: E0625 18:34:06.735952 3193 kuberuntime_manager.go:1380] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0"} Jun 25 18:34:06.736058 kubelet[3193]: E0625 18:34:06.735983 3193 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97580023-6067-45ba-b88a-4e958a1b396d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:34:06.736206 kubelet[3193]: E0625 18:34:06.736147 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97580023-6067-45ba-b88a-4e958a1b396d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-jg64n" podUID="97580023-6067-45ba-b88a-4e958a1b396d" Jun 25 18:34:06.737326 containerd[1711]: time="2024-06-25T18:34:06.737294113Z" level=error msg="StopPodSandbox for \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\" failed" error="failed to destroy network for sandbox \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:34:06.737671 kubelet[3193]: E0625 18:34:06.737545 3193 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:06.737671 kubelet[3193]: E0625 18:34:06.737596 3193 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e"} Jun 25 18:34:06.737671 kubelet[3193]: E0625 18:34:06.737624 3193 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"05bc352c-3c9e-4252-a67d-1f6ac75aea93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:34:06.737671 kubelet[3193]: E0625 18:34:06.737654 3193 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"05bc352c-3c9e-4252-a67d-1f6ac75aea93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bc465bdb8-f2lb6" podUID="05bc352c-3c9e-4252-a67d-1f6ac75aea93" Jun 25 18:34:06.872635 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0-shm.mount: Deactivated successfully. Jun 25 18:34:06.872719 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e-shm.mount: Deactivated successfully. Jun 25 18:34:06.872769 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a-shm.mount: Deactivated successfully. Jun 25 18:34:11.667993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052095334.mount: Deactivated successfully. Jun 25 18:34:11.903414 containerd[1711]: time="2024-06-25T18:34:11.903353089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:11.906961 containerd[1711]: time="2024-06-25T18:34:11.906828123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jun 25 18:34:11.912962 containerd[1711]: time="2024-06-25T18:34:11.911819233Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:11.917577 containerd[1711]: time="2024-06-25T18:34:11.916778984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:11.917577 containerd[1711]: time="2024-06-25T18:34:11.917444983Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 5.250088247s" Jun 25 18:34:11.917577 containerd[1711]: time="2024-06-25T18:34:11.917474543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jun 25 18:34:11.931291 containerd[1711]: time="2024-06-25T18:34:11.931193677Z" level=info msg="CreateContainer within sandbox \"18f84b66b8777053da9072bd1ec204d805baa27c8355c3ff3048afb530038f83\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 18:34:11.965578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount510574091.mount: Deactivated successfully. 
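The repeated sandbox failures above all stem from one missing file: the Calico CNI plugin reads /var/lib/calico/nodename, which the calico/node container writes once it is up, and every ADD and DELETE fails with "stat /var/lib/calico/nodename: no such file or directory" until that happens. Below is a minimal sketch of that lookup, written from the error text in the log rather than from Calico's source, so the function name and messages are illustrative only.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodename mirrors the check the log keeps tripping over: the CNI plugin
    // needs the node name that calico/node writes to /var/lib/calico/nodename.
    func nodename() (string, error) {
        const path = "/var/lib/calico/nodename" // path taken from the log messages
        data, err := os.ReadFile(path)
        if err != nil {
            // This is the condition behind every "no such file or directory"
            // sandbox error above: the file simply is not there yet.
            return "", fmt.Errorf("%s not readable; is calico/node running and is /var/lib/calico/ mounted? %w", path, err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := nodename()
        if err != nil {
            fmt.Println("CNI ADD/DELETE would fail here:", err)
            return
        }
        fmt.Println("node name:", name)
    }

Once calico-node starts a few seconds later (see the image pull and StartContainer entries that follow), the file exists and the same teardown calls succeed.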
Jun 25 18:34:11.978625 containerd[1711]: time="2024-06-25T18:34:11.978579147Z" level=info msg="CreateContainer within sandbox \"18f84b66b8777053da9072bd1ec204d805baa27c8355c3ff3048afb530038f83\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6c3d0b5589a9302f14d1284d76ab393c199a0e048acaf1aa320f11727edf026e\"" Jun 25 18:34:11.980705 containerd[1711]: time="2024-06-25T18:34:11.979113146Z" level=info msg="StartContainer for \"6c3d0b5589a9302f14d1284d76ab393c199a0e048acaf1aa320f11727edf026e\"" Jun 25 18:34:12.004351 systemd[1]: Started cri-containerd-6c3d0b5589a9302f14d1284d76ab393c199a0e048acaf1aa320f11727edf026e.scope - libcontainer container 6c3d0b5589a9302f14d1284d76ab393c199a0e048acaf1aa320f11727edf026e. Jun 25 18:34:12.035689 containerd[1711]: time="2024-06-25T18:34:12.035601438Z" level=info msg="StartContainer for \"6c3d0b5589a9302f14d1284d76ab393c199a0e048acaf1aa320f11727edf026e\" returns successfully" Jun 25 18:34:12.448247 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 18:34:12.448356 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 18:34:14.132734 systemd-networkd[1472]: vxlan.calico: Link UP Jun 25 18:34:14.132742 systemd-networkd[1472]: vxlan.calico: Gained carrier Jun 25 18:34:15.457292 systemd-networkd[1472]: vxlan.calico: Gained IPv6LL Jun 25 18:34:18.471931 containerd[1711]: time="2024-06-25T18:34:18.471166978Z" level=info msg="StopPodSandbox for \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\"" Jun 25 18:34:18.521139 kubelet[3193]: I0625 18:34:18.520612 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-j59f2" podStartSLOduration=8.237254586 podCreationTimestamp="2024-06-25 18:33:59 +0000 UTC" firstStartedPulling="2024-06-25 18:34:00.634524483 +0000 UTC m=+28.567022608" lastFinishedPulling="2024-06-25 18:34:11.917836902 +0000 UTC m=+39.850335027" observedRunningTime="2024-06-25 18:34:12.716353586 +0000 UTC m=+40.648851711" watchObservedRunningTime="2024-06-25 18:34:18.520567005 +0000 UTC m=+46.453065210" Jun 25 18:34:18.552711 containerd[1711]: 2024-06-25 18:34:18.518 [INFO][4713] k8s.go 608: Cleaning up netns ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:18.552711 containerd[1711]: 2024-06-25 18:34:18.519 [INFO][4713] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" iface="eth0" netns="/var/run/netns/cni-30e8983f-20d2-7121-3b44-3bb0a481c1bf" Jun 25 18:34:18.552711 containerd[1711]: 2024-06-25 18:34:18.520 [INFO][4713] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" iface="eth0" netns="/var/run/netns/cni-30e8983f-20d2-7121-3b44-3bb0a481c1bf" Jun 25 18:34:18.552711 containerd[1711]: 2024-06-25 18:34:18.520 [INFO][4713] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" iface="eth0" netns="/var/run/netns/cni-30e8983f-20d2-7121-3b44-3bb0a481c1bf" Jun 25 18:34:18.552711 containerd[1711]: 2024-06-25 18:34:18.520 [INFO][4713] k8s.go 615: Releasing IP address(es) ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:18.552711 containerd[1711]: 2024-06-25 18:34:18.520 [INFO][4713] utils.go 188: Calico CNI releasing IP address ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:18.552711 containerd[1711]: 2024-06-25 18:34:18.539 [INFO][4719] ipam_plugin.go 411: Releasing address using handleID ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" HandleID="k8s-pod-network.47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Workload="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:18.552711 containerd[1711]: 2024-06-25 18:34:18.539 [INFO][4719] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:18.552711 containerd[1711]: 2024-06-25 18:34:18.539 [INFO][4719] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:34:18.552711 containerd[1711]: 2024-06-25 18:34:18.547 [WARNING][4719] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" HandleID="k8s-pod-network.47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Workload="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:18.552711 containerd[1711]: 2024-06-25 18:34:18.548 [INFO][4719] ipam_plugin.go 439: Releasing address using workloadID ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" HandleID="k8s-pod-network.47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Workload="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:18.552711 containerd[1711]: 2024-06-25 18:34:18.549 [INFO][4719] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:34:18.552711 containerd[1711]: 2024-06-25 18:34:18.550 [INFO][4713] k8s.go 621: Teardown processing complete. ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:18.553660 containerd[1711]: time="2024-06-25T18:34:18.553338822Z" level=info msg="TearDown network for sandbox \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\" successfully" Jun 25 18:34:18.553660 containerd[1711]: time="2024-06-25T18:34:18.553368222Z" level=info msg="StopPodSandbox for \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\" returns successfully" Jun 25 18:34:18.555249 systemd[1]: run-netns-cni\x2d30e8983f\x2d20d2\x2d7121\x2d3b44\x2d3bb0a481c1bf.mount: Deactivated successfully. 
Jun 25 18:34:18.557220 containerd[1711]: time="2024-06-25T18:34:18.556018537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5d2z5,Uid:13f88024-04f7-4d51-8fb3-1cee9d125eda,Namespace:calico-system,Attempt:1,}" Jun 25 18:34:18.704736 systemd-networkd[1472]: cali142d14ec535: Link UP Jun 25 18:34:18.705305 systemd-networkd[1472]: cali142d14ec535: Gained carrier Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.637 [INFO][4728] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0 csi-node-driver- calico-system 13f88024-04f7-4d51-8fb3-1cee9d125eda 761 0 2024-06-25 18:33:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4012.0.0-a-71b05979e1 csi-node-driver-5d2z5 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali142d14ec535 [] []}} ContainerID="6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" Namespace="calico-system" Pod="csi-node-driver-5d2z5" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-" Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.637 [INFO][4728] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" Namespace="calico-system" Pod="csi-node-driver-5d2z5" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.662 [INFO][4737] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" HandleID="k8s-pod-network.6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" Workload="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.673 [INFO][4737] ipam_plugin.go 264: Auto assigning IP ContainerID="6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" HandleID="k8s-pod-network.6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" Workload="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebd40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-a-71b05979e1", "pod":"csi-node-driver-5d2z5", "timestamp":"2024-06-25 18:34:18.662315376 +0000 UTC"}, Hostname:"ci-4012.0.0-a-71b05979e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.673 [INFO][4737] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.673 [INFO][4737] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
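The [INFO][4728]/[4737] entries above are the Calico plugin handling a CNI ADD for the recreated csi-node-driver sandbox (CmdAddK8s followed by an IPAM request). From the runtime's side such a call goes through the standard libcni API; the sketch below shows the shape of that invocation, assuming the conventional /etc/cni/net.d and /opt/cni/bin locations and using placeholder container ID and netns values rather than the ones in the log.

    package main

    import (
        "context"
        "log"

        "github.com/containernetworking/cni/libcni"
    )

    func main() {
        ctx := context.Background()

        // Conventional Calico conflist name and binary directory; adjust to the host.
        conf, err := libcni.ConfListFromFile("/etc/cni/net.d/10-calico.conflist")
        if err != nil {
            log.Fatal(err)
        }
        cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

        // Values of this shape are what the runtime passes for a sandbox;
        // the IDs here are placeholders, not the ones from the log.
        rt := &libcni.RuntimeConf{
            ContainerID: "example-sandbox-id",
            NetNS:       "/var/run/netns/example",
            IfName:      "eth0",
        }

        // ADD is the call that kept failing while /var/lib/calico/nodename
        // was missing and that succeeds in the entries above.
        result, err := cni.AddNetworkList(ctx, conf, rt)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("CNI ADD result: %v", result)
    }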
Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.673 [INFO][4737] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-71b05979e1' Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.675 [INFO][4737] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.679 [INFO][4737] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.683 [INFO][4737] ipam.go 489: Trying affinity for 192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.684 [INFO][4737] ipam.go 155: Attempting to load block cidr=192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.686 [INFO][4737] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.686 [INFO][4737] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.688 [INFO][4737] ipam.go 1685: Creating new handle: k8s-pod-network.6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6 Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.691 [INFO][4737] ipam.go 1203: Writing block in order to claim IPs block=192.168.117.128/26 handle="k8s-pod-network.6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.697 [INFO][4737] ipam.go 1216: Successfully claimed IPs: [192.168.117.129/26] block=192.168.117.128/26 handle="k8s-pod-network.6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.698 [INFO][4737] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.117.129/26] handle="k8s-pod-network.6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.698 [INFO][4737] ipam_plugin.go 373: Released host-wide IPAM lock. 
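The IPAM steps just logged — confirming affinity for block 192.168.117.128/26, then claiming 192.168.117.129 — come down to simple CIDR arithmetic: a /26 leaves 6 host bits, so the block holds 64 addresses, and the allocator hands them out from the node's affine block while holding the host-wide lock. The snippet below is not Calico's allocator, only a standalone illustration of that block math with net/netip; treating the block's network address as reserved is an assumption made so the walk lands on .129 as in the log.

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The block the log shows this node holding an affinity for.
        block := netip.MustParsePrefix("192.168.117.128/26")

        // 32 - 26 = 6 host bits -> 64 addresses in the block.
        size := 1 << (32 - block.Bits())
        fmt.Printf("block %s holds %d addresses\n", block, size)

        // Walk the block the way an allocator would consider candidates.
        // Assume the network address itself is reserved, so the first
        // assignable candidate is .129 -- the address claimed in the log.
        taken := map[netip.Addr]bool{block.Masked().Addr(): true}
        for a := block.Masked().Addr(); block.Contains(a); a = a.Next() {
            if !taken[a] {
                fmt.Println("first free candidate:", a) // 192.168.117.129
                break
            }
        }
    }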
Jun 25 18:34:18.729811 containerd[1711]: 2024-06-25 18:34:18.698 [INFO][4737] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.117.129/26] IPv6=[] ContainerID="6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" HandleID="k8s-pod-network.6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" Workload="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:18.730419 containerd[1711]: 2024-06-25 18:34:18.702 [INFO][4728] k8s.go 386: Populated endpoint ContainerID="6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" Namespace="calico-system" Pod="csi-node-driver-5d2z5" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13f88024-04f7-4d51-8fb3-1cee9d125eda", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"", Pod:"csi-node-driver-5d2z5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.117.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali142d14ec535", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:18.730419 containerd[1711]: 2024-06-25 18:34:18.702 [INFO][4728] k8s.go 387: Calico CNI using IPs: [192.168.117.129/32] ContainerID="6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" Namespace="calico-system" Pod="csi-node-driver-5d2z5" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:18.730419 containerd[1711]: 2024-06-25 18:34:18.702 [INFO][4728] dataplane_linux.go 68: Setting the host side veth name to cali142d14ec535 ContainerID="6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" Namespace="calico-system" Pod="csi-node-driver-5d2z5" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:18.730419 containerd[1711]: 2024-06-25 18:34:18.705 [INFO][4728] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" Namespace="calico-system" Pod="csi-node-driver-5d2z5" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:18.730419 containerd[1711]: 2024-06-25 18:34:18.706 [INFO][4728] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" Namespace="calico-system" Pod="csi-node-driver-5d2z5" 
WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13f88024-04f7-4d51-8fb3-1cee9d125eda", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6", Pod:"csi-node-driver-5d2z5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.117.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali142d14ec535", MAC:"f2:4a:76:de:0a:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:18.730419 containerd[1711]: 2024-06-25 18:34:18.719 [INFO][4728] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6" Namespace="calico-system" Pod="csi-node-driver-5d2z5" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:18.754844 containerd[1711]: time="2024-06-25T18:34:18.754744000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:18.754844 containerd[1711]: time="2024-06-25T18:34:18.754798280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:18.754844 containerd[1711]: time="2024-06-25T18:34:18.754811520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:18.754844 containerd[1711]: time="2024-06-25T18:34:18.754822040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:18.779326 systemd[1]: Started cri-containerd-6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6.scope - libcontainer container 6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6. 
Jun 25 18:34:18.799132 containerd[1711]: time="2024-06-25T18:34:18.799090516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5d2z5,Uid:13f88024-04f7-4d51-8fb3-1cee9d125eda,Namespace:calico-system,Attempt:1,} returns sandbox id \"6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6\"" Jun 25 18:34:18.801680 containerd[1711]: time="2024-06-25T18:34:18.801260192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:34:19.104047 kubelet[3193]: I0625 18:34:19.103939 3193 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:34:19.471689 containerd[1711]: time="2024-06-25T18:34:19.470453881Z" level=info msg="StopPodSandbox for \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\"" Jun 25 18:34:19.471689 containerd[1711]: time="2024-06-25T18:34:19.470577521Z" level=info msg="StopPodSandbox for \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\"" Jun 25 18:34:19.577200 containerd[1711]: 2024-06-25 18:34:19.531 [INFO][4873] k8s.go 608: Cleaning up netns ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:19.577200 containerd[1711]: 2024-06-25 18:34:19.531 [INFO][4873] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" iface="eth0" netns="/var/run/netns/cni-7a7ee664-b047-9e49-b682-9c29aae49bfb" Jun 25 18:34:19.577200 containerd[1711]: 2024-06-25 18:34:19.531 [INFO][4873] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" iface="eth0" netns="/var/run/netns/cni-7a7ee664-b047-9e49-b682-9c29aae49bfb" Jun 25 18:34:19.577200 containerd[1711]: 2024-06-25 18:34:19.531 [INFO][4873] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" iface="eth0" netns="/var/run/netns/cni-7a7ee664-b047-9e49-b682-9c29aae49bfb" Jun 25 18:34:19.577200 containerd[1711]: 2024-06-25 18:34:19.531 [INFO][4873] k8s.go 615: Releasing IP address(es) ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:19.577200 containerd[1711]: 2024-06-25 18:34:19.531 [INFO][4873] utils.go 188: Calico CNI releasing IP address ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:19.577200 containerd[1711]: 2024-06-25 18:34:19.558 [INFO][4886] ipam_plugin.go 411: Releasing address using handleID ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" HandleID="k8s-pod-network.534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:19.577200 containerd[1711]: 2024-06-25 18:34:19.558 [INFO][4886] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:19.577200 containerd[1711]: 2024-06-25 18:34:19.558 [INFO][4886] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:34:19.577200 containerd[1711]: 2024-06-25 18:34:19.569 [WARNING][4886] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" HandleID="k8s-pod-network.534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:19.577200 containerd[1711]: 2024-06-25 18:34:19.569 [INFO][4886] ipam_plugin.go 439: Releasing address using workloadID ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" HandleID="k8s-pod-network.534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:19.577200 containerd[1711]: 2024-06-25 18:34:19.571 [INFO][4886] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:34:19.577200 containerd[1711]: 2024-06-25 18:34:19.572 [INFO][4873] k8s.go 621: Teardown processing complete. ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:19.577739 containerd[1711]: time="2024-06-25T18:34:19.577712331Z" level=info msg="TearDown network for sandbox \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\" successfully" Jun 25 18:34:19.577764 containerd[1711]: time="2024-06-25T18:34:19.577744331Z" level=info msg="StopPodSandbox for \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\" returns successfully" Jun 25 18:34:19.578577 containerd[1711]: time="2024-06-25T18:34:19.578407690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bwjhk,Uid:ca321c87-846f-4ad0-9416-c23d29b7c862,Namespace:kube-system,Attempt:1,}" Jun 25 18:34:19.579142 systemd[1]: run-netns-cni\x2d7a7ee664\x2db047\x2d9e49\x2db682\x2d9c29aae49bfb.mount: Deactivated successfully. Jun 25 18:34:19.597926 containerd[1711]: 2024-06-25 18:34:19.539 [INFO][4874] k8s.go 608: Cleaning up netns ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:19.597926 containerd[1711]: 2024-06-25 18:34:19.539 [INFO][4874] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" iface="eth0" netns="/var/run/netns/cni-eb74197b-d5f9-1cb1-bb32-5d29815432fc" Jun 25 18:34:19.597926 containerd[1711]: 2024-06-25 18:34:19.539 [INFO][4874] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" iface="eth0" netns="/var/run/netns/cni-eb74197b-d5f9-1cb1-bb32-5d29815432fc" Jun 25 18:34:19.597926 containerd[1711]: 2024-06-25 18:34:19.541 [INFO][4874] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" iface="eth0" netns="/var/run/netns/cni-eb74197b-d5f9-1cb1-bb32-5d29815432fc" Jun 25 18:34:19.597926 containerd[1711]: 2024-06-25 18:34:19.541 [INFO][4874] k8s.go 615: Releasing IP address(es) ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:19.597926 containerd[1711]: 2024-06-25 18:34:19.542 [INFO][4874] utils.go 188: Calico CNI releasing IP address ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:19.597926 containerd[1711]: 2024-06-25 18:34:19.581 [INFO][4892] ipam_plugin.go 411: Releasing address using handleID ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" HandleID="k8s-pod-network.8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:19.597926 containerd[1711]: 2024-06-25 18:34:19.582 [INFO][4892] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:19.597926 containerd[1711]: 2024-06-25 18:34:19.582 [INFO][4892] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:34:19.597926 containerd[1711]: 2024-06-25 18:34:19.592 [WARNING][4892] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" HandleID="k8s-pod-network.8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:19.597926 containerd[1711]: 2024-06-25 18:34:19.592 [INFO][4892] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" HandleID="k8s-pod-network.8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:19.597926 containerd[1711]: 2024-06-25 18:34:19.594 [INFO][4892] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:34:19.597926 containerd[1711]: 2024-06-25 18:34:19.595 [INFO][4874] k8s.go 621: Teardown processing complete. ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:19.597926 containerd[1711]: time="2024-06-25T18:34:19.597811870Z" level=info msg="TearDown network for sandbox \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\" successfully" Jun 25 18:34:19.597926 containerd[1711]: time="2024-06-25T18:34:19.597838830Z" level=info msg="StopPodSandbox for \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\" returns successfully" Jun 25 18:34:19.601001 containerd[1711]: time="2024-06-25T18:34:19.600626627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc465bdb8-f2lb6,Uid:05bc352c-3c9e-4252-a67d-1f6ac75aea93,Namespace:calico-system,Attempt:1,}" Jun 25 18:34:19.601265 systemd[1]: run-netns-cni\x2deb74197b\x2dd5f9\x2d1cb1\x2dbb32\x2d5d29815432fc.mount: Deactivated successfully. 
Jun 25 18:34:19.761942 systemd-networkd[1472]: cali373ed6a9c80: Link UP Jun 25 18:34:19.762851 systemd-networkd[1472]: cali373ed6a9c80: Gained carrier Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.665 [INFO][4900] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0 coredns-5dd5756b68- kube-system ca321c87-846f-4ad0-9416-c23d29b7c862 771 0 2024-06-25 18:33:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-a-71b05979e1 coredns-5dd5756b68-bwjhk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali373ed6a9c80 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" Namespace="kube-system" Pod="coredns-5dd5756b68-bwjhk" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-" Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.665 [INFO][4900] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" Namespace="kube-system" Pod="coredns-5dd5756b68-bwjhk" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.693 [INFO][4911] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" HandleID="k8s-pod-network.812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.713 [INFO][4911] ipam_plugin.go 264: Auto assigning IP ContainerID="812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" HandleID="k8s-pod-network.812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c700), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-a-71b05979e1", "pod":"coredns-5dd5756b68-bwjhk", "timestamp":"2024-06-25 18:34:19.693499732 +0000 UTC"}, Hostname:"ci-4012.0.0-a-71b05979e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.713 [INFO][4911] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.714 [INFO][4911] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.714 [INFO][4911] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-71b05979e1' Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.716 [INFO][4911] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.720 [INFO][4911] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.728 [INFO][4911] ipam.go 489: Trying affinity for 192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.730 [INFO][4911] ipam.go 155: Attempting to load block cidr=192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.732 [INFO][4911] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.732 [INFO][4911] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.741 [INFO][4911] ipam.go 1685: Creating new handle: k8s-pod-network.812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410 Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.746 [INFO][4911] ipam.go 1203: Writing block in order to claim IPs block=192.168.117.128/26 handle="k8s-pod-network.812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.751 [INFO][4911] ipam.go 1216: Successfully claimed IPs: [192.168.117.130/26] block=192.168.117.128/26 handle="k8s-pod-network.812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.751 [INFO][4911] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.117.130/26] handle="k8s-pod-network.812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.751 [INFO][4911] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:34:19.778156 containerd[1711]: 2024-06-25 18:34:19.751 [INFO][4911] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.117.130/26] IPv6=[] ContainerID="812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" HandleID="k8s-pod-network.812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:19.779157 containerd[1711]: 2024-06-25 18:34:19.755 [INFO][4900] k8s.go 386: Populated endpoint ContainerID="812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" Namespace="kube-system" Pod="coredns-5dd5756b68-bwjhk" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ca321c87-846f-4ad0-9416-c23d29b7c862", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"", Pod:"coredns-5dd5756b68-bwjhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali373ed6a9c80", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:19.779157 containerd[1711]: 2024-06-25 18:34:19.756 [INFO][4900] k8s.go 387: Calico CNI using IPs: [192.168.117.130/32] ContainerID="812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" Namespace="kube-system" Pod="coredns-5dd5756b68-bwjhk" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:19.779157 containerd[1711]: 2024-06-25 18:34:19.756 [INFO][4900] dataplane_linux.go 68: Setting the host side veth name to cali373ed6a9c80 ContainerID="812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" Namespace="kube-system" Pod="coredns-5dd5756b68-bwjhk" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:19.779157 containerd[1711]: 2024-06-25 18:34:19.763 [INFO][4900] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" Namespace="kube-system" Pod="coredns-5dd5756b68-bwjhk" 
WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:19.779157 containerd[1711]: 2024-06-25 18:34:19.763 [INFO][4900] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" Namespace="kube-system" Pod="coredns-5dd5756b68-bwjhk" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ca321c87-846f-4ad0-9416-c23d29b7c862", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410", Pod:"coredns-5dd5756b68-bwjhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali373ed6a9c80", MAC:"72:b0:08:26:bf:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:19.779157 containerd[1711]: 2024-06-25 18:34:19.775 [INFO][4900] k8s.go 500: Wrote updated endpoint to datastore ContainerID="812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410" Namespace="kube-system" Pod="coredns-5dd5756b68-bwjhk" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:19.824400 containerd[1711]: time="2024-06-25T18:34:19.824235038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:19.826652 containerd[1711]: time="2024-06-25T18:34:19.825181877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:19.826652 containerd[1711]: time="2024-06-25T18:34:19.825233837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:19.826652 containerd[1711]: time="2024-06-25T18:34:19.825274117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:19.846623 systemd[1]: Started cri-containerd-812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410.scope - libcontainer container 812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410. Jun 25 18:34:19.861860 systemd-networkd[1472]: calia52f8094c9d: Link UP Jun 25 18:34:19.863284 systemd-networkd[1472]: calia52f8094c9d: Gained carrier Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.749 [INFO][4917] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0 calico-kube-controllers-bc465bdb8- calico-system 05bc352c-3c9e-4252-a67d-1f6ac75aea93 772 0 2024-06-25 18:33:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bc465bdb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4012.0.0-a-71b05979e1 calico-kube-controllers-bc465bdb8-f2lb6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia52f8094c9d [] []}} ContainerID="f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" Namespace="calico-system" Pod="calico-kube-controllers-bc465bdb8-f2lb6" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-" Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.749 [INFO][4917] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" Namespace="calico-system" Pod="calico-kube-controllers-bc465bdb8-f2lb6" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.800 [INFO][4933] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" HandleID="k8s-pod-network.f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.812 [INFO][4933] ipam_plugin.go 264: Auto assigning IP ContainerID="f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" HandleID="k8s-pod-network.f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c2c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.0.0-a-71b05979e1", "pod":"calico-kube-controllers-bc465bdb8-f2lb6", "timestamp":"2024-06-25 18:34:19.800128662 +0000 UTC"}, Hostname:"ci-4012.0.0-a-71b05979e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.812 [INFO][4933] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.812 [INFO][4933] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
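In the WorkloadEndpoint dumps for the coredns pod the ports are printed as Go hex literals (Port:0x35 and Port:0x23c1); those are simply 53 and 9153, the DNS and metrics ports listed in decimal earlier in the same entry. A two-line check of the conversion:

    package main

    import "fmt"

    func main() {
        // Hex values copied from the WorkloadEndpointPort dump in the log.
        for _, p := range []int{0x35, 0x23c1} {
            fmt.Printf("0x%x = %d\n", p, p) // 0x35 = 53, 0x23c1 = 9153
        }
    }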
Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.812 [INFO][4933] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-71b05979e1' Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.814 [INFO][4933] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.818 [INFO][4933] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.824 [INFO][4933] ipam.go 489: Trying affinity for 192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.828 [INFO][4933] ipam.go 155: Attempting to load block cidr=192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.831 [INFO][4933] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.831 [INFO][4933] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.833 [INFO][4933] ipam.go 1685: Creating new handle: k8s-pod-network.f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.844 [INFO][4933] ipam.go 1203: Writing block in order to claim IPs block=192.168.117.128/26 handle="k8s-pod-network.f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.855 [INFO][4933] ipam.go 1216: Successfully claimed IPs: [192.168.117.131/26] block=192.168.117.128/26 handle="k8s-pod-network.f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.855 [INFO][4933] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.117.131/26] handle="k8s-pod-network.f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.855 [INFO][4933] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:34:19.885122 containerd[1711]: 2024-06-25 18:34:19.855 [INFO][4933] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.117.131/26] IPv6=[] ContainerID="f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" HandleID="k8s-pod-network.f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:19.885727 containerd[1711]: 2024-06-25 18:34:19.857 [INFO][4917] k8s.go 386: Populated endpoint ContainerID="f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" Namespace="calico-system" Pod="calico-kube-controllers-bc465bdb8-f2lb6" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0", GenerateName:"calico-kube-controllers-bc465bdb8-", Namespace:"calico-system", SelfLink:"", UID:"05bc352c-3c9e-4252-a67d-1f6ac75aea93", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc465bdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"", Pod:"calico-kube-controllers-bc465bdb8-f2lb6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia52f8094c9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:19.885727 containerd[1711]: 2024-06-25 18:34:19.858 [INFO][4917] k8s.go 387: Calico CNI using IPs: [192.168.117.131/32] ContainerID="f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" Namespace="calico-system" Pod="calico-kube-controllers-bc465bdb8-f2lb6" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:19.885727 containerd[1711]: 2024-06-25 18:34:19.858 [INFO][4917] dataplane_linux.go 68: Setting the host side veth name to calia52f8094c9d ContainerID="f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" Namespace="calico-system" Pod="calico-kube-controllers-bc465bdb8-f2lb6" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:19.885727 containerd[1711]: 2024-06-25 18:34:19.865 [INFO][4917] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" Namespace="calico-system" Pod="calico-kube-controllers-bc465bdb8-f2lb6" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:19.885727 containerd[1711]: 2024-06-25 18:34:19.866 [INFO][4917] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" Namespace="calico-system" Pod="calico-kube-controllers-bc465bdb8-f2lb6" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0", GenerateName:"calico-kube-controllers-bc465bdb8-", Namespace:"calico-system", SelfLink:"", UID:"05bc352c-3c9e-4252-a67d-1f6ac75aea93", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc465bdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d", Pod:"calico-kube-controllers-bc465bdb8-f2lb6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia52f8094c9d", MAC:"4e:ec:20:23:28:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:19.885727 containerd[1711]: 2024-06-25 18:34:19.877 [INFO][4917] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d" Namespace="calico-system" Pod="calico-kube-controllers-bc465bdb8-f2lb6" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:19.917048 containerd[1711]: time="2024-06-25T18:34:19.917001582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bwjhk,Uid:ca321c87-846f-4ad0-9416-c23d29b7c862,Namespace:kube-system,Attempt:1,} returns sandbox id \"812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410\"" Jun 25 18:34:19.921323 containerd[1711]: time="2024-06-25T18:34:19.921281738Z" level=info msg="CreateContainer within sandbox \"812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:34:19.961774 containerd[1711]: time="2024-06-25T18:34:19.961461977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:19.961774 containerd[1711]: time="2024-06-25T18:34:19.961529776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:19.961774 containerd[1711]: time="2024-06-25T18:34:19.961547576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:19.961774 containerd[1711]: time="2024-06-25T18:34:19.961561176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:19.982365 systemd[1]: Started cri-containerd-f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d.scope - libcontainer container f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d. Jun 25 18:34:19.991184 containerd[1711]: time="2024-06-25T18:34:19.990819466Z" level=info msg="CreateContainer within sandbox \"812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17616a8b8d04444b1f8b382cb538fbf2ed812f4e3fee8ec6bff772c6ca54f3a0\"" Jun 25 18:34:19.993562 containerd[1711]: time="2024-06-25T18:34:19.993501424Z" level=info msg="StartContainer for \"17616a8b8d04444b1f8b382cb538fbf2ed812f4e3fee8ec6bff772c6ca54f3a0\"" Jun 25 18:34:20.029749 systemd[1]: Started cri-containerd-17616a8b8d04444b1f8b382cb538fbf2ed812f4e3fee8ec6bff772c6ca54f3a0.scope - libcontainer container 17616a8b8d04444b1f8b382cb538fbf2ed812f4e3fee8ec6bff772c6ca54f3a0. Jun 25 18:34:20.049723 containerd[1711]: time="2024-06-25T18:34:20.049381246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc465bdb8-f2lb6,Uid:05bc352c-3c9e-4252-a67d-1f6ac75aea93,Namespace:calico-system,Attempt:1,} returns sandbox id \"f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d\"" Jun 25 18:34:20.093230 containerd[1711]: time="2024-06-25T18:34:20.092897881Z" level=info msg="StartContainer for \"17616a8b8d04444b1f8b382cb538fbf2ed812f4e3fee8ec6bff772c6ca54f3a0\" returns successfully" Jun 25 18:34:20.165940 containerd[1711]: time="2024-06-25T18:34:20.165890006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:20.168681 containerd[1711]: time="2024-06-25T18:34:20.168647604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jun 25 18:34:20.173991 containerd[1711]: time="2024-06-25T18:34:20.173958558Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:20.178688 containerd[1711]: time="2024-06-25T18:34:20.178639273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:20.179670 containerd[1711]: time="2024-06-25T18:34:20.179278993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.377774442s" Jun 25 18:34:20.179670 containerd[1711]: time="2024-06-25T18:34:20.179316513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jun 25 18:34:20.179861 containerd[1711]: time="2024-06-25T18:34:20.179825952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 
25 18:34:20.183284 containerd[1711]: time="2024-06-25T18:34:20.183257189Z" level=info msg="CreateContainer within sandbox \"6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:34:20.228812 containerd[1711]: time="2024-06-25T18:34:20.228725662Z" level=info msg="CreateContainer within sandbox \"6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d209e4d64e7475acdc620512fb3841d386e1d528b64e52b47ba2df10e058dc0d\"" Jun 25 18:34:20.232211 containerd[1711]: time="2024-06-25T18:34:20.229540661Z" level=info msg="StartContainer for \"d209e4d64e7475acdc620512fb3841d386e1d528b64e52b47ba2df10e058dc0d\"" Jun 25 18:34:20.255348 systemd[1]: Started cri-containerd-d209e4d64e7475acdc620512fb3841d386e1d528b64e52b47ba2df10e058dc0d.scope - libcontainer container d209e4d64e7475acdc620512fb3841d386e1d528b64e52b47ba2df10e058dc0d. Jun 25 18:34:20.284511 containerd[1711]: time="2024-06-25T18:34:20.284390205Z" level=info msg="StartContainer for \"d209e4d64e7475acdc620512fb3841d386e1d528b64e52b47ba2df10e058dc0d\" returns successfully" Jun 25 18:34:20.513377 systemd-networkd[1472]: cali142d14ec535: Gained IPv6LL Jun 25 18:34:20.719693 kubelet[3193]: I0625 18:34:20.719581 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-bwjhk" podStartSLOduration=34.719547077 podCreationTimestamp="2024-06-25 18:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:34:20.719149838 +0000 UTC m=+48.651647963" watchObservedRunningTime="2024-06-25 18:34:20.719547077 +0000 UTC m=+48.652045202" Jun 25 18:34:21.153477 systemd-networkd[1472]: calia52f8094c9d: Gained IPv6LL Jun 25 18:34:21.153761 systemd-networkd[1472]: cali373ed6a9c80: Gained IPv6LL Jun 25 18:34:21.470944 containerd[1711]: time="2024-06-25T18:34:21.470688346Z" level=info msg="StopPodSandbox for \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\"" Jun 25 18:34:21.546052 containerd[1711]: 2024-06-25 18:34:21.513 [INFO][5139] k8s.go 608: Cleaning up netns ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:21.546052 containerd[1711]: 2024-06-25 18:34:21.515 [INFO][5139] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" iface="eth0" netns="/var/run/netns/cni-981adbec-72db-6640-3c41-8058539818e4" Jun 25 18:34:21.546052 containerd[1711]: 2024-06-25 18:34:21.515 [INFO][5139] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" iface="eth0" netns="/var/run/netns/cni-981adbec-72db-6640-3c41-8058539818e4" Jun 25 18:34:21.546052 containerd[1711]: 2024-06-25 18:34:21.515 [INFO][5139] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" iface="eth0" netns="/var/run/netns/cni-981adbec-72db-6640-3c41-8058539818e4" Jun 25 18:34:21.546052 containerd[1711]: 2024-06-25 18:34:21.515 [INFO][5139] k8s.go 615: Releasing IP address(es) ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:21.546052 containerd[1711]: 2024-06-25 18:34:21.515 [INFO][5139] utils.go 188: Calico CNI releasing IP address ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:21.546052 containerd[1711]: 2024-06-25 18:34:21.533 [INFO][5146] ipam_plugin.go 411: Releasing address using handleID ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" HandleID="k8s-pod-network.a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:21.546052 containerd[1711]: 2024-06-25 18:34:21.533 [INFO][5146] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:21.546052 containerd[1711]: 2024-06-25 18:34:21.533 [INFO][5146] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:34:21.546052 containerd[1711]: 2024-06-25 18:34:21.542 [WARNING][5146] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" HandleID="k8s-pod-network.a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:21.546052 containerd[1711]: 2024-06-25 18:34:21.542 [INFO][5146] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" HandleID="k8s-pod-network.a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:21.546052 containerd[1711]: 2024-06-25 18:34:21.543 [INFO][5146] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:34:21.546052 containerd[1711]: 2024-06-25 18:34:21.544 [INFO][5139] k8s.go 621: Teardown processing complete. ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:21.548832 containerd[1711]: time="2024-06-25T18:34:21.548791745Z" level=info msg="TearDown network for sandbox \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\" successfully" Jun 25 18:34:21.548832 containerd[1711]: time="2024-06-25T18:34:21.548827225Z" level=info msg="StopPodSandbox for \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\" returns successfully" Jun 25 18:34:21.549835 containerd[1711]: time="2024-06-25T18:34:21.549766344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jg64n,Uid:97580023-6067-45ba-b88a-4e958a1b396d,Namespace:kube-system,Attempt:1,}" Jun 25 18:34:21.549979 systemd[1]: run-netns-cni\x2d981adbec\x2d72db\x2d6640\x2d3c41\x2d8058539818e4.mount: Deactivated successfully. 
Jun 25 18:34:21.707921 systemd-networkd[1472]: cali23a4b6e7778: Link UP Jun 25 18:34:21.708135 systemd-networkd[1472]: cali23a4b6e7778: Gained carrier Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.620 [INFO][5156] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0 coredns-5dd5756b68- kube-system 97580023-6067-45ba-b88a-4e958a1b396d 803 0 2024-06-25 18:33:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.0.0-a-71b05979e1 coredns-5dd5756b68-jg64n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali23a4b6e7778 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" Namespace="kube-system" Pod="coredns-5dd5756b68-jg64n" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-" Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.620 [INFO][5156] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" Namespace="kube-system" Pod="coredns-5dd5756b68-jg64n" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.646 [INFO][5163] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" HandleID="k8s-pod-network.a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.657 [INFO][5163] ipam_plugin.go 264: Auto assigning IP ContainerID="a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" HandleID="k8s-pod-network.a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ce4e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.0.0-a-71b05979e1", "pod":"coredns-5dd5756b68-jg64n", "timestamp":"2024-06-25 18:34:21.646491365 +0000 UTC"}, Hostname:"ci-4012.0.0-a-71b05979e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.657 [INFO][5163] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.657 [INFO][5163] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.657 [INFO][5163] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-71b05979e1' Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.659 [INFO][5163] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.663 [INFO][5163] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.667 [INFO][5163] ipam.go 489: Trying affinity for 192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.668 [INFO][5163] ipam.go 155: Attempting to load block cidr=192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.670 [INFO][5163] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.671 [INFO][5163] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.672 [INFO][5163] ipam.go 1685: Creating new handle: k8s-pod-network.a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131 Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.679 [INFO][5163] ipam.go 1203: Writing block in order to claim IPs block=192.168.117.128/26 handle="k8s-pod-network.a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.696 [INFO][5163] ipam.go 1216: Successfully claimed IPs: [192.168.117.132/26] block=192.168.117.128/26 handle="k8s-pod-network.a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.696 [INFO][5163] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.117.132/26] handle="k8s-pod-network.a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.696 [INFO][5163] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:34:21.748269 containerd[1711]: 2024-06-25 18:34:21.696 [INFO][5163] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.117.132/26] IPv6=[] ContainerID="a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" HandleID="k8s-pod-network.a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:21.748894 containerd[1711]: 2024-06-25 18:34:21.700 [INFO][5156] k8s.go 386: Populated endpoint ContainerID="a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" Namespace="kube-system" Pod="coredns-5dd5756b68-jg64n" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"97580023-6067-45ba-b88a-4e958a1b396d", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"", Pod:"coredns-5dd5756b68-jg64n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23a4b6e7778", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:21.748894 containerd[1711]: 2024-06-25 18:34:21.700 [INFO][5156] k8s.go 387: Calico CNI using IPs: [192.168.117.132/32] ContainerID="a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" Namespace="kube-system" Pod="coredns-5dd5756b68-jg64n" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:21.748894 containerd[1711]: 2024-06-25 18:34:21.700 [INFO][5156] dataplane_linux.go 68: Setting the host side veth name to cali23a4b6e7778 ContainerID="a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" Namespace="kube-system" Pod="coredns-5dd5756b68-jg64n" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:21.748894 containerd[1711]: 2024-06-25 18:34:21.705 [INFO][5156] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" Namespace="kube-system" Pod="coredns-5dd5756b68-jg64n" 
WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:21.748894 containerd[1711]: 2024-06-25 18:34:21.706 [INFO][5156] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" Namespace="kube-system" Pod="coredns-5dd5756b68-jg64n" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"97580023-6067-45ba-b88a-4e958a1b396d", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131", Pod:"coredns-5dd5756b68-jg64n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23a4b6e7778", MAC:"86:e8:95:86:33:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:21.748894 containerd[1711]: 2024-06-25 18:34:21.745 [INFO][5156] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131" Namespace="kube-system" Pod="coredns-5dd5756b68-jg64n" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:21.939623 containerd[1711]: time="2024-06-25T18:34:21.938752905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:21.939623 containerd[1711]: time="2024-06-25T18:34:21.938816864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:21.939623 containerd[1711]: time="2024-06-25T18:34:21.938833504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:21.939623 containerd[1711]: time="2024-06-25T18:34:21.938865784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:21.967354 systemd[1]: Started cri-containerd-a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131.scope - libcontainer container a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131. Jun 25 18:34:22.002653 containerd[1711]: time="2024-06-25T18:34:22.002545999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jg64n,Uid:97580023-6067-45ba-b88a-4e958a1b396d,Namespace:kube-system,Attempt:1,} returns sandbox id \"a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131\"" Jun 25 18:34:22.008796 containerd[1711]: time="2024-06-25T18:34:22.008348913Z" level=info msg="CreateContainer within sandbox \"a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:34:22.063462 containerd[1711]: time="2024-06-25T18:34:22.063411736Z" level=info msg="CreateContainer within sandbox \"a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c13a26195a8fe0e2c1a96ed774ddcfb987fb62e37913592061da99175080aa00\"" Jun 25 18:34:22.065278 containerd[1711]: time="2024-06-25T18:34:22.065204015Z" level=info msg="StartContainer for \"c13a26195a8fe0e2c1a96ed774ddcfb987fb62e37913592061da99175080aa00\"" Jun 25 18:34:22.104477 systemd[1]: Started cri-containerd-c13a26195a8fe0e2c1a96ed774ddcfb987fb62e37913592061da99175080aa00.scope - libcontainer container c13a26195a8fe0e2c1a96ed774ddcfb987fb62e37913592061da99175080aa00. Jun 25 18:34:22.142851 containerd[1711]: time="2024-06-25T18:34:22.142804535Z" level=info msg="StartContainer for \"c13a26195a8fe0e2c1a96ed774ddcfb987fb62e37913592061da99175080aa00\" returns successfully" Jun 25 18:34:22.422370 containerd[1711]: time="2024-06-25T18:34:22.422322488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:22.424968 containerd[1711]: time="2024-06-25T18:34:22.424937205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jun 25 18:34:22.430768 containerd[1711]: time="2024-06-25T18:34:22.430739479Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:22.435428 containerd[1711]: time="2024-06-25T18:34:22.435380754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:22.436279 containerd[1711]: time="2024-06-25T18:34:22.436140433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 2.256276641s" Jun 25 18:34:22.436279 containerd[1711]: time="2024-06-25T18:34:22.436186633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jun 25 18:34:22.439303 
containerd[1711]: time="2024-06-25T18:34:22.438516511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:34:22.446487 containerd[1711]: time="2024-06-25T18:34:22.445998023Z" level=info msg="CreateContainer within sandbox \"f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 18:34:22.489856 containerd[1711]: time="2024-06-25T18:34:22.489811538Z" level=info msg="CreateContainer within sandbox \"f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c06994f6211baa73de71dbbbec15b27c28efed594e4a9150bd5f7153b6375f06\"" Jun 25 18:34:22.490723 containerd[1711]: time="2024-06-25T18:34:22.490694377Z" level=info msg="StartContainer for \"c06994f6211baa73de71dbbbec15b27c28efed594e4a9150bd5f7153b6375f06\"" Jun 25 18:34:22.512477 systemd[1]: Started cri-containerd-c06994f6211baa73de71dbbbec15b27c28efed594e4a9150bd5f7153b6375f06.scope - libcontainer container c06994f6211baa73de71dbbbec15b27c28efed594e4a9150bd5f7153b6375f06. Jun 25 18:34:22.545409 containerd[1711]: time="2024-06-25T18:34:22.545213681Z" level=info msg="StartContainer for \"c06994f6211baa73de71dbbbec15b27c28efed594e4a9150bd5f7153b6375f06\" returns successfully" Jun 25 18:34:22.755664 kubelet[3193]: I0625 18:34:22.755614 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-bc465bdb8-f2lb6" podStartSLOduration=26.372544834 podCreationTimestamp="2024-06-25 18:33:54 +0000 UTC" firstStartedPulling="2024-06-25 18:34:20.053885842 +0000 UTC m=+47.986383967" lastFinishedPulling="2024-06-25 18:34:22.436802393 +0000 UTC m=+50.369300478" observedRunningTime="2024-06-25 18:34:22.753791587 +0000 UTC m=+50.686289752" watchObservedRunningTime="2024-06-25 18:34:22.755461345 +0000 UTC m=+50.687959510" Jun 25 18:34:22.783290 systemd[1]: run-containerd-runc-k8s.io-c06994f6211baa73de71dbbbec15b27c28efed594e4a9150bd5f7153b6375f06-runc.RCZE8p.mount: Deactivated successfully. 
Jun 25 18:34:22.798130 kubelet[3193]: I0625 18:34:22.797782 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-jg64n" podStartSLOduration=36.796366263 podCreationTimestamp="2024-06-25 18:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:34:22.795314744 +0000 UTC m=+50.727812869" watchObservedRunningTime="2024-06-25 18:34:22.796366263 +0000 UTC m=+50.728864388" Jun 25 18:34:23.137359 systemd-networkd[1472]: cali23a4b6e7778: Gained IPv6LL Jun 25 18:34:24.051068 containerd[1711]: time="2024-06-25T18:34:24.050411761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:24.053349 containerd[1711]: time="2024-06-25T18:34:24.053319836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jun 25 18:34:24.059667 containerd[1711]: time="2024-06-25T18:34:24.058757105Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:24.064835 containerd[1711]: time="2024-06-25T18:34:24.064807534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:24.065671 containerd[1711]: time="2024-06-25T18:34:24.065346213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.625998383s" Jun 25 18:34:24.066016 containerd[1711]: time="2024-06-25T18:34:24.065995691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jun 25 18:34:24.075586 containerd[1711]: time="2024-06-25T18:34:24.075550793Z" level=info msg="CreateContainer within sandbox \"6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:34:24.123430 containerd[1711]: time="2024-06-25T18:34:24.123389782Z" level=info msg="CreateContainer within sandbox \"6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2abc130e10d966a7c09015c333ebbd0cb20b1ab9b2a72f9d2a16a8991efe7a99\"" Jun 25 18:34:24.125569 containerd[1711]: time="2024-06-25T18:34:24.125443018Z" level=info msg="StartContainer for \"2abc130e10d966a7c09015c333ebbd0cb20b1ab9b2a72f9d2a16a8991efe7a99\"" Jun 25 18:34:24.160347 systemd[1]: Started cri-containerd-2abc130e10d966a7c09015c333ebbd0cb20b1ab9b2a72f9d2a16a8991efe7a99.scope - libcontainer container 2abc130e10d966a7c09015c333ebbd0cb20b1ab9b2a72f9d2a16a8991efe7a99. 
Jun 25 18:34:24.192082 containerd[1711]: time="2024-06-25T18:34:24.192048210Z" level=info msg="StartContainer for \"2abc130e10d966a7c09015c333ebbd0cb20b1ab9b2a72f9d2a16a8991efe7a99\" returns successfully" Jun 25 18:34:24.629524 kubelet[3193]: I0625 18:34:24.629279 3193 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:34:24.629524 kubelet[3193]: I0625 18:34:24.629313 3193 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:34:24.761914 kubelet[3193]: I0625 18:34:24.761869 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-5d2z5" podStartSLOduration=26.493460149 podCreationTimestamp="2024-06-25 18:33:53 +0000 UTC" firstStartedPulling="2024-06-25 18:34:18.800473993 +0000 UTC m=+46.732972078" lastFinishedPulling="2024-06-25 18:34:24.068847086 +0000 UTC m=+52.001345211" observedRunningTime="2024-06-25 18:34:24.761554722 +0000 UTC m=+52.694052847" watchObservedRunningTime="2024-06-25 18:34:24.761833282 +0000 UTC m=+52.694331367" Jun 25 18:34:27.382322 kubelet[3193]: I0625 18:34:27.382267 3193 topology_manager.go:215] "Topology Admit Handler" podUID="79b03f4d-b70d-48ea-9f4e-c5de68ea45b0" podNamespace="calico-apiserver" podName="calico-apiserver-558c4d9b74-f69gg" Jun 25 18:34:27.391575 kubelet[3193]: W0625 18:34:27.391443 3193 reflector.go:535] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4012.0.0-a-71b05979e1" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4012.0.0-a-71b05979e1' and this object Jun 25 18:34:27.391575 kubelet[3193]: E0625 18:34:27.391497 3193 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4012.0.0-a-71b05979e1" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4012.0.0-a-71b05979e1' and this object Jun 25 18:34:27.391575 kubelet[3193]: W0625 18:34:27.391539 3193 reflector.go:535] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4012.0.0-a-71b05979e1" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4012.0.0-a-71b05979e1' and this object Jun 25 18:34:27.391575 kubelet[3193]: E0625 18:34:27.391548 3193 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4012.0.0-a-71b05979e1" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4012.0.0-a-71b05979e1' and this object Jun 25 18:34:27.393063 systemd[1]: Created slice kubepods-besteffort-pod79b03f4d_b70d_48ea_9f4e_c5de68ea45b0.slice - libcontainer container kubepods-besteffort-pod79b03f4d_b70d_48ea_9f4e_c5de68ea45b0.slice. 
Jun 25 18:34:27.415736 kubelet[3193]: I0625 18:34:27.415679 3193 topology_manager.go:215] "Topology Admit Handler" podUID="0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292" podNamespace="calico-apiserver" podName="calico-apiserver-558c4d9b74-tr64b" Jun 25 18:34:27.416326 kubelet[3193]: I0625 18:34:27.416215 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/79b03f4d-b70d-48ea-9f4e-c5de68ea45b0-calico-apiserver-certs\") pod \"calico-apiserver-558c4d9b74-f69gg\" (UID: \"79b03f4d-b70d-48ea-9f4e-c5de68ea45b0\") " pod="calico-apiserver/calico-apiserver-558c4d9b74-f69gg" Jun 25 18:34:27.416326 kubelet[3193]: I0625 18:34:27.416262 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j6zx\" (UniqueName: \"kubernetes.io/projected/79b03f4d-b70d-48ea-9f4e-c5de68ea45b0-kube-api-access-2j6zx\") pod \"calico-apiserver-558c4d9b74-f69gg\" (UID: \"79b03f4d-b70d-48ea-9f4e-c5de68ea45b0\") " pod="calico-apiserver/calico-apiserver-558c4d9b74-f69gg" Jun 25 18:34:27.425983 systemd[1]: Created slice kubepods-besteffort-pod0f3fc3a4_36e9_4ff6_8c40_4d9c7f7dd292.slice - libcontainer container kubepods-besteffort-pod0f3fc3a4_36e9_4ff6_8c40_4d9c7f7dd292.slice. Jun 25 18:34:27.516672 kubelet[3193]: I0625 18:34:27.516433 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292-calico-apiserver-certs\") pod \"calico-apiserver-558c4d9b74-tr64b\" (UID: \"0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292\") " pod="calico-apiserver/calico-apiserver-558c4d9b74-tr64b" Jun 25 18:34:27.516672 kubelet[3193]: I0625 18:34:27.516503 3193 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8czsp\" (UniqueName: \"kubernetes.io/projected/0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292-kube-api-access-8czsp\") pod \"calico-apiserver-558c4d9b74-tr64b\" (UID: \"0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292\") " pod="calico-apiserver/calico-apiserver-558c4d9b74-tr64b" Jun 25 18:34:28.237079 kubelet[3193]: E0625 18:34:28.237041 3193 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 18:34:28.237250 kubelet[3193]: E0625 18:34:28.237133 3193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b03f4d-b70d-48ea-9f4e-c5de68ea45b0-calico-apiserver-certs podName:79b03f4d-b70d-48ea-9f4e-c5de68ea45b0 nodeName:}" failed. No retries permitted until 2024-06-25 18:34:28.737110416 +0000 UTC m=+56.669608501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/79b03f4d-b70d-48ea-9f4e-c5de68ea45b0-calico-apiserver-certs") pod "calico-apiserver-558c4d9b74-f69gg" (UID: "79b03f4d-b70d-48ea-9f4e-c5de68ea45b0") : secret "calico-apiserver-certs" not found Jun 25 18:34:28.238230 kubelet[3193]: E0625 18:34:28.238088 3193 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 18:34:28.238230 kubelet[3193]: E0625 18:34:28.238139 3193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292-calico-apiserver-certs podName:0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292 nodeName:}" failed. 
No retries permitted until 2024-06-25 18:34:28.738126454 +0000 UTC m=+56.670624579 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292-calico-apiserver-certs") pod "calico-apiserver-558c4d9b74-tr64b" (UID: "0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292") : secret "calico-apiserver-certs" not found Jun 25 18:34:28.525203 kubelet[3193]: E0625 18:34:28.523786 3193 projected.go:292] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jun 25 18:34:28.525203 kubelet[3193]: E0625 18:34:28.523822 3193 projected.go:198] Error preparing data for projected volume kube-api-access-2j6zx for pod calico-apiserver/calico-apiserver-558c4d9b74-f69gg: failed to sync configmap cache: timed out waiting for the condition Jun 25 18:34:28.525203 kubelet[3193]: E0625 18:34:28.523896 3193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/79b03f4d-b70d-48ea-9f4e-c5de68ea45b0-kube-api-access-2j6zx podName:79b03f4d-b70d-48ea-9f4e-c5de68ea45b0 nodeName:}" failed. No retries permitted until 2024-06-25 18:34:29.023868405 +0000 UTC m=+56.956366530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2j6zx" (UniqueName: "kubernetes.io/projected/79b03f4d-b70d-48ea-9f4e-c5de68ea45b0-kube-api-access-2j6zx") pod "calico-apiserver-558c4d9b74-f69gg" (UID: "79b03f4d-b70d-48ea-9f4e-c5de68ea45b0") : failed to sync configmap cache: timed out waiting for the condition Jun 25 18:34:28.625920 kubelet[3193]: E0625 18:34:28.625873 3193 projected.go:292] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jun 25 18:34:28.625920 kubelet[3193]: E0625 18:34:28.625920 3193 projected.go:198] Error preparing data for projected volume kube-api-access-8czsp for pod calico-apiserver/calico-apiserver-558c4d9b74-tr64b: failed to sync configmap cache: timed out waiting for the condition Jun 25 18:34:28.626161 kubelet[3193]: E0625 18:34:28.625989 3193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292-kube-api-access-8czsp podName:0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292 nodeName:}" failed. No retries permitted until 2024-06-25 18:34:29.125971657 +0000 UTC m=+57.058469782 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8czsp" (UniqueName: "kubernetes.io/projected/0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292-kube-api-access-8czsp") pod "calico-apiserver-558c4d9b74-tr64b" (UID: "0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292") : failed to sync configmap cache: timed out waiting for the condition Jun 25 18:34:29.198762 containerd[1711]: time="2024-06-25T18:34:29.198699077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558c4d9b74-f69gg,Uid:79b03f4d-b70d-48ea-9f4e-c5de68ea45b0,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:34:29.233788 containerd[1711]: time="2024-06-25T18:34:29.233332733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558c4d9b74-tr64b,Uid:0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:34:29.476879 systemd-networkd[1472]: cali0897d879571: Link UP Jun 25 18:34:29.478041 systemd-networkd[1472]: cali0897d879571: Gained carrier Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.290 [INFO][5393] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-eth0 calico-apiserver-558c4d9b74- calico-apiserver 79b03f4d-b70d-48ea-9f4e-c5de68ea45b0 887 0 2024-06-25 18:34:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:558c4d9b74 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-a-71b05979e1 calico-apiserver-558c4d9b74-f69gg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0897d879571 [] []}} ContainerID="d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-f69gg" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-" Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.291 [INFO][5393] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-f69gg" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-eth0" Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.370 [INFO][5414] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" HandleID="k8s-pod-network.d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-eth0" Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.392 [INFO][5414] ipam_plugin.go 264: Auto assigning IP ContainerID="d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" HandleID="k8s-pod-network.d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316eb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-a-71b05979e1", "pod":"calico-apiserver-558c4d9b74-f69gg", "timestamp":"2024-06-25 18:34:29.37022712 +0000 UTC"}, Hostname:"ci-4012.0.0-a-71b05979e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.392 [INFO][5414] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.392 [INFO][5414] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.392 [INFO][5414] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-71b05979e1' Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.397 [INFO][5414] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.406 [INFO][5414] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.423 [INFO][5414] ipam.go 489: Trying affinity for 192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.431 [INFO][5414] ipam.go 155: Attempting to load block cidr=192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.434 [INFO][5414] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.435 [INFO][5414] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.438 [INFO][5414] ipam.go 1685: Creating new handle: k8s-pod-network.d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949 Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.456 [INFO][5414] ipam.go 1203: Writing block in order to claim IPs block=192.168.117.128/26 handle="k8s-pod-network.d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.469 [INFO][5414] ipam.go 1216: Successfully claimed IPs: [192.168.117.133/26] block=192.168.117.128/26 handle="k8s-pod-network.d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.470 [INFO][5414] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.117.133/26] handle="k8s-pod-network.d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.471 [INFO][5414] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:34:29.501688 containerd[1711]: 2024-06-25 18:34:29.471 [INFO][5414] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.117.133/26] IPv6=[] ContainerID="d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" HandleID="k8s-pod-network.d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-eth0" Jun 25 18:34:29.503596 containerd[1711]: 2024-06-25 18:34:29.473 [INFO][5393] k8s.go 386: Populated endpoint ContainerID="d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-f69gg" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-eth0", GenerateName:"calico-apiserver-558c4d9b74-", Namespace:"calico-apiserver", SelfLink:"", UID:"79b03f4d-b70d-48ea-9f4e-c5de68ea45b0", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 34, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"558c4d9b74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"", Pod:"calico-apiserver-558c4d9b74-f69gg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0897d879571", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:29.503596 containerd[1711]: 2024-06-25 18:34:29.473 [INFO][5393] k8s.go 387: Calico CNI using IPs: [192.168.117.133/32] ContainerID="d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-f69gg" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-eth0" Jun 25 18:34:29.503596 containerd[1711]: 2024-06-25 18:34:29.474 [INFO][5393] dataplane_linux.go 68: Setting the host side veth name to cali0897d879571 ContainerID="d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-f69gg" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-eth0" Jun 25 18:34:29.503596 containerd[1711]: 2024-06-25 18:34:29.477 [INFO][5393] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-f69gg" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-eth0" Jun 25 18:34:29.503596 containerd[1711]: 2024-06-25 18:34:29.480 [INFO][5393] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-f69gg" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-eth0", GenerateName:"calico-apiserver-558c4d9b74-", Namespace:"calico-apiserver", SelfLink:"", UID:"79b03f4d-b70d-48ea-9f4e-c5de68ea45b0", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 34, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"558c4d9b74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949", Pod:"calico-apiserver-558c4d9b74-f69gg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0897d879571", MAC:"26:ba:e2:8c:6c:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:29.503596 containerd[1711]: 2024-06-25 18:34:29.493 [INFO][5393] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-f69gg" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--f69gg-eth0" Jun 25 18:34:29.534797 containerd[1711]: time="2024-06-25T18:34:29.534518096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:29.534797 containerd[1711]: time="2024-06-25T18:34:29.534578456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:29.534797 containerd[1711]: time="2024-06-25T18:34:29.534606936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:29.534797 containerd[1711]: time="2024-06-25T18:34:29.534621296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:29.564518 systemd[1]: Started cri-containerd-d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949.scope - libcontainer container d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949. 
Jun 25 18:34:29.568169 systemd-networkd[1472]: cali688dfd08ebf: Link UP Jun 25 18:34:29.569520 systemd-networkd[1472]: cali688dfd08ebf: Gained carrier Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.389 [INFO][5405] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-eth0 calico-apiserver-558c4d9b74- calico-apiserver 0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292 893 0 2024-06-25 18:34:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:558c4d9b74 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.0.0-a-71b05979e1 calico-apiserver-558c4d9b74-tr64b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali688dfd08ebf [] []}} ContainerID="87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-tr64b" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-" Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.389 [INFO][5405] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-tr64b" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-eth0" Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.444 [INFO][5422] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" HandleID="k8s-pod-network.87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-eth0" Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.481 [INFO][5422] ipam_plugin.go 264: Auto assigning IP ContainerID="87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" HandleID="k8s-pod-network.87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000263b20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.0.0-a-71b05979e1", "pod":"calico-apiserver-558c4d9b74-tr64b", "timestamp":"2024-06-25 18:34:29.444361383 +0000 UTC"}, Hostname:"ci-4012.0.0-a-71b05979e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.481 [INFO][5422] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.481 [INFO][5422] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.483 [INFO][5422] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.0.0-a-71b05979e1' Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.490 [INFO][5422] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.506 [INFO][5422] ipam.go 372: Looking up existing affinities for host host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.514 [INFO][5422] ipam.go 489: Trying affinity for 192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.517 [INFO][5422] ipam.go 155: Attempting to load block cidr=192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.521 [INFO][5422] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.117.128/26 host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.521 [INFO][5422] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.117.128/26 handle="k8s-pod-network.87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.528 [INFO][5422] ipam.go 1685: Creating new handle: k8s-pod-network.87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.537 [INFO][5422] ipam.go 1203: Writing block in order to claim IPs block=192.168.117.128/26 handle="k8s-pod-network.87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.545 [INFO][5422] ipam.go 1216: Successfully claimed IPs: [192.168.117.134/26] block=192.168.117.128/26 handle="k8s-pod-network.87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.545 [INFO][5422] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.117.134/26] handle="k8s-pod-network.87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" host="ci-4012.0.0-a-71b05979e1" Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.545 [INFO][5422] ipam_plugin.go 373: Released host-wide IPAM lock. 
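[Editor's note] The second apiserver pod (tr64b) runs through the same IPAM sequence and receives the next address in the block, 192.168.117.134/26. A /26 holds 64 addresses (192.168.117.128 through 192.168.117.191); the small check below is purely illustrative of that block arithmetic.

---- illustrative sketch (Go) ----
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.117.128/26")
	for _, s := range []string{"192.168.117.133", "192.168.117.134"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", s, block, block.Contains(addr)) // both true
	}
	// A /26 holds 2^(32-26) = 64 addresses: .128 through .191.
	fmt.Println("block size:", 1<<(32-block.Bits()))
}
---- end sketch ----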
Jun 25 18:34:29.599648 containerd[1711]: 2024-06-25 18:34:29.546 [INFO][5422] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.117.134/26] IPv6=[] ContainerID="87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" HandleID="k8s-pod-network.87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-eth0" Jun 25 18:34:29.601610 containerd[1711]: 2024-06-25 18:34:29.560 [INFO][5405] k8s.go 386: Populated endpoint ContainerID="87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-tr64b" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-eth0", GenerateName:"calico-apiserver-558c4d9b74-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 34, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"558c4d9b74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"", Pod:"calico-apiserver-558c4d9b74-tr64b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali688dfd08ebf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:29.601610 containerd[1711]: 2024-06-25 18:34:29.563 [INFO][5405] k8s.go 387: Calico CNI using IPs: [192.168.117.134/32] ContainerID="87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-tr64b" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-eth0" Jun 25 18:34:29.601610 containerd[1711]: 2024-06-25 18:34:29.563 [INFO][5405] dataplane_linux.go 68: Setting the host side veth name to cali688dfd08ebf ContainerID="87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-tr64b" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-eth0" Jun 25 18:34:29.601610 containerd[1711]: 2024-06-25 18:34:29.571 [INFO][5405] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-tr64b" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-eth0" Jun 25 18:34:29.601610 containerd[1711]: 2024-06-25 18:34:29.572 [INFO][5405] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-tr64b" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-eth0", GenerateName:"calico-apiserver-558c4d9b74-", Namespace:"calico-apiserver", SelfLink:"", UID:"0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 34, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"558c4d9b74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc", Pod:"calico-apiserver-558c4d9b74-tr64b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali688dfd08ebf", MAC:"c2:ce:35:76:7e:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:29.601610 containerd[1711]: 2024-06-25 18:34:29.594 [INFO][5405] k8s.go 500: Wrote updated endpoint to datastore ContainerID="87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc" Namespace="calico-apiserver" Pod="calico-apiserver-558c4d9b74-tr64b" WorkloadEndpoint="ci--4012.0.0--a--71b05979e1-k8s-calico--apiserver--558c4d9b74--tr64b-eth0" Jun 25 18:34:29.645725 containerd[1711]: time="2024-06-25T18:34:29.645493491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:34:29.645725 containerd[1711]: time="2024-06-25T18:34:29.645547371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:29.645725 containerd[1711]: time="2024-06-25T18:34:29.645565211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:34:29.645725 containerd[1711]: time="2024-06-25T18:34:29.645578611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:34:29.655464 containerd[1711]: time="2024-06-25T18:34:29.655018114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558c4d9b74-f69gg,Uid:79b03f4d-b70d-48ea-9f4e-c5de68ea45b0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949\"" Jun 25 18:34:29.656899 containerd[1711]: time="2024-06-25T18:34:29.656864310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:34:29.674449 systemd[1]: Started cri-containerd-87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc.scope - libcontainer container 87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc. Jun 25 18:34:29.717918 containerd[1711]: time="2024-06-25T18:34:29.717880597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558c4d9b74-tr64b,Uid:0f3fc3a4-36e9-4ff6-8c40-4d9c7f7dd292,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc\"" Jun 25 18:34:31.201513 systemd-networkd[1472]: cali0897d879571: Gained IPv6LL Jun 25 18:34:31.201922 systemd-networkd[1472]: cali688dfd08ebf: Gained IPv6LL Jun 25 18:34:31.783320 containerd[1711]: time="2024-06-25T18:34:31.783193698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:31.796935 containerd[1711]: time="2024-06-25T18:34:31.796883633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jun 25 18:34:31.802890 containerd[1711]: time="2024-06-25T18:34:31.802849342Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:31.808626 containerd[1711]: time="2024-06-25T18:34:31.808557211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:31.809446 containerd[1711]: time="2024-06-25T18:34:31.809302090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 2.151512542s" Jun 25 18:34:31.809446 containerd[1711]: time="2024-06-25T18:34:31.809332890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 18:34:31.810157 containerd[1711]: time="2024-06-25T18:34:31.810126128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:34:31.812478 containerd[1711]: time="2024-06-25T18:34:31.812439084Z" level=info msg="CreateContainer within sandbox \"d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:34:31.854268 containerd[1711]: time="2024-06-25T18:34:31.854152367Z" level=info msg="CreateContainer within sandbox \"d44809498c3736c40dac61b79c231331001e25991a7e764383524fb7636c3949\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7c9cec9b92a6b5a82e794193c4cada7bd4fd85f2fd166b0a98c0b797faaaaa72\"" Jun 25 18:34:31.855294 containerd[1711]: time="2024-06-25T18:34:31.855119645Z" level=info msg="StartContainer for \"7c9cec9b92a6b5a82e794193c4cada7bd4fd85f2fd166b0a98c0b797faaaaa72\"" Jun 25 18:34:31.888037 systemd[1]: run-containerd-runc-k8s.io-7c9cec9b92a6b5a82e794193c4cada7bd4fd85f2fd166b0a98c0b797faaaaa72-runc.Z6YR93.mount: Deactivated successfully. Jun 25 18:34:31.894372 systemd[1]: Started cri-containerd-7c9cec9b92a6b5a82e794193c4cada7bd4fd85f2fd166b0a98c0b797faaaaa72.scope - libcontainer container 7c9cec9b92a6b5a82e794193c4cada7bd4fd85f2fd166b0a98c0b797faaaaa72. Jun 25 18:34:31.931229 containerd[1711]: time="2024-06-25T18:34:31.931068064Z" level=info msg="StartContainer for \"7c9cec9b92a6b5a82e794193c4cada7bd4fd85f2fd166b0a98c0b797faaaaa72\" returns successfully" Jun 25 18:34:32.124854 containerd[1711]: time="2024-06-25T18:34:32.124717586Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:34:32.128862 containerd[1711]: time="2024-06-25T18:34:32.127277822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jun 25 18:34:32.130584 containerd[1711]: time="2024-06-25T18:34:32.130509176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 320.339808ms" Jun 25 18:34:32.130584 containerd[1711]: time="2024-06-25T18:34:32.130567055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 18:34:32.132897 containerd[1711]: time="2024-06-25T18:34:32.132832771Z" level=info msg="CreateContainer within sandbox \"87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:34:32.168672 containerd[1711]: time="2024-06-25T18:34:32.168612825Z" level=info msg="CreateContainer within sandbox \"87a2e1f124bc6936ee51137e27839f47d6b128dbf20c6b7d76b1d98cfdc0ebfc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5341f0d972772f2c3267739a1a6a9caff7b51411ac1a44a1b2df92086633c2cb\"" Jun 25 18:34:32.169585 containerd[1711]: time="2024-06-25T18:34:32.169543943Z" level=info msg="StartContainer for \"5341f0d972772f2c3267739a1a6a9caff7b51411ac1a44a1b2df92086633c2cb\"" Jun 25 18:34:32.196355 systemd[1]: Started cri-containerd-5341f0d972772f2c3267739a1a6a9caff7b51411ac1a44a1b2df92086633c2cb.scope - libcontainer container 5341f0d972772f2c3267739a1a6a9caff7b51411ac1a44a1b2df92086633c2cb. 
Jun 25 18:34:32.241567 containerd[1711]: time="2024-06-25T18:34:32.241517130Z" level=info msg="StartContainer for \"5341f0d972772f2c3267739a1a6a9caff7b51411ac1a44a1b2df92086633c2cb\" returns successfully" Jun 25 18:34:32.502335 containerd[1711]: time="2024-06-25T18:34:32.502208688Z" level=info msg="StopPodSandbox for \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\"" Jun 25 18:34:32.502455 containerd[1711]: time="2024-06-25T18:34:32.502306568Z" level=info msg="TearDown network for sandbox \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\" successfully" Jun 25 18:34:32.502455 containerd[1711]: time="2024-06-25T18:34:32.502363048Z" level=info msg="StopPodSandbox for \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\" returns successfully" Jun 25 18:34:32.503116 containerd[1711]: time="2024-06-25T18:34:32.503076367Z" level=info msg="RemovePodSandbox for \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\"" Jun 25 18:34:32.503206 containerd[1711]: time="2024-06-25T18:34:32.503115527Z" level=info msg="Forcibly stopping sandbox \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\"" Jun 25 18:34:32.503240 containerd[1711]: time="2024-06-25T18:34:32.503214926Z" level=info msg="TearDown network for sandbox \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\" successfully" Jun 25 18:34:32.518616 containerd[1711]: time="2024-06-25T18:34:32.518296538Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:34:32.518616 containerd[1711]: time="2024-06-25T18:34:32.518399138Z" level=info msg="RemovePodSandbox \"3490e03c268e58c9197138d19a5e2a6ef9dfb7ebb3401cb0cf45652e20c2a151\" returns successfully" Jun 25 18:34:32.519305 containerd[1711]: time="2024-06-25T18:34:32.518988217Z" level=info msg="StopPodSandbox for \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\"" Jun 25 18:34:32.651420 containerd[1711]: 2024-06-25 18:34:32.580 [WARNING][5632] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"97580023-6067-45ba-b88a-4e958a1b396d", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131", Pod:"coredns-5dd5756b68-jg64n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23a4b6e7778", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:32.651420 containerd[1711]: 2024-06-25 18:34:32.580 [INFO][5632] k8s.go 608: Cleaning up netns ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:32.651420 containerd[1711]: 2024-06-25 18:34:32.580 [INFO][5632] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" iface="eth0" netns="" Jun 25 18:34:32.651420 containerd[1711]: 2024-06-25 18:34:32.580 [INFO][5632] k8s.go 615: Releasing IP address(es) ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:32.651420 containerd[1711]: 2024-06-25 18:34:32.580 [INFO][5632] utils.go 188: Calico CNI releasing IP address ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:32.651420 containerd[1711]: 2024-06-25 18:34:32.627 [INFO][5638] ipam_plugin.go 411: Releasing address using handleID ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" HandleID="k8s-pod-network.a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:32.651420 containerd[1711]: 2024-06-25 18:34:32.628 [INFO][5638] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:32.651420 containerd[1711]: 2024-06-25 18:34:32.628 [INFO][5638] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:34:32.651420 containerd[1711]: 2024-06-25 18:34:32.642 [WARNING][5638] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" HandleID="k8s-pod-network.a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:32.651420 containerd[1711]: 2024-06-25 18:34:32.643 [INFO][5638] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" HandleID="k8s-pod-network.a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:32.651420 containerd[1711]: 2024-06-25 18:34:32.644 [INFO][5638] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:34:32.651420 containerd[1711]: 2024-06-25 18:34:32.647 [INFO][5632] k8s.go 621: Teardown processing complete. ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:32.651420 containerd[1711]: time="2024-06-25T18:34:32.651299693Z" level=info msg="TearDown network for sandbox \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\" successfully" Jun 25 18:34:32.651420 containerd[1711]: time="2024-06-25T18:34:32.651322372Z" level=info msg="StopPodSandbox for \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\" returns successfully" Jun 25 18:34:32.651945 containerd[1711]: time="2024-06-25T18:34:32.651742812Z" level=info msg="RemovePodSandbox for \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\"" Jun 25 18:34:32.651945 containerd[1711]: time="2024-06-25T18:34:32.651771332Z" level=info msg="Forcibly stopping sandbox \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\"" Jun 25 18:34:32.802012 containerd[1711]: 2024-06-25 18:34:32.717 [WARNING][5656] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"97580023-6067-45ba-b88a-4e958a1b396d", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"a69bb24fe5c21b9d86c1274a0526fd623c3239eba4776fe472dcefc8a4df0131", Pod:"coredns-5dd5756b68-jg64n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23a4b6e7778", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:32.802012 containerd[1711]: 2024-06-25 18:34:32.718 [INFO][5656] k8s.go 608: Cleaning up netns ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:32.802012 containerd[1711]: 2024-06-25 18:34:32.718 [INFO][5656] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" iface="eth0" netns="" Jun 25 18:34:32.802012 containerd[1711]: 2024-06-25 18:34:32.718 [INFO][5656] k8s.go 615: Releasing IP address(es) ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:32.802012 containerd[1711]: 2024-06-25 18:34:32.718 [INFO][5656] utils.go 188: Calico CNI releasing IP address ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:32.802012 containerd[1711]: 2024-06-25 18:34:32.768 [INFO][5662] ipam_plugin.go 411: Releasing address using handleID ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" HandleID="k8s-pod-network.a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:32.802012 containerd[1711]: 2024-06-25 18:34:32.768 [INFO][5662] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:32.802012 containerd[1711]: 2024-06-25 18:34:32.769 [INFO][5662] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:34:32.802012 containerd[1711]: 2024-06-25 18:34:32.788 [WARNING][5662] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" HandleID="k8s-pod-network.a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:32.802012 containerd[1711]: 2024-06-25 18:34:32.788 [INFO][5662] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" HandleID="k8s-pod-network.a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--jg64n-eth0" Jun 25 18:34:32.802012 containerd[1711]: 2024-06-25 18:34:32.798 [INFO][5662] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:34:32.802012 containerd[1711]: 2024-06-25 18:34:32.800 [INFO][5656] k8s.go 621: Teardown processing complete. ContainerID="a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0" Jun 25 18:34:32.804346 containerd[1711]: time="2024-06-25T18:34:32.803056372Z" level=info msg="TearDown network for sandbox \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\" successfully" Jun 25 18:34:32.814351 containerd[1711]: time="2024-06-25T18:34:32.813954392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:34:32.814351 containerd[1711]: time="2024-06-25T18:34:32.814063072Z" level=info msg="RemovePodSandbox \"a934f5584dba89a663c41c6e0e2d3d62254091dac83d28526b1636f5658ff2b0\" returns successfully" Jun 25 18:34:32.815282 containerd[1711]: time="2024-06-25T18:34:32.815247389Z" level=info msg="StopPodSandbox for \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\"" Jun 25 18:34:32.834965 kubelet[3193]: I0625 18:34:32.834908 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-558c4d9b74-f69gg" podStartSLOduration=3.681546736 podCreationTimestamp="2024-06-25 18:34:27 +0000 UTC" firstStartedPulling="2024-06-25 18:34:29.656616791 +0000 UTC m=+57.589114916" lastFinishedPulling="2024-06-25 18:34:31.809933408 +0000 UTC m=+59.742431533" observedRunningTime="2024-06-25 18:34:32.834418754 +0000 UTC m=+60.766916879" watchObservedRunningTime="2024-06-25 18:34:32.834863353 +0000 UTC m=+60.767361478" Jun 25 18:34:32.840302 kubelet[3193]: I0625 18:34:32.835002 3193 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-558c4d9b74-tr64b" podStartSLOduration=3.423644212 podCreationTimestamp="2024-06-25 18:34:27 +0000 UTC" firstStartedPulling="2024-06-25 18:34:29.719440794 +0000 UTC m=+57.651938879" lastFinishedPulling="2024-06-25 18:34:32.130781375 +0000 UTC m=+60.063279500" observedRunningTime="2024-06-25 18:34:32.807196004 +0000 UTC m=+60.739694129" watchObservedRunningTime="2024-06-25 18:34:32.834984833 +0000 UTC m=+60.767482958" Jun 25 18:34:32.961592 containerd[1711]: 2024-06-25 18:34:32.880 [WARNING][5681] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13f88024-04f7-4d51-8fb3-1cee9d125eda", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6", Pod:"csi-node-driver-5d2z5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.117.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali142d14ec535", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:32.961592 containerd[1711]: 2024-06-25 18:34:32.881 [INFO][5681] k8s.go 608: Cleaning up netns ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:32.961592 containerd[1711]: 2024-06-25 18:34:32.881 [INFO][5681] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" iface="eth0" netns="" Jun 25 18:34:32.961592 containerd[1711]: 2024-06-25 18:34:32.881 [INFO][5681] k8s.go 615: Releasing IP address(es) ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:32.961592 containerd[1711]: 2024-06-25 18:34:32.881 [INFO][5681] utils.go 188: Calico CNI releasing IP address ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:32.961592 containerd[1711]: 2024-06-25 18:34:32.937 [INFO][5688] ipam_plugin.go 411: Releasing address using handleID ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" HandleID="k8s-pod-network.47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Workload="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:32.961592 containerd[1711]: 2024-06-25 18:34:32.937 [INFO][5688] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:32.961592 containerd[1711]: 2024-06-25 18:34:32.938 [INFO][5688] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:34:32.961592 containerd[1711]: 2024-06-25 18:34:32.952 [WARNING][5688] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" HandleID="k8s-pod-network.47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Workload="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:32.961592 containerd[1711]: 2024-06-25 18:34:32.952 [INFO][5688] ipam_plugin.go 439: Releasing address using workloadID ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" HandleID="k8s-pod-network.47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Workload="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:32.961592 containerd[1711]: 2024-06-25 18:34:32.956 [INFO][5688] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:34:32.961592 containerd[1711]: 2024-06-25 18:34:32.958 [INFO][5681] k8s.go 621: Teardown processing complete. ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:32.962016 containerd[1711]: time="2024-06-25T18:34:32.961643919Z" level=info msg="TearDown network for sandbox \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\" successfully" Jun 25 18:34:32.962016 containerd[1711]: time="2024-06-25T18:34:32.961674519Z" level=info msg="StopPodSandbox for \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\" returns successfully" Jun 25 18:34:32.962477 containerd[1711]: time="2024-06-25T18:34:32.962449477Z" level=info msg="RemovePodSandbox for \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\"" Jun 25 18:34:32.962538 containerd[1711]: time="2024-06-25T18:34:32.962490037Z" level=info msg="Forcibly stopping sandbox \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\"" Jun 25 18:34:33.072839 containerd[1711]: 2024-06-25 18:34:33.015 [WARNING][5708] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"13f88024-04f7-4d51-8fb3-1cee9d125eda", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"6a38c90304dcd976ce5c556ae47c4921f341a133bc21563905deadab7c8e15e6", Pod:"csi-node-driver-5d2z5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.117.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali142d14ec535", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:33.072839 containerd[1711]: 2024-06-25 18:34:33.016 [INFO][5708] k8s.go 608: Cleaning up netns ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:33.072839 containerd[1711]: 2024-06-25 18:34:33.016 [INFO][5708] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" iface="eth0" netns="" Jun 25 18:34:33.072839 containerd[1711]: 2024-06-25 18:34:33.016 [INFO][5708] k8s.go 615: Releasing IP address(es) ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:33.072839 containerd[1711]: 2024-06-25 18:34:33.016 [INFO][5708] utils.go 188: Calico CNI releasing IP address ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:33.072839 containerd[1711]: 2024-06-25 18:34:33.051 [INFO][5715] ipam_plugin.go 411: Releasing address using handleID ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" HandleID="k8s-pod-network.47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Workload="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:33.072839 containerd[1711]: 2024-06-25 18:34:33.051 [INFO][5715] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:33.072839 containerd[1711]: 2024-06-25 18:34:33.051 [INFO][5715] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:34:33.072839 containerd[1711]: 2024-06-25 18:34:33.068 [WARNING][5715] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" HandleID="k8s-pod-network.47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Workload="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:33.072839 containerd[1711]: 2024-06-25 18:34:33.068 [INFO][5715] ipam_plugin.go 439: Releasing address using workloadID ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" HandleID="k8s-pod-network.47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Workload="ci--4012.0.0--a--71b05979e1-k8s-csi--node--driver--5d2z5-eth0" Jun 25 18:34:33.072839 containerd[1711]: 2024-06-25 18:34:33.070 [INFO][5715] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:34:33.072839 containerd[1711]: 2024-06-25 18:34:33.071 [INFO][5708] k8s.go 621: Teardown processing complete. ContainerID="47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a" Jun 25 18:34:33.074398 containerd[1711]: time="2024-06-25T18:34:33.074363950Z" level=info msg="TearDown network for sandbox \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\" successfully" Jun 25 18:34:33.083999 containerd[1711]: time="2024-06-25T18:34:33.083938852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:34:33.084349 containerd[1711]: time="2024-06-25T18:34:33.084274212Z" level=info msg="RemovePodSandbox \"47e0b7577e51fc4f840904c3ab2195338c43479a3d77e5284c4e97e7897b436a\" returns successfully" Jun 25 18:34:33.085812 containerd[1711]: time="2024-06-25T18:34:33.085778329Z" level=info msg="StopPodSandbox for \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\"" Jun 25 18:34:33.085934 containerd[1711]: time="2024-06-25T18:34:33.085877449Z" level=info msg="TearDown network for sandbox \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\" successfully" Jun 25 18:34:33.085934 containerd[1711]: time="2024-06-25T18:34:33.085927889Z" level=info msg="StopPodSandbox for \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\" returns successfully" Jun 25 18:34:33.086262 containerd[1711]: time="2024-06-25T18:34:33.086186128Z" level=info msg="RemovePodSandbox for \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\"" Jun 25 18:34:33.086262 containerd[1711]: time="2024-06-25T18:34:33.086207928Z" level=info msg="Forcibly stopping sandbox \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\"" Jun 25 18:34:33.086312 containerd[1711]: time="2024-06-25T18:34:33.086266728Z" level=info msg="TearDown network for sandbox \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\" successfully" Jun 25 18:34:33.094200 containerd[1711]: time="2024-06-25T18:34:33.094143234Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:34:33.094312 containerd[1711]: time="2024-06-25T18:34:33.094231833Z" level=info msg="RemovePodSandbox \"7f6af483f5e5e438d51716f07bd5ae981f6497d456d1266efe0997b07cf20cec\" returns successfully" Jun 25 18:34:33.095331 containerd[1711]: time="2024-06-25T18:34:33.095306431Z" level=info msg="StopPodSandbox for \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\"" Jun 25 18:34:33.201449 containerd[1711]: 2024-06-25 18:34:33.143 [WARNING][5733] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ca321c87-846f-4ad0-9416-c23d29b7c862", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410", Pod:"coredns-5dd5756b68-bwjhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali373ed6a9c80", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:33.201449 containerd[1711]: 2024-06-25 18:34:33.144 [INFO][5733] k8s.go 608: Cleaning up netns ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:33.201449 containerd[1711]: 2024-06-25 18:34:33.144 [INFO][5733] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" iface="eth0" netns="" Jun 25 18:34:33.201449 containerd[1711]: 2024-06-25 18:34:33.144 [INFO][5733] k8s.go 615: Releasing IP address(es) ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:33.201449 containerd[1711]: 2024-06-25 18:34:33.144 [INFO][5733] utils.go 188: Calico CNI releasing IP address ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:33.201449 containerd[1711]: 2024-06-25 18:34:33.184 [INFO][5739] ipam_plugin.go 411: Releasing address using handleID ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" HandleID="k8s-pod-network.534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:33.201449 containerd[1711]: 2024-06-25 18:34:33.185 [INFO][5739] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:33.201449 containerd[1711]: 2024-06-25 18:34:33.185 [INFO][5739] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:34:33.201449 containerd[1711]: 2024-06-25 18:34:33.194 [WARNING][5739] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" HandleID="k8s-pod-network.534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:33.201449 containerd[1711]: 2024-06-25 18:34:33.195 [INFO][5739] ipam_plugin.go 439: Releasing address using workloadID ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" HandleID="k8s-pod-network.534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:33.201449 containerd[1711]: 2024-06-25 18:34:33.196 [INFO][5739] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:34:33.201449 containerd[1711]: 2024-06-25 18:34:33.199 [INFO][5733] k8s.go 621: Teardown processing complete. ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:33.202751 containerd[1711]: time="2024-06-25T18:34:33.201633115Z" level=info msg="TearDown network for sandbox \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\" successfully" Jun 25 18:34:33.202751 containerd[1711]: time="2024-06-25T18:34:33.201664595Z" level=info msg="StopPodSandbox for \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\" returns successfully" Jun 25 18:34:33.202751 containerd[1711]: time="2024-06-25T18:34:33.202414633Z" level=info msg="RemovePodSandbox for \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\"" Jun 25 18:34:33.202751 containerd[1711]: time="2024-06-25T18:34:33.202450673Z" level=info msg="Forcibly stopping sandbox \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\"" Jun 25 18:34:33.299476 containerd[1711]: 2024-06-25 18:34:33.248 [WARNING][5757] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ca321c87-846f-4ad0-9416-c23d29b7c862", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"812631fbe3ee2a15df6d23a92231ddbb500a7b0877b1ec6e6c270ec458f05410", Pod:"coredns-5dd5756b68-bwjhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali373ed6a9c80", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:33.299476 containerd[1711]: 2024-06-25 18:34:33.248 [INFO][5757] k8s.go 608: Cleaning up netns ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:33.299476 containerd[1711]: 2024-06-25 18:34:33.248 [INFO][5757] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" iface="eth0" netns="" Jun 25 18:34:33.299476 containerd[1711]: 2024-06-25 18:34:33.248 [INFO][5757] k8s.go 615: Releasing IP address(es) ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:33.299476 containerd[1711]: 2024-06-25 18:34:33.249 [INFO][5757] utils.go 188: Calico CNI releasing IP address ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:33.299476 containerd[1711]: 2024-06-25 18:34:33.279 [INFO][5763] ipam_plugin.go 411: Releasing address using handleID ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" HandleID="k8s-pod-network.534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:33.299476 containerd[1711]: 2024-06-25 18:34:33.281 [INFO][5763] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:33.299476 containerd[1711]: 2024-06-25 18:34:33.281 [INFO][5763] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:34:33.299476 containerd[1711]: 2024-06-25 18:34:33.292 [WARNING][5763] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" HandleID="k8s-pod-network.534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:33.299476 containerd[1711]: 2024-06-25 18:34:33.293 [INFO][5763] ipam_plugin.go 439: Releasing address using workloadID ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" HandleID="k8s-pod-network.534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Workload="ci--4012.0.0--a--71b05979e1-k8s-coredns--5dd5756b68--bwjhk-eth0" Jun 25 18:34:33.299476 containerd[1711]: 2024-06-25 18:34:33.294 [INFO][5763] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:34:33.299476 containerd[1711]: 2024-06-25 18:34:33.297 [INFO][5757] k8s.go 621: Teardown processing complete. ContainerID="534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31" Jun 25 18:34:33.299960 containerd[1711]: time="2024-06-25T18:34:33.299533174Z" level=info msg="TearDown network for sandbox \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\" successfully" Jun 25 18:34:33.307846 containerd[1711]: time="2024-06-25T18:34:33.307768119Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:34:33.309335 containerd[1711]: time="2024-06-25T18:34:33.307869598Z" level=info msg="RemovePodSandbox \"534b15ee50efd12c4200226ee18724db51337d78a2950d3abec9d931f12e8a31\" returns successfully" Jun 25 18:34:33.309335 containerd[1711]: time="2024-06-25T18:34:33.308328597Z" level=info msg="StopPodSandbox for \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\"" Jun 25 18:34:33.420453 containerd[1711]: 2024-06-25 18:34:33.360 [WARNING][5781] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0", GenerateName:"calico-kube-controllers-bc465bdb8-", Namespace:"calico-system", SelfLink:"", UID:"05bc352c-3c9e-4252-a67d-1f6ac75aea93", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc465bdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d", Pod:"calico-kube-controllers-bc465bdb8-f2lb6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia52f8094c9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:33.420453 containerd[1711]: 2024-06-25 18:34:33.360 [INFO][5781] k8s.go 608: Cleaning up netns ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:33.420453 containerd[1711]: 2024-06-25 18:34:33.360 [INFO][5781] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" iface="eth0" netns="" Jun 25 18:34:33.420453 containerd[1711]: 2024-06-25 18:34:33.360 [INFO][5781] k8s.go 615: Releasing IP address(es) ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:33.420453 containerd[1711]: 2024-06-25 18:34:33.360 [INFO][5781] utils.go 188: Calico CNI releasing IP address ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:33.420453 containerd[1711]: 2024-06-25 18:34:33.389 [INFO][5787] ipam_plugin.go 411: Releasing address using handleID ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" HandleID="k8s-pod-network.8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:33.420453 containerd[1711]: 2024-06-25 18:34:33.390 [INFO][5787] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:33.420453 containerd[1711]: 2024-06-25 18:34:33.390 [INFO][5787] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:34:33.420453 containerd[1711]: 2024-06-25 18:34:33.411 [WARNING][5787] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" HandleID="k8s-pod-network.8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:33.420453 containerd[1711]: 2024-06-25 18:34:33.412 [INFO][5787] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" HandleID="k8s-pod-network.8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:33.420453 containerd[1711]: 2024-06-25 18:34:33.415 [INFO][5787] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:34:33.420453 containerd[1711]: 2024-06-25 18:34:33.417 [INFO][5781] k8s.go 621: Teardown processing complete. ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:33.420453 containerd[1711]: time="2024-06-25T18:34:33.420350750Z" level=info msg="TearDown network for sandbox \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\" successfully" Jun 25 18:34:33.420453 containerd[1711]: time="2024-06-25T18:34:33.420376150Z" level=info msg="StopPodSandbox for \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\" returns successfully" Jun 25 18:34:33.424378 containerd[1711]: time="2024-06-25T18:34:33.423167065Z" level=info msg="RemovePodSandbox for \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\"" Jun 25 18:34:33.424378 containerd[1711]: time="2024-06-25T18:34:33.423302105Z" level=info msg="Forcibly stopping sandbox \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\"" Jun 25 18:34:33.513449 containerd[1711]: 2024-06-25 18:34:33.468 [WARNING][5805] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0", GenerateName:"calico-kube-controllers-bc465bdb8-", Namespace:"calico-system", SelfLink:"", UID:"05bc352c-3c9e-4252-a67d-1f6ac75aea93", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 33, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc465bdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.0.0-a-71b05979e1", ContainerID:"f7c3780120ef82da4433669d0e3ae971d40f7d76699ea4ec1426d19ff03dad1d", Pod:"calico-kube-controllers-bc465bdb8-f2lb6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia52f8094c9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:34:33.513449 containerd[1711]: 2024-06-25 18:34:33.468 [INFO][5805] k8s.go 608: Cleaning up netns ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:33.513449 containerd[1711]: 2024-06-25 18:34:33.468 [INFO][5805] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" iface="eth0" netns="" Jun 25 18:34:33.513449 containerd[1711]: 2024-06-25 18:34:33.468 [INFO][5805] k8s.go 615: Releasing IP address(es) ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:33.513449 containerd[1711]: 2024-06-25 18:34:33.468 [INFO][5805] utils.go 188: Calico CNI releasing IP address ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:33.513449 containerd[1711]: 2024-06-25 18:34:33.499 [INFO][5811] ipam_plugin.go 411: Releasing address using handleID ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" HandleID="k8s-pod-network.8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:33.513449 containerd[1711]: 2024-06-25 18:34:33.499 [INFO][5811] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:34:33.513449 containerd[1711]: 2024-06-25 18:34:33.499 [INFO][5811] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:34:33.513449 containerd[1711]: 2024-06-25 18:34:33.509 [WARNING][5811] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" HandleID="k8s-pod-network.8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:33.513449 containerd[1711]: 2024-06-25 18:34:33.509 [INFO][5811] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" HandleID="k8s-pod-network.8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Workload="ci--4012.0.0--a--71b05979e1-k8s-calico--kube--controllers--bc465bdb8--f2lb6-eth0" Jun 25 18:34:33.513449 containerd[1711]: 2024-06-25 18:34:33.511 [INFO][5811] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:34:33.513449 containerd[1711]: 2024-06-25 18:34:33.512 [INFO][5805] k8s.go 621: Teardown processing complete. ContainerID="8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e" Jun 25 18:34:33.516022 containerd[1711]: time="2024-06-25T18:34:33.513986697Z" level=info msg="TearDown network for sandbox \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\" successfully" Jun 25 18:34:33.526244 containerd[1711]: time="2024-06-25T18:34:33.526194275Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:34:33.527226 containerd[1711]: time="2024-06-25T18:34:33.526451074Z" level=info msg="RemovePodSandbox \"8fa1ee1bdce7dc0d1cf36cd6cbf312292c0d5cea30f2803b50959b87add03e7e\" returns successfully" Jun 25 18:34:33.796308 kubelet[3193]: I0625 18:34:33.796238 3193 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:34:33.797128 kubelet[3193]: I0625 18:34:33.796580 3193 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:34:49.568239 kubelet[3193]: I0625 18:34:49.567955 3193 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:34:59.902337 systemd[1]: Started sshd@7-10.200.20.27:22-10.200.16.10:54536.service - OpenSSH per-connection server daemon (10.200.16.10:54536). Jun 25 18:35:00.383723 sshd[5924]: Accepted publickey for core from 10.200.16.10 port 54536 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:00.386003 sshd[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:00.389861 systemd-logind[1685]: New session 10 of user core. Jun 25 18:35:00.396478 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:35:00.796455 sshd[5924]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:00.799056 systemd[1]: sshd@7-10.200.20.27:22-10.200.16.10:54536.service: Deactivated successfully. Jun 25 18:35:00.801011 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:35:00.802367 systemd-logind[1685]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:35:00.803922 systemd-logind[1685]: Removed session 10. Jun 25 18:35:05.881474 systemd[1]: Started sshd@8-10.200.20.27:22-10.200.16.10:52558.service - OpenSSH per-connection server daemon (10.200.16.10:52558). 
Jun 25 18:35:06.312135 sshd[5957]: Accepted publickey for core from 10.200.16.10 port 52558 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:06.313541 sshd[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:06.319366 systemd-logind[1685]: New session 11 of user core. Jun 25 18:35:06.320575 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:35:06.701752 sshd[5957]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:06.705484 systemd[1]: sshd@8-10.200.20.27:22-10.200.16.10:52558.service: Deactivated successfully. Jun 25 18:35:06.707598 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:35:06.708546 systemd-logind[1685]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:35:06.709769 systemd-logind[1685]: Removed session 11. Jun 25 18:35:11.787933 systemd[1]: Started sshd@9-10.200.20.27:22-10.200.16.10:52572.service - OpenSSH per-connection server daemon (10.200.16.10:52572). Jun 25 18:35:12.268042 sshd[5978]: Accepted publickey for core from 10.200.16.10 port 52572 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:12.269374 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:12.273105 systemd-logind[1685]: New session 12 of user core. Jun 25 18:35:12.277316 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:35:12.669966 sshd[5978]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:12.675687 systemd[1]: sshd@9-10.200.20.27:22-10.200.16.10:52572.service: Deactivated successfully. Jun 25 18:35:12.678833 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:35:12.680388 systemd-logind[1685]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:35:12.681285 systemd-logind[1685]: Removed session 12. Jun 25 18:35:17.753668 systemd[1]: Started sshd@10-10.200.20.27:22-10.200.16.10:58252.service - OpenSSH per-connection server daemon (10.200.16.10:58252). Jun 25 18:35:18.188562 sshd[5998]: Accepted publickey for core from 10.200.16.10 port 58252 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:18.189586 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:18.193443 systemd-logind[1685]: New session 13 of user core. Jun 25 18:35:18.201321 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:35:18.568382 sshd[5998]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:18.571740 systemd[1]: sshd@10-10.200.20.27:22-10.200.16.10:58252.service: Deactivated successfully. Jun 25 18:35:18.573736 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:35:18.574700 systemd-logind[1685]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:35:18.575585 systemd-logind[1685]: Removed session 13. Jun 25 18:35:18.660079 systemd[1]: Started sshd@11-10.200.20.27:22-10.200.16.10:58260.service - OpenSSH per-connection server daemon (10.200.16.10:58260). Jun 25 18:35:19.082790 sshd[6012]: Accepted publickey for core from 10.200.16.10 port 58260 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:19.084141 sshd[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:19.088441 systemd-logind[1685]: New session 14 of user core. Jun 25 18:35:19.092304 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 18:35:20.115423 sshd[6012]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:20.119144 systemd[1]: sshd@11-10.200.20.27:22-10.200.16.10:58260.service: Deactivated successfully. Jun 25 18:35:20.121033 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:35:20.123656 systemd-logind[1685]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:35:20.124771 systemd-logind[1685]: Removed session 14. Jun 25 18:35:20.200022 systemd[1]: Started sshd@12-10.200.20.27:22-10.200.16.10:58266.service - OpenSSH per-connection server daemon (10.200.16.10:58266). Jun 25 18:35:20.666626 sshd[6045]: Accepted publickey for core from 10.200.16.10 port 58266 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:20.667987 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:20.672130 systemd-logind[1685]: New session 15 of user core. Jun 25 18:35:20.680331 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:35:21.075123 sshd[6045]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:21.078449 systemd-logind[1685]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:35:21.079162 systemd[1]: sshd@12-10.200.20.27:22-10.200.16.10:58266.service: Deactivated successfully. Jun 25 18:35:21.081292 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:35:21.082959 systemd-logind[1685]: Removed session 15. Jun 25 18:35:21.160917 kubelet[3193]: I0625 18:35:21.160704 3193 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:35:26.165470 systemd[1]: Started sshd@13-10.200.20.27:22-10.200.16.10:49946.service - OpenSSH per-connection server daemon (10.200.16.10:49946). Jun 25 18:35:26.630086 sshd[6063]: Accepted publickey for core from 10.200.16.10 port 49946 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:26.631471 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:26.635955 systemd-logind[1685]: New session 16 of user core. Jun 25 18:35:26.640359 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:35:27.045227 sshd[6063]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:27.049750 systemd-logind[1685]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:35:27.051900 systemd[1]: sshd@13-10.200.20.27:22-10.200.16.10:49946.service: Deactivated successfully. Jun 25 18:35:27.055378 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:35:27.057046 systemd-logind[1685]: Removed session 16. Jun 25 18:35:32.132479 systemd[1]: Started sshd@14-10.200.20.27:22-10.200.16.10:49950.service - OpenSSH per-connection server daemon (10.200.16.10:49950). Jun 25 18:35:32.599218 sshd[6081]: Accepted publickey for core from 10.200.16.10 port 49950 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:32.601141 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:32.608320 systemd-logind[1685]: New session 17 of user core. Jun 25 18:35:32.614413 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:35:33.014770 sshd[6081]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:33.018919 systemd[1]: sshd@14-10.200.20.27:22-10.200.16.10:49950.service: Deactivated successfully. Jun 25 18:35:33.022052 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:35:33.026339 systemd-logind[1685]: Session 17 logged out. 
Waiting for processes to exit. Jun 25 18:35:33.027495 systemd-logind[1685]: Removed session 17. Jun 25 18:35:38.104480 systemd[1]: Started sshd@15-10.200.20.27:22-10.200.16.10:56532.service - OpenSSH per-connection server daemon (10.200.16.10:56532). Jun 25 18:35:38.565559 sshd[6121]: Accepted publickey for core from 10.200.16.10 port 56532 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:38.567003 sshd[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:38.571643 systemd-logind[1685]: New session 18 of user core. Jun 25 18:35:38.575358 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:35:38.969015 sshd[6121]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:38.972951 systemd[1]: sshd@15-10.200.20.27:22-10.200.16.10:56532.service: Deactivated successfully. Jun 25 18:35:38.975782 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:35:38.977519 systemd-logind[1685]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:35:38.978710 systemd-logind[1685]: Removed session 18. Jun 25 18:35:44.057543 systemd[1]: Started sshd@16-10.200.20.27:22-10.200.16.10:56534.service - OpenSSH per-connection server daemon (10.200.16.10:56534). Jun 25 18:35:44.487138 sshd[6139]: Accepted publickey for core from 10.200.16.10 port 56534 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:44.488607 sshd[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:44.492797 systemd-logind[1685]: New session 19 of user core. Jun 25 18:35:44.501356 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:35:44.866277 sshd[6139]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:44.870131 systemd[1]: sshd@16-10.200.20.27:22-10.200.16.10:56534.service: Deactivated successfully. Jun 25 18:35:44.872385 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:35:44.873125 systemd-logind[1685]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:35:44.874248 systemd-logind[1685]: Removed session 19. Jun 25 18:35:44.957449 systemd[1]: Started sshd@17-10.200.20.27:22-10.200.16.10:54006.service - OpenSSH per-connection server daemon (10.200.16.10:54006). Jun 25 18:35:45.426263 sshd[6152]: Accepted publickey for core from 10.200.16.10 port 54006 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:45.427658 sshd[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:45.432393 systemd-logind[1685]: New session 20 of user core. Jun 25 18:35:45.435427 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:35:45.950764 sshd[6152]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:45.954680 systemd[1]: sshd@17-10.200.20.27:22-10.200.16.10:54006.service: Deactivated successfully. Jun 25 18:35:45.957393 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:35:45.958911 systemd-logind[1685]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:35:45.960877 systemd-logind[1685]: Removed session 20. Jun 25 18:35:46.039525 systemd[1]: Started sshd@18-10.200.20.27:22-10.200.16.10:54014.service - OpenSSH per-connection server daemon (10.200.16.10:54014). 
Jun 25 18:35:46.509243 sshd[6162]: Accepted publickey for core from 10.200.16.10 port 54014 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:46.510748 sshd[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:46.515413 systemd-logind[1685]: New session 21 of user core. Jun 25 18:35:46.521401 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:35:47.594864 sshd[6162]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:47.598946 systemd-logind[1685]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:35:47.600096 systemd[1]: sshd@18-10.200.20.27:22-10.200.16.10:54014.service: Deactivated successfully. Jun 25 18:35:47.603760 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:35:47.605085 systemd-logind[1685]: Removed session 21. Jun 25 18:35:47.683473 systemd[1]: Started sshd@19-10.200.20.27:22-10.200.16.10:54026.service - OpenSSH per-connection server daemon (10.200.16.10:54026). Jun 25 18:35:48.111308 sshd[6182]: Accepted publickey for core from 10.200.16.10 port 54026 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:48.112921 sshd[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:48.117225 systemd-logind[1685]: New session 22 of user core. Jun 25 18:35:48.123351 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:35:48.674337 sshd[6182]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:48.678468 systemd[1]: sshd@19-10.200.20.27:22-10.200.16.10:54026.service: Deactivated successfully. Jun 25 18:35:48.681280 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:35:48.682321 systemd-logind[1685]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:35:48.683886 systemd-logind[1685]: Removed session 22. Jun 25 18:35:48.761352 systemd[1]: Started sshd@20-10.200.20.27:22-10.200.16.10:54040.service - OpenSSH per-connection server daemon (10.200.16.10:54040). Jun 25 18:35:49.230916 sshd[6193]: Accepted publickey for core from 10.200.16.10 port 54040 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:49.234729 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:49.238846 systemd-logind[1685]: New session 23 of user core. Jun 25 18:35:49.247401 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:35:49.640599 sshd[6193]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:49.645550 systemd[1]: sshd@20-10.200.20.27:22-10.200.16.10:54040.service: Deactivated successfully. Jun 25 18:35:49.649341 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:35:49.651721 systemd-logind[1685]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:35:49.652795 systemd-logind[1685]: Removed session 23. Jun 25 18:35:54.730460 systemd[1]: Started sshd@21-10.200.20.27:22-10.200.16.10:54892.service - OpenSSH per-connection server daemon (10.200.16.10:54892). Jun 25 18:35:55.194824 sshd[6264]: Accepted publickey for core from 10.200.16.10 port 54892 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:35:55.196209 sshd[6264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:35:55.200334 systemd-logind[1685]: New session 24 of user core. Jun 25 18:35:55.204397 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 25 18:35:55.598093 sshd[6264]: pam_unix(sshd:session): session closed for user core Jun 25 18:35:55.601798 systemd[1]: sshd@21-10.200.20.27:22-10.200.16.10:54892.service: Deactivated successfully. Jun 25 18:35:55.605147 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:35:55.606185 systemd-logind[1685]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:35:55.607237 systemd-logind[1685]: Removed session 24. Jun 25 18:36:00.690458 systemd[1]: Started sshd@22-10.200.20.27:22-10.200.16.10:54904.service - OpenSSH per-connection server daemon (10.200.16.10:54904). Jun 25 18:36:01.114621 sshd[6287]: Accepted publickey for core from 10.200.16.10 port 54904 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:36:01.116096 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:36:01.120553 systemd-logind[1685]: New session 25 of user core. Jun 25 18:36:01.124341 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:36:01.497740 sshd[6287]: pam_unix(sshd:session): session closed for user core Jun 25 18:36:01.501870 systemd-logind[1685]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:36:01.502355 systemd[1]: sshd@22-10.200.20.27:22-10.200.16.10:54904.service: Deactivated successfully. Jun 25 18:36:01.504720 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:36:01.506826 systemd-logind[1685]: Removed session 25. Jun 25 18:36:06.581691 systemd[1]: Started sshd@23-10.200.20.27:22-10.200.16.10:34124.service - OpenSSH per-connection server daemon (10.200.16.10:34124). Jun 25 18:36:07.044690 sshd[6320]: Accepted publickey for core from 10.200.16.10 port 34124 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:36:07.046027 sshd[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:36:07.049645 systemd-logind[1685]: New session 26 of user core. Jun 25 18:36:07.055314 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 18:36:07.438227 sshd[6320]: pam_unix(sshd:session): session closed for user core Jun 25 18:36:07.442427 systemd-logind[1685]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:36:07.443253 systemd[1]: sshd@23-10.200.20.27:22-10.200.16.10:34124.service: Deactivated successfully. Jun 25 18:36:07.446160 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:36:07.448013 systemd-logind[1685]: Removed session 26. Jun 25 18:36:12.517113 systemd[1]: Started sshd@24-10.200.20.27:22-10.200.16.10:34140.service - OpenSSH per-connection server daemon (10.200.16.10:34140). Jun 25 18:36:12.944479 sshd[6337]: Accepted publickey for core from 10.200.16.10 port 34140 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:36:12.945650 sshd[6337]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:36:12.950206 systemd-logind[1685]: New session 27 of user core. Jun 25 18:36:12.956320 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 25 18:36:13.321383 sshd[6337]: pam_unix(sshd:session): session closed for user core Jun 25 18:36:13.324786 systemd[1]: sshd@24-10.200.20.27:22-10.200.16.10:34140.service: Deactivated successfully. Jun 25 18:36:13.327059 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 18:36:13.328541 systemd-logind[1685]: Session 27 logged out. Waiting for processes to exit. Jun 25 18:36:13.330224 systemd-logind[1685]: Removed session 27. 
Jun 25 18:36:18.403435 systemd[1]: Started sshd@25-10.200.20.27:22-10.200.16.10:51690.service - OpenSSH per-connection server daemon (10.200.16.10:51690). Jun 25 18:36:18.828053 sshd[6352]: Accepted publickey for core from 10.200.16.10 port 51690 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:36:18.829412 sshd[6352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:36:18.834411 systemd-logind[1685]: New session 28 of user core. Jun 25 18:36:18.843377 systemd[1]: Started session-28.scope - Session 28 of User core. Jun 25 18:36:19.245988 sshd[6352]: pam_unix(sshd:session): session closed for user core Jun 25 18:36:19.250021 systemd[1]: sshd@25-10.200.20.27:22-10.200.16.10:51690.service: Deactivated successfully. Jun 25 18:36:19.251814 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 18:36:19.253029 systemd-logind[1685]: Session 28 logged out. Waiting for processes to exit. Jun 25 18:36:19.254216 systemd-logind[1685]: Removed session 28. Jun 25 18:36:24.334455 systemd[1]: Started sshd@26-10.200.20.27:22-10.200.16.10:51700.service - OpenSSH per-connection server daemon (10.200.16.10:51700). Jun 25 18:36:24.793711 sshd[6391]: Accepted publickey for core from 10.200.16.10 port 51700 ssh2: RSA SHA256:SBKABtiW8KQd2cig87HG/D77J5dFhsUPSrWFjAykmvs Jun 25 18:36:24.795148 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:36:24.799640 systemd-logind[1685]: New session 29 of user core. Jun 25 18:36:24.804441 systemd[1]: Started session-29.scope - Session 29 of User core. Jun 25 18:36:25.197517 sshd[6391]: pam_unix(sshd:session): session closed for user core Jun 25 18:36:25.201676 systemd[1]: sshd@26-10.200.20.27:22-10.200.16.10:51700.service: Deactivated successfully. Jun 25 18:36:25.203824 systemd[1]: session-29.scope: Deactivated successfully. Jun 25 18:36:25.204952 systemd-logind[1685]: Session 29 logged out. Waiting for processes to exit. Jun 25 18:36:25.206031 systemd-logind[1685]: Removed session 29. Jun 25 18:36:49.120909 systemd[1]: run-containerd-runc-k8s.io-6c3d0b5589a9302f14d1284d76ab393c199a0e048acaf1aa320f11727edf026e-runc.wANGwZ.mount: Deactivated successfully. Jun 25 18:37:32.222207 update_engine[1688]: I0625 18:37:32.221737 1688 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 25 18:37:32.222207 update_engine[1688]: I0625 18:37:32.221774 1688 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 25 18:37:32.222207 update_engine[1688]: I0625 18:37:32.222021 1688 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 25 18:37:32.229377 update_engine[1688]: I0625 18:37:32.223201 1688 omaha_request_params.cc:62] Current group set to alpha Jun 25 18:37:32.229377 update_engine[1688]: I0625 18:37:32.223321 1688 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 25 18:37:32.229377 update_engine[1688]: I0625 18:37:32.223330 1688 update_attempter.cc:643] Scheduling an action processor start. 
Jun 25 18:37:32.229377 update_engine[1688]: I0625 18:37:32.223345 1688 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 25 18:37:32.229377 update_engine[1688]: I0625 18:37:32.223371 1688 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 25 18:37:32.229377 update_engine[1688]: I0625 18:37:32.223414 1688 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 25 18:37:32.229377 update_engine[1688]: I0625 18:37:32.223418 1688 omaha_request_action.cc:272] Request: Jun 25 18:37:32.229377 update_engine[1688]: Jun 25 18:37:32.229377 update_engine[1688]: Jun 25 18:37:32.229377 update_engine[1688]: Jun 25 18:37:32.229377 update_engine[1688]: Jun 25 18:37:32.229377 update_engine[1688]: Jun 25 18:37:32.229377 update_engine[1688]: Jun 25 18:37:32.229377 update_engine[1688]: Jun 25 18:37:32.229377 update_engine[1688]: Jun 25 18:37:32.229377 update_engine[1688]: I0625 18:37:32.223423 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 18:37:32.229377 update_engine[1688]: I0625 18:37:32.225558 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 18:37:32.229377 update_engine[1688]: I0625 18:37:32.225922 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 18:37:32.229821 locksmithd[1745]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 25 18:37:32.242367 update_engine[1688]: E0625 18:37:32.242249 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 18:37:32.242367 update_engine[1688]: I0625 18:37:32.242316 1688 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 25 18:37:42.164720 update_engine[1688]: I0625 18:37:42.164279 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 18:37:42.164720 update_engine[1688]: I0625 18:37:42.164461 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 18:37:42.164720 update_engine[1688]: I0625 18:37:42.164685 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 18:37:42.174803 update_engine[1688]: E0625 18:37:42.174719 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 18:37:42.174803 update_engine[1688]: I0625 18:37:42.174780 1688 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 25 18:37:52.165201 update_engine[1688]: I0625 18:37:52.164709 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 18:37:52.165201 update_engine[1688]: I0625 18:37:52.164898 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 18:37:52.165201 update_engine[1688]: I0625 18:37:52.165118 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 25 18:37:52.169490 update_engine[1688]: E0625 18:37:52.169427 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 18:37:52.169490 update_engine[1688]: I0625 18:37:52.169473 1688 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jun 25 18:38:02.161480 update_engine[1688]: I0625 18:38:02.161221 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 18:38:02.162061 update_engine[1688]: I0625 18:38:02.161798 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 18:38:02.162061 update_engine[1688]: I0625 18:38:02.162029 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 25 18:38:02.185774 update_engine[1688]: E0625 18:38:02.185393 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 18:38:02.185774 update_engine[1688]: I0625 18:38:02.185451 1688 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 25 18:38:02.185774 update_engine[1688]: I0625 18:38:02.185456 1688 omaha_request_action.cc:617] Omaha request response: Jun 25 18:38:02.185774 update_engine[1688]: E0625 18:38:02.185546 1688 omaha_request_action.cc:636] Omaha request network transfer failed. Jun 25 18:38:02.185774 update_engine[1688]: I0625 18:38:02.185561 1688 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jun 25 18:38:02.185774 update_engine[1688]: I0625 18:38:02.185565 1688 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 18:38:02.185774 update_engine[1688]: I0625 18:38:02.185568 1688 update_attempter.cc:306] Processing Done. Jun 25 18:38:02.185774 update_engine[1688]: E0625 18:38:02.185581 1688 update_attempter.cc:619] Update failed. Jun 25 18:38:02.185774 update_engine[1688]: I0625 18:38:02.185585 1688 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jun 25 18:38:02.185774 update_engine[1688]: I0625 18:38:02.185587 1688 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jun 25 18:38:02.185774 update_engine[1688]: I0625 18:38:02.185590 1688 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jun 25 18:38:02.186144 locksmithd[1745]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jun 25 18:38:02.186442 update_engine[1688]: I0625 18:38:02.186223 1688 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 25 18:38:02.186442 update_engine[1688]: I0625 18:38:02.186250 1688 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 25 18:38:02.186442 update_engine[1688]: I0625 18:38:02.186254 1688 omaha_request_action.cc:272] Request: Jun 25 18:38:02.186442 update_engine[1688]: Jun 25 18:38:02.186442 update_engine[1688]: Jun 25 18:38:02.186442 update_engine[1688]: Jun 25 18:38:02.186442 update_engine[1688]: Jun 25 18:38:02.186442 update_engine[1688]: Jun 25 18:38:02.186442 update_engine[1688]: Jun 25 18:38:02.186442 update_engine[1688]: I0625 18:38:02.186257 1688 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 25 18:38:02.186442 update_engine[1688]: I0625 18:38:02.186378 1688 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 25 18:38:02.186681 update_engine[1688]: I0625 18:38:02.186574 1688 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 25 18:38:02.195028 update_engine[1688]: E0625 18:38:02.194998 1688 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 25 18:38:02.195097 update_engine[1688]: I0625 18:38:02.195055 1688 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 25 18:38:02.195097 update_engine[1688]: I0625 18:38:02.195060 1688 omaha_request_action.cc:617] Omaha request response: Jun 25 18:38:02.195097 update_engine[1688]: I0625 18:38:02.195065 1688 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 18:38:02.195097 update_engine[1688]: I0625 18:38:02.195067 1688 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 25 18:38:02.195097 update_engine[1688]: I0625 18:38:02.195070 1688 update_attempter.cc:306] Processing Done. Jun 25 18:38:02.195097 update_engine[1688]: I0625 18:38:02.195075 1688 update_attempter.cc:310] Error event sent. Jun 25 18:38:02.195097 update_engine[1688]: I0625 18:38:02.195080 1688 update_check_scheduler.cc:74] Next update check in 45m27s Jun 25 18:38:02.195466 locksmithd[1745]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jun 25 18:38:32.584581 kubelet[3193]: W0625 18:38:32.584284 3193 machine.go:65] Cannot read vendor id correctly, set empty.